The Competition in Contracting Act (CICA) of 1984 requires agencies to obtain full and open competition through the use of competitive procedures in their procurement activities unless otherwise authorized by law. However, Congress also recognized that in certain situations contracts may need to be awarded noncompetitively—that is, without full and open competition. Generally, these contracts must be supported by written justifications and approvals that contain sufficient facts and rationale to justify the use of a specific exception to full and open competition, such as when the contractor is the only source capable of performing the work. Sole-source contracts awarded under the 8(a) program fall under one of these exceptions but were not previously required to include a justification. (Pub. L. No. 111-84, § 811 (2009).) See table 1, which compares the required elements of CICA and 8(a) justifications for sole-source contracts.

While the required elements of 8(a) and CICA justifications differ, both types of justifications are generally required to be published on the federal government’s website for announcing contract opportunities and on the agency’s website after the contract award is made. In addition, the official who must approve an 8(a) justification for a contract over $20 million would be the same official who must approve a CICA justification of the same amount. The approving official is determined by the estimated total dollar value of the proposed contract, as outlined in the FAR. The head of the procuring activity or the agency’s senior procurement executive generally approves 8(a) justifications. Figure 1 shows the competition thresholds and current sole-source justification requirements under the 8(a) program.

Prior to awarding an 8(a) contract, whether sole-source or competitive, agencies are required to submit an offer letter to SBA identifying the requirement—that is, what goods or services are being procured—as well as any procurement history for the requirement, the estimated dollar amount, and the name of the particular 8(a) firm if intending to award the contract on a sole-source basis. A business opportunity specialist within an 8(a) program district office is to respond with a letter stating whether SBA has accepted the procurement into the 8(a) program after confirming the firm’s eligibility to receive the contract and considering factors that could prohibit SBA’s acceptance of the procurement. SBA assesses a firm’s eligibility based on a number of criteria, including the firm’s size and whether the procurement is consistent with the firm’s business plan. Under the new 8(a) justification requirement, SBA may not accept a sole-source contract over $20 million for negotiation under the 8(a) program unless the procuring agency has completed an 8(a) justification in accordance with the FAR. Partnership agreements between the procuring agencies and SBA outline the responsibilities of both parties in the 8(a) contracting process. These agreements generally delegate SBA’s contract execution function to the agencies after SBA has completed initial acceptance of the procurement into the program.

The FAR Council oversees development and maintenance of the FAR. Its membership consists of the Administrator of the Office of Federal Procurement Policy (OFPP), the Secretary of Defense, the Administrator of the National Aeronautics and Space Administration, and the Administrator of the General Services Administration. The FAR Council issues rules to implement changes to the FAR that are mandated by law.
Typically, the first step is a proposed rule, which presents the proposed text in the Federal Register and seeks written comments. In some cases, interim rules are used to implement immediate changes to the FAR and include the text of the revision. Proposed and interim rules can be amended by final rules, which make changes to the FAR after consideration of public comments.

The FAR Council did not implement the new 8(a) justification requirement in the FAR by the mandatory deadline set in law. Section 811 of the NDAA for Fiscal Year 2010 required that the FAR be amended within 180 days of the statute’s enactment date to require justifications for 8(a) sole-source contracts over $20 million. Instead, 504 days elapsed between the enactment of the law on October 28, 2009, and the FAR change to implement it on March 16, 2011. In August 2010, almost 1 year after enactment of section 811, the FAR Council issued a notice announcing plans to hold three tribal consultation meetings to obtain comments on implementation of this section from the tribal communities. The Council held public meetings during October 2010 in Washington, D.C.; Albuquerque, New Mexico; and Fairbanks, Alaska. After receiving comments, the FAR Council published the rule addressing the 8(a) justification requirements as an interim rule, rather than a proposed rule, because the statutory date for issuance of regulations had already passed. OFPP officials who were involved in the implementation of this rule explained that the primary reason for the FAR Council’s delay was establishing a process for, and holding, tribal consultations. According to the OFPP officials, the FAR Council did not have previous experience conducting such consultations, and developing a process for this delayed the announcement of the meetings. Figure 2 shows key dates in the enactment and implementation of this provision. In its announcement of the planned tribal consultation meetings, the Council cited an executive order that directs certain federal agencies to consult with Indian tribes on policies that have tribal implications. The Council noted that the consultations provided for in the order are a critical component of a sound and productive federal-tribal relationship.

Section 811 of the NDAA for Fiscal Year 2010 did not require agencies to implement the new justification requirement until it was implemented in the FAR through an interim or final rule, and contracting and policy officials from the agencies involved in our review confirmed that they waited for the FAR revision. Almost 325 days elapsed between the 180-day mandatory deadline after enactment (April 26, 2010) and FAR implementation on March 16, 2011. During this period, according to FPDS-NG data, agencies awarded 42 sole-source 8(a) contracts with anticipated values over $20 million—with a total value of over $2.3 billion—that would have been subject to the new justification requirement if the FAR Council had implemented the change by the statutory deadline. Figure 3 illustrates the number of such contracts awarded per fiscal year quarter in the last 4 years and key dates in the implementation of the new justification requirement.

According to FPDS-NG data, 72 contracts had a reported value of more than $20 million in the period from the October 28, 2009, enactment of the statute establishing the 8(a) justification requirement through March 31, 2012. (See appendix II for the number and value of contracts by agency.) However, we found inaccuracies in the data on reported contract value.
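The elapsed-day figures in the timeline above follow directly from the statutory dates; a quick check using Python's standard datetime module reproduces them:

```python
from datetime import date, timedelta

enactment = date(2009, 10, 28)        # NDAA for Fiscal Year 2010 signed into law
far_change = date(2011, 3, 16)        # interim FAR rule implementing section 811

deadline = enactment + timedelta(days=180)
print(deadline)                       # 2010-04-26, the statutory deadline
print((far_change - enactment).days)  # 504 days from enactment to the FAR change
print((far_change - deadline).days)   # 324 days past the deadline ("almost 325")
```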
To understand the trends in award of 8(a) sole-source contracts with reported values greater than $20 million, we also analyzed FPDS-NG data from fiscal year 2008 through the last full year of data available, fiscal year 2011. Compared to fiscal years 2008 through 2010, the number and value of these contracts declined significantly in fiscal year 2011, when only 20 were awarded, as shown in figure 4. Although we found the FPDS-NG data on total contract value overall to be sufficiently reliable to use for our analysis, we found several cases where the Base and All Options data element had been inaccurately reported by the agencies as being much lower than the actual value of the contract. For instance, the Army had awarded a contract worth about $84 million according to contract documents, but its reported value in FPDS-NG was only $24 million. This data element is intended to reflect the total contract value at the time of award, including all options. For indefinite delivery indefinite quantity (IDIQ) contracts, the FPDS-NG data dictionary stipulates that this element is the estimated value for all orders expected to be placed against the contract. Although this is a required field in FPDS-NG for all awards, we found five contracts awarded since October 28, 2009, that implausibly listed a total value of zero. For example, two related Army contracts were both listed as having a value of zero, but when we reviewed the contract files, we found that their total anticipated value was actually $350 million. GSA officials who are responsible for managing the FPDS-NG data system told us that there should not be any instances in which a contract award would have a value of zero. The errors in this data element make it difficult to accurately determine the extent to which agencies are awarding sole-source 8(a) contracts valued over $20 million.

From March 16, 2011, through March 31, 2012, 14 sole-source 8(a) contracts worth over $20 million were awarded by five agencies. Only three of those contracts—two awarded by the Air Force and one by the State Department—included 8(a) justifications. The agencies awarding the remaining 11 contracts did not comply with the new justification requirement, either because they were not aware of the requirement and did not prepare a justification, or because they were confused and incorrectly used a CICA justification, as summarized in figure 5. Contracting officials are required to ensure that all requirements of law and regulation are met before awarding any contract, and as a result, they should keep abreast of changes to the FAR. Yet, for five of the 11 contracts, contracting officials did not comply with the new justification requirement because they were not aware of it.

A GSA regional office awarded a sole-source contract for support services to an 8(a) firm in October 2011, with an anticipated value of $40 million. No justification was completed. According to GSA officials, the contracting officer was unaware of the justification requirement at the time of award. As a result of our inquiry, GSA officials stated that they will not exercise options on the contract and are planning to award a replacement contract through an 8(a) competitive process. The regional office also plans to issue guidance to acquisition staff regarding the justification requirement.

The Naval Sea Systems Command awarded a contract for information technology services worth about $40.5 million, but did not prepare an 8(a) justification.
According to Command contracting officials, they were unaware of the requirement at the time the contract was awarded in July 2011. The Command issued guidance in December 2011 requiring that justifications be prepared not only for 8(a) sole-source contracts above the $20 million threshold, but also for any such contracts above the 8(a) competition threshold of $4 million (or $6.5 million for manufacturing contracts). The contracting officials said that they have begun planning to award the successor contract through a competition among 8(a) firms.

Officials at a U.S. Army Corps of Engineers contracting office were aware of increased scrutiny of 8(a) sole-source contracts, but were not aware of the justification requirement itself. They had received a January 2011 memorandum from Army acquisition executives noting the forthcoming justification requirement and calling for contracting officials to limit the use of 8(a) sole-source contracts over $20 million. As a result, when awarding a $35 million 8(a) sole-source contract for museum relocation services in May 2011, Army Corps contracting officials prepared a memorandum explaining the decision to exceed the $20 million threshold, but it did not meet the requirements of an 8(a) justification.

The Army awarded two sole-source IDIQ contracts for engineering and technical support services in June 2011, each of which had a value over $20 million, but did not prepare 8(a) justifications for either contract, as required. These contracts were awarded through a single solicitation to two different firms, with a total value of $350 million. Contracting officials stated that they were not aware of the new justification requirement. Furthermore, we found that these two Army contracts were awarded improperly because SBA had not reviewed the eligibility of the firms and the procurement for the 8(a) program. The contract file documentation states that the contracts were 8(a) sole-source, yet the agency did not send an offer letter to SBA. The contracting officer had contacted an SBA official outside of the 8(a) program, thinking that this was the proper way to offer the procurement into the 8(a) program. But without an offer letter and subsequent SBA acceptance into the program, there was no way to ensure that the firm was eligible to receive the award or that the procurement was properly accepted into the program. We brought this issue to the attention of SBA headquarters officials, who expressed concern and stated they would look into it.

Even in cases where contracting officials were aware of the new 8(a) justification requirement, they did not always correctly implement it, due to confusion about what the FAR requires. For example, we found four cases where officials, having determined that their contracts were subject to the new justification requirement, prepared CICA justifications rather than 8(a) justifications. According to the contracting officer for one such contract at the State Department, the preparation of the CICA justification was a result of the rush of end-of-fiscal-year work and the fact that 8(a) justifications were a new requirement they had not dealt with previously. Likewise, a contracting officer at the Army Contracting Command, realizing that 8(a) sole-source contracts now require a justification, prepared a CICA justification instead of an 8(a) justification.
The command’s competition advocate, who reviews justifications for sole-source contracts, initially advised the contracting officer that a justification was not required. According to the contracting officer, he learned shortly before contract award that a justification was in fact required, but he was not aware that the elements required in an 8(a) justification were different from those in a CICA justification.

In one case at the Drug Enforcement Administration (DEA), officials were aware of the justification requirement but decided not to complete one because their acquisition process began before the FAR was amended. SBA had accepted the procurement into the 8(a) program in January 2011, before the 8(a) justification requirement was implemented in the FAR. However, the $448 million contract, for administrative support services, was awarded on June 14, 2011. A justification was required because the contract was awarded after the FAR implementation date. A memorandum in the contract file dated May 15, 2011, explained DEA’s rationale for not preparing a justification, stating that it would not be constructive to revisit the solicitation process in order to prepare a justification because the negotiations with the firm were nearing conclusion.

For one Department of the Interior contract, officials were unsure whether the 8(a) justification requirement applied—in part because of ambiguities in the regulations regarding whether 8(a) justifications should be prepared when class justifications already exist—and thus did not prepare one. A class justification generally covers multiple contracts within a program or sets of programs. This contract was awarded by Interior on behalf of a DOD program office that had a class CICA justification in place, which permitted the award of sole-source contracts to support the program’s work. Contracting officials for this contract were unsure whether the class justification would preclude the need for a separate 8(a) justification for this sole-source contract award. The FAR only states that contracting officers must ensure that each contract action taken under the authority of the class justification is within its scope; it does not address whether a separate 8(a) justification would be required in this situation.

This contract illustrates another source of confusion—how to proceed when the anticipated value of a contract changes during the negotiation that occurs between SBA’s acceptance of the procurement and contract award. The FAR requires an 8(a) justification at two points: before SBA can accept the contract for negotiation under the 8(a) program, and at the time of contract award. The potential for confusion arises because a contract’s value can change during the negotiation process, and the FAR does not address scenarios in which anticipated contract values rise above or fall below the $20 million threshold between SBA’s acceptance of the procurement for negotiation and the award of the contract. For the contract awarded by the Department of the Interior, at the time SBA accepted the procurement, the anticipated value was slightly under the $20 million threshold. However, by the time the contract was awarded, estimated costs had increased to $21.4 million. We also reviewed a DOD contract that illustrates the opposite situation, but which was not required to have an 8(a) justification because the offer letter was sent before the requirement was implemented in the FAR.
At the time the procurement was accepted by SBA under the 8(a) program, its anticipated value was about $30 million. The estimated value of the contract dropped to $18.3 million by the time of award.

The FAR also does not address whether the new 8(a) justification is needed when out-of-scope modifications are made on existing 8(a) sole-source contracts. Generally, agencies may not modify contracts to add products or services not anticipated in the original scope without a separate sole-source justification. In some cases, however, agencies have determined that the flexibilities of 8(a) sole-source contracts awarded to firms owned by ANCs or Indian tribes allowed them to make such modifications without preparing a justification. For example, in our 2006 report on 8(a) contracting (GAO-06-399), we found that the Department of Energy had added a number of new types of work to a contract, nearly tripling its value, and the contracting officer cited the flexibilities of the 8(a) sole-source contract awarded to an ANC-owned firm as the reason he was able to do so. We did not identify any such modifications in our present review; however, some contracting officials told us that it was not clear to them if a justification would be required for modifications to 8(a) sole-source contracts. DEA contracting officials cited the ability to make out-of-scope modifications as one of the attractive features of awarding 8(a) sole-source contracts to firms owned by ANCs or Indian tribes, but said they would require a justification for any modification of $20 million or more.

Separately, one Army Corps of Engineers contracting office awarded four 8(a) sole-source contracts valued at exactly $20 million each. Officials stated that they were not aware of the new 8(a) justification requirement at the time they awarded these contracts. These awards were not subject to the 8(a) justification requirement, as it only applies to contracts over $20 million.

SBA does not have a process in place to confirm that 8(a) justifications are present. The FAR states that the procuring agency must have completed a justification before SBA can accept for negotiation an 8(a) sole-source contract over $20 million, but it does not specify what steps SBA should take to confirm the presence of an 8(a) justification. We found that in most cases, SBA did not discuss the new justification requirements in its correspondence to agencies. During our review, we found a case where an agency had improperly awarded an 8(a) contract, a situation that was not detected by the SBA district official who reviewed the sole-source justification. Army contracting officials told us that an SBA district office business opportunity specialist followed up after receiving an 8(a) offer letter from the Army, to request a sole-source justification. The Army provided SBA with a justification—although it was again a CICA justification, as opposed to an 8(a) justification—and the SBA official noted that the justification requirement had been met. However, the SBA official did not recognize and respond to information showing that the contract was to be awarded to a sister subsidiary owned by the same tribal entity as the incumbent firm—a practice prohibited by SBA’s 8(a) regulations. Specifically, when offering this procurement to the 8(a) program, the Army stated that there was no acquisition history, yet the justification clearly stated that the incumbent and proposed 8(a) firms were owned by the same tribal entity. Hence, this contract was improperly awarded to the sister subsidiary.
When we informed SBA headquarters officials of this situation, they expressed concern and indicated they would follow up with the business opportunity specialist.

To highlight the 8(a) sole-source justification requirement, SBA has revised its partnership agreements to reflect that the procuring agency is responsible for completing the justification. However, SBA’s district officials also have an important role to play in ensuring that the justifications are properly prepared. SBA officials said they were not sure why the district officials did not confirm the presence of justifications in most of the cases we reviewed, noting that the FAR change is relatively recent and that it may take time for all staff to learn of the requirement. The officials added that they are revising their operating procedures and training curricula to reflect the 8(a) justification requirement. These actions, when implemented, will be useful in highlighting the justification requirement for SBA district officials. However, SBA has yet to convey to its district officials the practical means of ensuring that the procuring agencies have completed the justification.

Agencies have generally not complied with the justification requirement for 8(a) sole-source contracts. This slow start may be due in part to the relatively recent implementation of the requirement; however, we also found a lack of awareness and confusion among contracting officials and SBA district officials. In some situations the FAR is not clear about whether a justification is required. This includes cases where there is a class justification already in place, when the value of a contract rises above or falls below $20 million during the negotiation process, or when out-of-scope modifications are made to 8(a) sole-source contracts. Clarifying guidance is needed to help ensure that agencies are applying the justification requirement consistently. While agencies are required to prepare justifications in accordance with the FAR, SBA is required, in practice, to confirm that these justifications are in place. SBA does not currently have a process in place to do so. Finally, because of shortcomings in the data agencies are entering into FPDS-NG regarding the total value of contracts at the time of award, agencies lack the information that would allow them to monitor how many sole-source 8(a) contracts are awarded over the $20 million threshold.

To help mitigate future confusion regarding justifications for 8(a) sole-source contracts over $20 million, we recommend that the Administrator of the Office of Federal Procurement Policy, in consultation with the FAR Council, promulgate guidance to:

- Clarify whether an 8(a) justification is required for 8(a) contracts that are subject to a pre-existing CICA class justification.

- Provide additional information on actions contracting officers should take to comply with the justification requirement when the contract value rises above or falls below $20 million between SBA’s acceptance of the contract for negotiation under the 8(a) program and the contract award.

- Clarify whether and under what circumstances a separate sole-source justification is necessary for out-of-scope modifications to 8(a) sole-source contracts.
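To make concrete the threshold question the second recommendation addresses, the sketch below encodes only what this report describes: the requirement applies to awards made after the March 16, 2011, FAR change, and the FAR is silent when the anticipated value crosses $20 million between SBA's acceptance and award. The function, its return labels, and the example figures are illustrative assumptions, not language drawn from the FAR.

```python
from datetime import date

FAR_IMPLEMENTATION = date(2011, 3, 16)  # date the interim FAR rule took effect
THRESHOLD = 20_000_000                  # 8(a) justification threshold in dollars

def justification_required(value_at_acceptance, value_at_award, award_date):
    """Illustrative reading of the two check points described in this report.

    Returns 'yes', 'no', or 'unclear'. The 'unclear' branch is the gap the
    recommendation targets: the FAR does not say what to do when the
    anticipated value crosses $20 million between acceptance and award.
    """
    if award_date < FAR_IMPLEMENTATION:
        return "no"  # the requirement applies only after the FAR change
    over_at_acceptance = value_at_acceptance > THRESHOLD
    over_at_award = value_at_award > THRESHOLD
    if over_at_acceptance and over_at_award:
        return "yes"
    if not over_at_acceptance and not over_at_award:
        return "no"
    return "unclear"  # value crossed the threshold during negotiation

# The Interior contract described above (acceptance value and award date are
# illustrative; the report says only "slightly under" $20 million):
print(justification_required(19_900_000, 21_400_000, date(2011, 6, 1)))  # unclear
```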
To help ensure that Small Business Administration officials meet FAR requirements for sole-source contracts over $20 million, we recommend that the Administrator of the Small Business Administration take the following two actions when revising operating procedures and training curricula:

- Include instructions to business opportunity specialists on the steps they are to take to confirm whether agencies have met the justification requirement, such as obtaining a copy of the justification from the agency.

- Include instructions to confirm that procuring agencies have prepared an 8(a) justification rather than a CICA justification.

To help ensure that federal procurement data provides accurate and complete information, we recommend that the Administrator of the General Services Administration implement controls in FPDS-NG to preclude agency officials from entering a value of zero dollars for the Base and All Options data element when the initial award of a contract is entered into the database.

We provided a draft of this report to SBA, OFPP, GSA, and the departments of Defense, the Interior, Justice, and State. We received written comments from SBA, which are reproduced in appendix III. SBA did not fully address our recommendations. In e-mail responses, OFPP and GSA generally agreed with our recommendations, and OFPP also included additional comments. DOD did not respond. The other agencies responded with no comment.

In its written response, SBA stated that the burden is on the procuring agencies to prepare the appropriate sole-source justification and that SBA would take actions to ensure that the agencies do so. For example, SBA plans to modify its partnership agreements to incorporate a requirement that the contracting officer certify that the justification has been completed. While these actions may help increase awareness of the justification requirement at the procuring agencies, they do not address SBA’s own responsibilities. As we discuss in the report, the FAR states that SBA may not accept for negotiation sole-source 8(a) contracts over $20 million unless the appropriate justification has been completed. SBA states that it is difficult to interpret the FAR as requiring SBA to verify the existence of the justification. We disagree. Logically, to meet the FAR requirement, SBA must confirm the existence of an 8(a) justification. Our recommendations were intended to help SBA’s business opportunity specialists understand how to comply with the FAR requirement.

In an e-mail response, OFPP generally agreed with our recommendations and asked that we reflect that the Administrator of OFPP should take the recommended actions in consultation with the FAR Council. We agreed and made that change. OFPP further noted that, when planning the tribal consultations to implement the 8(a) justification requirement, the FAR Council also considered the President’s Memorandum of November 5, 2009, which underscores the Administration’s commitment to regular and meaningful consultation with tribal officials in policy decisions that have tribal implications.

We are sending copies of this report to the Secretaries of Defense, the Interior, and State; the Attorney General; the Administrators of the Small Business Administration, the General Services Administration, and the Office of Federal Procurement Policy; and interested congressional committees. This report will also be available at no charge on GAO’s website at http://www.gao.gov.
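The control recommended to GSA above could take a form like the following sketch. FPDS-NG's actual field names, action types, and validation framework are not described in this report, so every identifier here is an assumption for illustration.

```python
def validate_base_and_all_options(value_dollars: float, action_type: str) -> float:
    """Reject a zero or negative total value on an initial award record.

    A sketch of the control recommended to GSA above, not FPDS-NG's actual
    implementation; the field and action-type names are assumptions.
    """
    if action_type == "initial_award" and value_dollars <= 0:
        raise ValueError(
            "Base and All Options value must be greater than zero when the "
            "initial award of a contract is entered into the database."
        )
    return value_dollars

# Example: a $40 million initial award passes through unchanged, while an
# initial award entered with a zero value would raise an error for correction.
validate_base_and_all_options(40_000_000, "initial_award")
```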
If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

The objectives of this review were to determine (1) the timeliness of actions taken to implement the 8(a) justification requirement in the Federal Acquisition Regulation (FAR); (2) the number of sole-source 8(a) contracts over $20 million that have been awarded since October 2009 and trends over time; and (3) the extent to which agencies have implemented the new justification requirement.

To assess the timeliness of the actions taken to incorporate the new justification requirement into the FAR, we reviewed the relevant interim and final rules published in the Federal Register. We also interviewed officials from the Office of Federal Procurement Policy (OFPP), as the Administrator of OFPP serves as chair of the Federal Acquisition Regulatory Council, which implements changes to the FAR. Additionally, to confirm agency officials’ statements to us that they did not include justifications in 8(a) sole-source contracts awarded after the October 28, 2009, enactment of the law but before its March 16, 2011, implementation in the FAR, we selected a judgmental sample of five such contracts. We selected those with the highest reported values in the Federal Procurement Data System-Next Generation (FPDS-NG) at agencies already within the scope of our review, and verified the absence of justifications with agency contracting officials. As stated in the report, Section 811 of the NDAA for Fiscal Year 2010 did not require agencies to implement the new justification requirement until it was implemented in the FAR.

To determine the number of 8(a) sole-source contracts over $20 million awarded in the last several years, we analyzed contract data from FPDS-NG for contracts awarded from October 1, 2007, through March 31, 2012. We took several measures to assess the reliability of this FPDS-NG data:

- We selected nine additional contracts to review for data reliability purposes. Among the 13 contracts identified in FPDS-NG as having values between $19.5 million and $20 million, we selected a judgmental sample of seven to review, including four contracts awarded by one Army Corps of Engineers contracting office worth exactly $20 million each. For these contracts, we reviewed information in the contract files to determine the anticipated total value of the contract at the time of award, and confirmed that all were equal to or under $20 million and thus not subject to 8(a) justification requirements.

- In addition, we conducted a statistical analysis of 8(a) sole-source contracts with a total value of less than $19.5 million, identifying contracts with high levels of correlation with characteristics of high-value 8(a) sole-source contracts, such as contract type and the type of service provided. Based on this analysis, we selected two additional contracts at entities already included in our review and reviewed relevant contract files to verify their value, and confirmed that both were under the $20 million threshold.

- We also calculated total obligations as of March 31, 2012, on the contracts in this data set as a further check against inaccuracies in the Base and All Options data element in FPDS-NG; a sketch of this cross-check follows the list.
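The obligations cross-check in the last measure above can be sketched as follows. The row layout, field names, and contract numbers are illustrative stand-ins for an FPDS-NG extract, not the system's actual schema: a contract whose cumulative obligations already exceed $20 million while its reported Base and All Options value sits at or below the threshold is a candidate for contract-file review.

```python
from collections import defaultdict

THRESHOLD = 20_000_000  # dollars

def flag_understated_contracts(transactions, reported_values):
    """Cross-check reported total values against cumulative obligations.

    transactions: iterable of (contract_id, obligation_dollars) rows.
    reported_values: dict mapping contract_id to its reported Base and All
    Options value. Both are illustrative stand-ins for FPDS-NG extracts.
    """
    obligations = defaultdict(float)
    for contract_id, amount in transactions:
        obligations[contract_id] += amount
    return [
        contract_id
        for contract_id, total in obligations.items()
        if total > THRESHOLD and reported_values.get(contract_id, 0) <= THRESHOLD
    ]

# Hypothetical contract numbers: a contract reported at zero but with
# $25 million already obligated is flagged for contract-file review.
rows = [("W912-001", 10_000_000), ("W912-001", 15_000_000), ("GS-07-002", 5_000_000)]
print(flag_understated_contracts(rows, {"W912-001": 0, "GS-07-002": 5_000_000}))
```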
Finally, we checked the data reported in FPDS-NG against information gathered in reviews of contract files for 14 contracts over the $20 million threshold awarded after March 16, 2011, as discussed below. We determined that the data for this period was sufficiently reliable to identify contracts that were subject to the 8(a) justification requirements and describe their characteristics.

To determine the extent to which agencies have implemented the new justification requirement, we identified and reviewed all 14 relevant contracts that were awarded between the FAR implementation date of March 16, 2011, and March 31, 2012. We took the following steps to identify these contracts:

- Most of the relevant contracts were identified using the Base and All Options data element in FPDS-NG. We initially identified 14 sole-source 8(a) contracts with values over $20 million. During reviews of the contract files, we determined that 3 of the 14 contracts identified in our FPDS-NG analysis did not meet criteria for the justification requirement and eliminated them from our review. One Army contract was eliminated because its reported value of $99 billion was erroneous, and its actual value was below $20 million. The Army has taken steps to correct this information. We found that another Army contract was not a new award, but rather an administrative action taken for accounting purposes; the underlying contract was awarded prior to implementation of the justification requirement. We also eliminated an Office of Personnel Management contract that was awarded competitively, despite being reported in FPDS-NG as 8(a) sole-source.

- To compensate for any errors in the Base and All Options data element, we also calculated cumulative obligations for all 8(a) sole-source contracts awarded during the same period. Based on this analysis, we identified one additional DOD contract, awarded by the Army. A review of the contract file confirmed that its value was over $20 million.

- Finally, in the course of our review, we identified two additional contracts through other means. One contract was identified by State Department officials when we inquired about 8(a) sole-source contracts over $20 million. The other, an Army contract, was identified through references to it in a related contract file.

Of the 14 contracts that we identified as meeting the criteria for the justification requirement, 8 were awarded by DOD and the rest by the General Services Administration and the Departments of the Interior, Justice, and State. We reviewed these contract files to determine if justification documents were present and assess whether the justifications complied with FAR requirements. We also reviewed other contract documents, including Small Business Administration (SBA) coordination records, acquisition plans, price negotiation memorandums, and award memorandums. We reviewed policy documents related to implementation of the justification requirement. We also interviewed contracting and policy officials at the relevant organizations regarding acquisition histories of the contracts and policies and practices related to the justification requirement.

In addition, we reviewed a contract awarded by DOD’s Washington Headquarters Service that was not subject to the justification requirement. It was identified for review because it had obligations of more than $20 million.
A review of the contract file revealed that the contract was valued below $20 million at the time of award, and thus it was not included among the 14 contracts discussed above.

The organizations with contracts in our review, including those reviewed for data reliability purposes, were as follows:

- Naval Surface Warfare Center, Dahlgren, Virginia
- Peterson Air Force Base, Colorado
- Redstone Arsenal Army Base, Alabama
- Joint Base Elmendorf-Richardson, Alaska
- Robins Air Force Base, Georgia
- Space and Naval Warfare Systems Command, Systems Center Pacific, San Diego, California
- U.S. Army Corps of Engineers Norfolk District
- U.S. Army Corps of Engineers Sacramento District
- U.S. Army Corps of Engineers Tulsa District
- U.S. Army Contracting Command, Natick, Massachusetts
- Washington Headquarters Service, Washington, D.C.
- General Services Administration, Federal Acquisition Service Region 8, Denver, Colorado
- Department of the Interior, Acquisition Services Directorate, Reston, Virginia
- Department of Justice, Drug Enforcement Administration, Arlington, Virginia
- Department of State, Office of Acquisition Management, Arlington, Virginia

Additionally, we interviewed SBA officials regarding their interpretation of the FAR rule implementing the 8(a) justification requirements and measures the agency has taken or plans to take to comply with this change. We also reviewed SBA 8(a) program regulations.

We conducted this performance audit from April 2012 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The appendix II table summarizes the number of contracts and reported value awarded by agency—including the Air Force, Army, Navy, and other DOD components—between October 28, 2009, the date of enactment of the National Defense Authorization Act for Fiscal Year 2010, and March 31, 2012, the date of the most current data available at the time of our review.

In addition to the person named above, Tatiana Winger, Assistant Director; Pamela Davidson; Danielle Green; Georgeann Higgins; Julia Kennon; Teague Lyons; Kenneth Patton; Dae Park; Jungjin Park; Sylvia Schatz; and Roxanna Sun made key contributions to this report.

SBA’s 8(a) program is the government’s primary means of developing small businesses owned by socially and economically disadvantaged individuals, including firms owned by Alaska Native Corporations and Indian tribes. The NDAA for Fiscal Year 2010, enacted on October 28, 2009, called for revisions to the FAR to provide for a written justification for sole-source 8(a) contracts over $20 million, where previously justifications were not required. GAO determined (1) the timeliness with which this new justification requirement was incorporated in the FAR; (2) the number of 8(a) sole-source contracts valued over $20 million that have been awarded since October 2009 and trends over time; and (3) the extent to which agencies have implemented this new justification requirement. GAO analyzed federal procurement data, reviewed the 14 contracts subject to the requirement across five federal agencies, and interviewed officials from OFPP, SBA, the Department of Defense, and other agencies.
The National Defense Authorization Act (NDAA) for Fiscal Year 2010 required that the Federal Acquisition Regulation (FAR) be amended within 180 days after enactment to require justifications for 8(a) sole-source contracts over $20 million. These justifications bring more attention to large 8(a) sole-source contracts. The FAR Council, which updates the FAR, missed this mandatory deadline by almost 325 days. During this delay, based on data in the Federal Procurement Data System-Next Generation (FPDS-NG), 42 sole-source 8(a) contracts with reported values over $20 million, totaling over $2.3 billion, were awarded without being subject to a justification. Office of Federal Procurement Policy (OFPP) representatives involved with the FAR Council’s implementation of this rule attributed the delay primarily to the time required to establish a process for consulting with Indian tribes and Alaska Native Corporations.

From October 28, 2009, through March 31, 2012, agencies reported awarding 72 sole-source 8(a) contracts over $20 million. GAO also analyzed trend information in FPDS-NG from fiscal year 2008 through fiscal year 2011 (the most current available information), which showed that the number and value of these contracts declined significantly in 2011. While GAO determined that FPDS-NG data was sufficiently reliable for the purposes of this review, GAO found errors, such as contracts with an implausible reported value of zero.

GAO found a slow start to implementation of the new justification requirement. Of the 14 sole-source 8(a) contracts awarded since the FAR was revised, only three included an 8(a) justification. The agencies awarding the remaining 11 contracts did not comply, either because contracting officials were not aware of the justification requirement or because they were confused about what the FAR required. For example, contracting officials were confused in one instance where another justification was already in place that covered multiple contracts. Further, the Small Business Administration (SBA) cannot accept a contract over $20 million for negotiation under the 8(a) program unless the procuring agency has completed a justification, but GAO found that SBA did not have a process in place to confirm the presence of a justification.

GAO recommends that OFPP issue guidance to clarify the circumstances in which an 8(a) justification is required. GAO also recommends that the General Services Administration—which operates FPDS-NG—implement controls in FPDS-NG to help ensure that contract values are accurately recorded, and that SBA take steps to ensure that its staff confirm the presence of justifications. OFPP and GSA generally agreed with the recommendations. SBA indicated it would take some actions but did not fully address the recommendations.
At the local level, VHA’s delivery system is organized into 18 VISNs, each responsible for overseeing VAMCs within a defined geographic area. VISN directors report to the Deputy Under Secretary for Health (USH) for Operations and Management, who oversees VHA’s field operations. The Deputy USH for Operations and Management also serves as the focal point between VHA’s central office and the VISNs and VAMCs.

Within VHA’s central office, policy management roles are divided between multiple offices. VHA’s central office is responsible for national policies developed by individual program offices—approximately 145 program offices as of May 2017. These program offices may have clinical or administrative functions and vary in the number of policies that they develop and manage. To help standardize national policy processes and reduce the burden on program office subject matter experts, VHA’s Office of Regulatory and Administrative Affairs (ORAA) manages the national policy development and review process. As of June 2017, ORAA had about four full-time-equivalent staff assigned to national policy management. These staff are primarily responsible for shepherding documents through the policy review process, providing policy-writing expertise, and working with relevant VHA subject matter experts within individual program offices to develop or update policies. Through its policy-development process, ORAA aims to reduce variability, simplify the process, and ensure any issues are identified and vetted prior to final approval. ORAA advises responsible program offices about their policies, but does not have the authority to require their compliance with policy-related tasks. ORAA also tracks and reports policy, procedural, and timeliness requirements, and is responsible for ongoing process improvement. It collaborates with the Office of Policy and Services on policy management activities and with the Office of Organizational Excellence on high-risk areas of concern. See figure 1 for VHA’s key leadership positions related to policy management.

VHA Directive 6330 governs the organization’s policy management; the June 2016 revisions established clearer definitions for national policy and guidance documents. It also updated VHA’s policy drafting and submission processes, as well as its requirements for policy issuance and recertification. Specifically, the revised directive defines national policy as a document that “establishes a definite course of action for VHA and assigns responsibilities for executing that course to identifiable individuals or groups.” The directive stipulates that two primary document types are to be used for national policy—directives and notices:

- Directives are to be used to establish national policy and contain certain types of information, such as the roles and responsibilities for each component of the organization.

- Notices are to be used to communicate information about a one-time event (e.g., rescinding a current national policy) or to establish interim policy until a directive can be developed.

The directive also states that a memo signed by the USH can be used to establish policy for VHA’s central office, but not for VISNs and VAMCs.
Additionally, VHA Directive 6330 states that guidance is not national policy and defines guidance as “recommendations that inform strong practices within the organization and are supported by evidence, legal requirements, national policy, or organizational priorities.” It states that guidance includes recommendations for implementing statutes, regulations, or national policy. Guidance documents include program office memos, standard operating procedures, and other such documents that are not signed by the USH. VHA Directive 6330 establishes a 5-year recertification date for directives, while notices have an automatic 1-year expiration period. VHA has not established recertification time frames for guidance documents.

We found that VHA is in the process of reviewing existing national policy documents to align with its new policy definitions as outlined in Directive 6330. It began reviewing documents in October 2015 in response to our high-risk concerns related to policy management, and this effort has evolved over time. Specifically, ORAA initiated a process of reviewing 788 documents previously issued as national policy—directives, handbooks, manuals, and information letters—the majority of which were outdated. However, existing guidance documents, such as program office memos, have not been included in this review because there is no central repository that would facilitate their identification, and the number of these documents is unknown. In addition, ORAA officials told us that they do not have enough staff to review these additional documents. (See figure 2.)

Through its review process, ORAA intends to streamline the number and types of policy documents used by the organization. (See table 1.) ORAA is eliminating handbooks, manuals, and information letters, although they will continue to function as national policy until rescinded. As part of this effort, ORAA plans to move any relevant content to other policy and guidance documents as appropriate. For example, ORAA is incorporating handbooks into their related directives. VHA noted that this consolidation should help reduce any redundancy and inconsistency when multiple documents articulate different aspects of a single policy. This will also help VHA ensure that when national policy is updated, the update will also include a review of relevant information currently found in other policy documents. Officials from many of the VISNs and VAMCs in our review agreed that a single document source for policy information would be helpful. ORAA officials noted that the definitions for what constitutes national policy and guidance documents are still evolving.

According to our review of ORAA information, almost 60 percent of the 788 policy documents identified for transition under the new definitions were outdated in October 2015. As part of its transition, ORAA is taking outdated documents and either rescinding them or recertifying the ones that are still relevant. VHA’s recertification process involves assessing whether a national policy document still serves a purpose and should be updated accordingly, or is no longer needed and should be rescinded or combined with another policy. Officials from most VISNs and VAMCs in our review told us that unless a policy has been rescinded, they continue to follow it, even if past its recertification date. This practice is consistent with requirements in the revised VHA Directive 6330 and a memo signed by the USH in June 2016.
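The time frames described above reduce to a simple rule. The sketch below encodes only what this report states (directives carry a 5-year recertification date, notices expire automatically after 1 year, guidance documents have no recertification time frame, and a past-due policy is still followed until rescinded); the function and its status labels are illustrative, not VHA's own terms.

```python
from datetime import date, timedelta

def document_status(doc_type, issue_date, today):
    """Apply the VHA Directive 6330 time frames as described in this report.

    Directives carry a 5-year recertification date; notices expire
    automatically after 1 year; guidance documents have no recertification
    time frame. The status labels here are illustrative assumptions.
    """
    if doc_type == "notice":
        return "expired" if today > issue_date + timedelta(days=365) else "active"
    if doc_type == "directive":
        recertification = issue_date.replace(year=issue_date.year + 5)
        # A directive past its recertification date is still followed until
        # rescinded, per the June 2016 USH memo, but is flagged for review.
        return "outdated" if today > recertification else "current"
    return "no recertification time frame"  # guidance documents

# A directive issued in June 2016 comes due for recertification in June 2021.
print(document_status("directive", date(2016, 6, 1), date(2017, 6, 30)))  # current
```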
ORAA officials expect that transitioning VHA’s existing policy documents will take about 5 years. As of June 2017, ORAA reduced the total number of documents identified for transition by 193 (from 788 to 595), and 43 percent of these remaining documents (256 of 595) are outdated. (See figure 3.) Much of the reduction has been driven by rescinding manuals and information letters. The number of directives and handbooks has not changed substantially; this is due, in part, to the continuation of policies reaching their recertification dates and the publication of new or changed policies. ORAA officials said that the limited progress is also due to resource constraints, including insufficient staffing and funding and inadequate information technology capability. Because ORAA does not own the policies, officials noted that they must rely on the responsible program offices to comply with policy-related tasks.

Contrary to the new national policy definitions in VHA Directive 6330, program offices continue to issue policy using memos—an issue we also noted in our high-risk update in 2017. Officials from every program office in our review told us that they have continued to use memos to issue policy quickly. ORAA officials stated this may be due to the lengthy national policy review process, which they said took an average of 317 days in fiscal year 2016. Program offices use memos for a variety of purposes, including clarifications or updates to issued directives, data collection requests, information about policies or procedures while a directive is under development, and the provision of training information. Memos signed by the Deputy USH for Operations and Management—referred to as “10N” memos—are the most common type of program office memo we identified in our review. Historically, VHA has primarily used 10N memos to communicate with VISNs and VAMCs because the Deputy USH for Operations and Management oversees local operations.

ORAA officials stated that program office memos were never intended to serve as national policy. Specifically, VHA Directive 6330 states that a notice should be used to establish interim policy until a directive can be developed. However, VHA has mostly used notices to issue rescissions of previous policy documents, and memos continue to be used to establish policy. For example, we identified a 10N memo that instructed VAMCs to immediately implement changes to ongoing professional practice evaluations and peer review requirements for VAMC chiefs of staff. In another instance, officials at one VAMC noted that, at the time of our visit, they had already received 32 changes to the Veterans Choice Program since 2014 through non-policy documents, including memos.

Using program office memos—instead of the appropriate policy vehicle—to issue policy is problematic in light of VHA’s new policy and guidance document definitions. Additionally, unlike national policy, program office memos are not internally vetted and are not subject to recertification, as described below.

Lack of internal vetting. Memos are not subject to a formal review process and can be issued quickly once signed. VHA noted in its high-risk action plan that 10N memos are the predominant source of guidance documents, and have been used to create policy without being vetted by other agency offices or VA’s labor management relations group.
Without such a vetting process, VHA leadership and other officials in the organization do not always have input on or even awareness of the potential impact of policy issued through these memos. Further, some of the VISNs and VAMCs in our review cited concerns about contradictory information among related program office memos.

Not subject to recertification. Unlike national policy documents, memos are not subject to recertification and are therefore typically not rescinded. Officials from some of the VISNs and VAMCs in our review described challenges when outdated memos are not rescinded, including questions about whether the memo should still be applied to local practices. For example, one VAMC wanted to use a certain non-VA care option for radiation oncology services, but a memo that was over a year old instructed local facilities to use a different non-VA care option. Since that memo had not been rescinded, VAMC officials said that they could not use their preferred non-VA care option to avoid delays in care.

ORAA is taking steps to address program offices’ use of memos to issue policy, but it is only focusing on 10N memos at this time. ORAA officials told us they have agreed with the Deputy USH for Operations and Management to have a 5-year recertification date for 10N memos, although this has yet to go into effect. They are also reviewing and assessing guidance documents submitted by program offices to see whether the content should be in a different document type, such as a directive. However, ORAA officials noted that they are not sure whether they have sufficient staff to sustain these reviews. Without further steps to clarify how and when program office memos should be used, the continued use of these memos by program offices may undermine VHA’s efforts to implement new policy and guidance definitions, as intended. Furthermore, VHA cannot ensure that VISNs and VAMCs have a clear understanding of which policies to follow.

As a part of its updated Directive 6330 on national policy management, VHA established a standard process to make national policy documents accessible to VISNs and VAMCs. Specifically, ORAA’s Publications Control Officer is responsible for ensuring national policy documents are disseminated to each level of the organization by maintaining the VHA publications website and distribution list, according to VHA Directive 6330. Once a national policy document is finalized, the Publications Control Officer posts it to VHA’s publications website. However, officials from two of the VISNs and most of the VAMCs in our review stated that the documents are difficult to search for on this website because it requires specific wording to locate them. As a result, some officials told us that they often search for national policy documents using other online search engines such as Google. ORAA officials told us that making the VHA publications website more user-friendly is dependent upon their ability to obtain the appropriate technical capability.

The Publications Control Officer is also responsible for distributing issued national policy no later than 2 business days after it is signed by the USH. To do so, this individual uses a national e-mail group—the VHA distribution list—as the standard mechanism to distribute the policy document to each component of the organization. The distribution list includes groups of staff from program offices and key VISN and VAMC staff. According to ORAA officials, staff can be added to or removed from the distribution list on an ad hoc basis.
Many VISN and VAMC officials in our review were satisfied with the use of the distribution list to disseminate national policy documents. VHA program offices may also provide copies of the documents to VISNs and VAMCs to inform them of forthcoming policy or policy changes. Officials from several VISNs and VAMCs told us that receiving a national policy document from various sources ensures that it is disseminated to the right people at the local level.

ORAA officials told us that they recently conducted a survey and learned that it is not always clear which VAMC staff position is responsible for policy implementation. For example, officials from one VAMC in our review were unsure who within their facility was receiving national policy documents from the distribution list. In the future, ORAA officials said that they plan to update the distribution list process and e-mail contacts to ensure that the appropriate VISN and VAMC staff members are receiving the information. ORAA officials also said they plan to continue exploring which staff positions are responsible for managing policy at the local level to determine if there are any gaps that need to be filled. ORAA officials said that their ability to identify and address these gaps is contingent on competing priorities and staffing.

Unlike with national policy documents, there is no standard process used to ensure guidance documents issued by various VHA program offices are consistently made accessible to VISNs and VAMCs. As a result, we found that guidance documents can be difficult to find, and there is no assurance that VISNs and VAMCs receive them and are all following the same guidance. Specifically, guidance documents are not part of a central repository, are not tracked, and are not consistently disseminated to VISNs and VAMCs.

Lack of a central repository. Guidance documents, such as program office memos, that do not go through the formal VHA review process are not posted on VHA’s publications website and are maintained in different ways by the program offices that develop them. For example, 3 of the 4 VHA program offices in our review told us they maintain memos on various internal websites, while the remaining program office does not maintain copies of its memos once they are sent to the local level. ORAA officials noted that while a central repository with all VHA guidance would be ideal, they do not have sufficient staff and resources to accomplish this. However, they would like to establish a location on the VHA intranet where ORAA could post future 10N memos. Officials said they do not have the capacity to identify and add previously issued 10N memos due to staff limitations.

Not systematically tracked. In general, guidance documents are not typically assigned tracking numbers and, as a result, are difficult to identify and quantify. For example, as previously mentioned, most VHA program offices in our review said that identifying and quantifying the total number of their memos would be difficult because they do not systematically track them. As a result, program offices do not know whether some of these documents are duplicative or whether they conflict with one another or with other policy documents. At the local level, officials from three VISNs and five VAMCs in our review noted difficulties with finding program office memos. Officials explained that they sometimes rely on staff’s institutional knowledge to find a specific memo, or they may contact the relevant program office.
ORAA officials told us they would like to work with the Deputy USH for Operations and Management to assign tracking numbers to 10N memos so that they can be referenced and searched. Inconsistent dissemination. VHA program offices may disseminate guidance to VISN staff for distribution to VAMCs or to both VISN and VAMC staff at the same time. Each program office in our review told us it maintains its own e-mail groups for communication with the local level. However, officials from one VAMC expressed concern that receiving guidance depends on staff being included in a specific program office's e-mail group. According to standards for internal control in the federal government, management should internally communicate the necessary quality information to achieve the entity's objectives. In doing so, management selects appropriate methods to communicate internally and considers how that information will be made readily available to its staff when needed. Without a standard process for consistently maintaining and disseminating guidance documents to VISNs and VAMCs, the agency lacks assurance that staff members receive and follow the same guidance, as intended. VHA has not consistently solicited input on national policies from VISNs and VAMCs, either prior to issuance or after implementation. Officials from the four VISNs and eight VAMCs in our review outlined a variety of challenges they face when implementing national policy, including insufficient or undefined time frames and conflicting policies on the same topic. Insufficient or undefined time frames. Officials from most of the VISNs and VAMCs in our review told us that it is difficult to implement policies with insufficient or undefined time frames. For example, officials from one VAMC told us that a national policy sometimes does not specify required implementation time frames, and as a result, the expectations for when VAMCs should complete implementation are not clear. VHA officials told us that there is no VHA-wide standard for specifying time frames for completing implementation of national policy. Resource constraints. Officials from most of the VISNs and VAMCs in our review identified resource constraints as an implementation challenge for certain policies, such as those with stringent staffing and building space requirements. For example, officials from one VISN told us that its facilities were required to have a certain type of surgeon available, who proved challenging for smaller, more rural VAMCs to recruit and retain. Additionally, officials from another VAMC said that one national policy required mental health patients to have access to a safe outdoor space, which would be difficult to implement without major construction and at least 5 years to plan. Officials said that to comply with this policy, they plan to have staff walk patients outside. However, this reduces the available staff on the mental health unit during that time. Because local situations may vary, VHA program office officials told us that it is difficult to specify resource needs in national policy that applies across all VISNs and VAMCs. Not specific to VAMC complexity level. Officials from most of the VISNs and VAMCs in our review noted that the lack of tailoring for a facility's complexity level makes national policy implementation difficult. As a result, officials stated that level 2 (medium complexity) and 3 (low complexity) VAMCs are often expected to adhere to the same policy requirements as level 1 (high complexity) VAMCs.
For example, officials noted that policies requiring 24-hour physician coverage for specialties such as emergency medicine, women's health, and suicide prevention are challenging for complexity level 2 and 3 VAMCs, which may not have sufficient patient volume or staffing resources. Officials from one program office explained that complexity level is not addressed in national policy, but may be addressed in a standard operating procedure or local policy. Officials from VHA's Office of Organizational Excellence told us that national policies are intended to be written broadly for VAMCs of all complexity levels. However, other VHA officials acknowledged that policies are written for facilities that fully operate a service or program and have the capability to implement all of the accompanying policy requirements—usually level 1 (high complexity) facilities. Conflicting policies on the same topic. A few VISNs and VAMCs in our review noted implementation challenges when more than one program office has responsibility for the same policy area, and they do not collaborate when issuing policies on the same topic. For example, officials from one VAMC told us that they were unsure what humidity levels they should follow for sterile processing services when the national policy from one program office stated that the humidity level must be at 60 percent, which contradicted a national policy from another program office stating that humidity levels must be at 55 percent. Officials from some of the VISNs and VAMCs in our review told us that obtaining input on national policy prior to issuance—particularly from those responsible for policy implementation—could help VHA to identify and mitigate many of the challenges that impede local policy implementation. For example, officials from three VISNs and two VAMCs told us that the terminology changes to VHA's updated scheduling policy issued in July 2016 caused confusion for staff. Additionally, officials from one VISN and one VAMC told us that terminology changes led to different interpretations and variation in implementation across VAMCs, which may have been mitigated through prior feedback discussions. In December 2016, ORAA instituted a new process to obtain comments on draft national policy that includes posting policy documents for a 2-week period on a SharePoint site. All VHA officials, including those in VISNs and VAMCs, have access to the site and are able to comment. In addition, ORAA has plans to develop a pre-policy form that would require program offices to provide information on the policy's purpose, whether it conflicts with other VHA policy, metrics to measure implementation, identification of any new resources needed, a cost analysis, and a communications plan for VISNs and VAMCs. VHA officials told us that the pre-policy form could be another mechanism to collect information on potential implementation challenges. However, VHA officials have yet to finalize it. Officials from several VISNs and VAMCs in our review said it also would be helpful for VHA to collect feedback from them after policy implementation to identify and address any unanticipated difficulties. Some program offices in our review already collect feedback on their own policies after implementation; however, this is not done systematically. According to standards for internal control in the federal government, management should internally communicate the necessary quality information to achieve the entity's operational objectives.
In doing so, management can obtain relevant information from reliable internal sources. Without a way to systematically obtain local feedback on national policies, VHA may lack the relevant information that would allow it to mitigate potential implementation challenges and resolve any unexpected problems to ensure policies are being implemented as intended. In certain cases, when VAMCs may be unable to comply with all or part of a national policy, program offices may approve policy exemption waivers on an informal and ad hoc basis. However, we found that VHA lacks information on these policy exemption waivers because it has not established a standard process for program offices to use for waiver submissions and approvals and does not centrally track those that have been granted. Furthermore, program offices are not required to reassess approved waivers to determine whether they are still warranted. No standard submission or approval process. VHA does not have an established policy exemption waiver process that would standardize how program offices manage the submission and approval of waivers. As a result, program offices manage waiver submission and approval on an ad hoc basis, although certain national policies may specify a process for how VAMCs should submit a waiver. If a process for submitting waivers is not specified for a policy, it is up to the VAMC to create and submit a waiver request suited to its facility's needs. For example, one VAMC had a waiver approved through e-mail and a conference call, and another VAMC had a waiver approved after a site visit. No central tracking. VHA does not centrally track approved policy exemption waivers, and as a result, it does not know how many local facilities are not implementing national policy as intended. Additionally, several program offices in our review did not know how many waivers their offices had approved. No reassessment requirement. There is no VHA requirement for program offices to reassess issued policy exemption waivers to determine whether they are still needed. Officials in some program offices told us that their waivers have an expiration date, and officials from another program office told us that time limits for waivers depend on the policy. Nevertheless, waivers are not routinely reassessed to determine whether they are still needed. In June 2017, VHA's Office of Organizational Excellence established a committee composed of subject matter experts and representatives from VHA, VISNs, and VAMCs to standardize the policy exemption waiver process. According to its charter, the committee will assess the challenges local facilities experience in complying with a national policy and develop a process that will be used to pursue a waiver. Under this process, officials explained that a VAMC would submit a proposal to its VISN, which would then submit it to the VHA waiver committee for approval. According to standards for internal control in the federal government, management should design control activities, such as procedures, to achieve objectives and respond to risks. In doing so, management designs appropriate procedures to help it fulfill responsibilities and address identified risks. Additionally, internal control standards state that management should establish and operate activities to monitor the internal control system and evaluate the results. In doing so, management considers using quality information to evaluate the agency's performance and make informed decisions.
Without processes in place to systematically approve, track, and reassess policy waivers, VHA does not know which facilities are not implementing certain policies, the reasons why they are unable to do so, and whether these reasons continue to be valid. Almost all of the VISNs and VAMCs in our review told us that they had developed their own local policies. Officials from all four VISNs in our review told us they generally try to limit the number of regional policies so as not to overburden their VAMCs. Their regional policies are usually focused on administrative issues (for example, staff telework and records management) and overarching areas of responsibility (for example, sterile processing of medical equipment services and utilization management). The number of policies these four VISNs developed ranged from none to 88. VISNs vary in how often they renew their regional policies. Officials from one VISN told us they renew their policies every 2 to 3 years, while officials from another VISN told us they do so every 5 years. Officials from the eight VAMCs in our review told us that they generally issue facility-wide local policies (for example, policy management and medical appointment scheduling) and service-line-specific standard operating procedures for frontline clinical care. These VAMCs generally develop local policies for different reasons, including when more specificity is needed for national policy implementation or to meet Joint Commission requirements. VAMCs might also create a policy for a local circumstance, such as transportation or building issues. The number of local policies for the eight VAMCs ranged from 151 to 561. Officials from all eight VAMCs told us they generally renew their local policies every 3 years due to Joint Commission requirements or as needed. Ad hoc updates to local policies may be due to newly issued national policy. The VISNs and VAMCs in our review maintain local policies on a variety of websites, such as on SharePoint or intranet sites. The majority of VISN and VAMC officials said that they used SharePoint sites as the primary or only place for maintaining local policies. VHA officials generally do not have access to local SharePoint sites unless access is specifically requested. As a result, VHA officials are not necessarily aware of the number or types of local policies. VHA has not established a process for systematically ensuring that local policies are aligned with national policies, which increases the risk of inconsistent policy implementation across VAMCs—one of the primary reasons that VA health care was placed on our high-risk list. In recent years, we and others have reported various instances of VAMCs' differences in implementing national policy, most notably with VHA's policy for scheduling medical appointments. More recently, in February 2017, we reported weaknesses in the way VAMCs were implementing their controlled substance inspection programs because local policies at most of the VAMCs in our review did not include all nine VHA program requirements as outlined in the national policy. Officials from each level of the organization told us about ad hoc efforts to assess local policies: Officials from each of the VAMCs in our review generally told us that they assess their local policies to ensure they are consistent with issued national policy. VAMC officials also told us that VISNs and national program offices assess whether specific local policies follow national policies during periodic site visits.
Officials from the VISNs in our review noted that their overall monitoring activities are primarily focused on evaluating local compliance with national policy and not on assessing local policy for alignment with national policy. None of the officials from the program offices in our review told us they have a standard process for assessing whether local and national policies are aligned. However, program offices may check the alignment of local policies on a case-by-case basis. For example, officials from a national program office told us about a recent assessment they conducted of local policies for a same-day access initiative to ensure certain national requirements were met. VHA has recently outlined plans for additional oversight in response to our high-risk report that include assessing whether local policies are aligned with national policies. According to the plan, VHA's Office of Integrity will conduct risk-based internal audits in which senior VHA leadership would set priorities for audit areas (e.g., suicide prevention), and staff would then review local policies in those areas. A VHA official in the Office of Integrity explained that both the national program offices and VISNs may have responsibility for ensuring alignment of local and national policies under VHA's plans, but there is currently no consensus for designating this responsibility. VHA also plans to include standards, such as internal controls, in every new or revised policy to allow officials to determine whether the policy is being appropriately implemented and meets objectives. However, VHA is still in the early stages of putting its plans in place. According to standards for internal control in the federal government, management should perform ongoing monitoring of its activities to help ensure its objectives are carried out as outlined in policy. In doing so, management can build continual monitoring into its internal control system through separate, periodic evaluations. Without a standard process to ensure local policy alignment with national policy, VHA may continue to experience inconsistent practices across its health care system. As one of the largest health care delivery systems in the nation, VHA must ensure that its facilities consistently implement national policies as intended so that the nation's veterans receive timely, high-quality care. VHA has taken a number of steps to improve its policy management; however, this is a substantial undertaking, and much work remains that will require a sustained focus to remedy a number of issues. In addition, appropriately allocating the necessary resources will be critical to VHA's ability to continue making improvements in this area as resource constraints continue to be an overarching impediment. A number of systemic problems have contributed to the inconsistent implementation of national policy at the local level. Most notably, despite its newly revised directive on policy management, VHA's program offices continue to issue policy through mechanisms such as memos that are not defined as policy vehicles. Policy issued in such a manner can also be contradictory or outdated because it is not subject to a formal review process or periodically recertified. Additionally, VISNs and VAMCs may not be receiving or following the same memos and other issued guidance because VHA lacks a standard dissemination process or central repository for these documents.
Furthermore, VHA does not have the ability to identify concerns associated with local implementation of national policies because it does not systematically collect information about challenges before or after implementation. It also has not established a standard process for issuing and managing policy exemption waivers that may be granted to VAMCs. Compounding these problems is the lack of a process, including designated oversight roles, to ensure that the many local policies established by VISNs and VAMCs are appropriately aligned with national policies. Collectively, if these issues persist, VHA will be unable to ensure that its policies are being consistently and effectively implemented as intended at the local level, potentially impacting veterans' access to timely, safe, and high-quality care. We are making the following six recommendations to VHA: The Under Secretary for Health should further clarify when and for what purposes each national policy and guidance document type should be used, including whether guidance documents, such as program office memos, should be vetted and recertified. (Recommendation 1) The Under Secretary for Health should develop standard processes for consistently maintaining and disseminating guidance documents to each level of the organization. (Recommendation 2) The Under Secretary for Health should systematically obtain information on potential implementation challenges from VISNs and VAMCs and take the appropriate actions to address challenges prior to policy issuance. (Recommendation 3) The Under Secretary for Health should establish a mechanism by which program offices systematically obtain feedback from VISNs and VAMCs on national policy after implementation and take the appropriate actions. (Recommendation 4) The Under Secretary for Health should establish a standard policy exemption waiver process and centrally track and monitor approved waivers. (Recommendation 5) The Under Secretary for Health should establish a standard process, including designated oversight roles, to periodically monitor that local policies established by VISNs and VAMCs align with national policies. (Recommendation 6) VHA provided written comments on a draft of this report. In its comments, reproduced in appendix I, VHA concurred with all of our recommendations except one, with which it concurred in principle, stating that the recommendation is no longer needed because VHA has already taken steps to address it. Specifically, VHA requested that we close our recommendation that the agency systematically obtain information on potential implementation challenges with national policy because ORAA has instituted new policy development processes that allow VHA employees and program offices to provide feedback on national policy prior to issuance or recertification. However, VHA added that its pre-policy form—which would require program offices to provide key information on draft national policies—will not be rolled out until January 2019 due to the need to ensure that sufficient systems are in place to obtain cost and performance data and to conduct an implementation analysis. The pre-policy form will serve as a mechanism to systematically collect information about national policies prior to issuance and will require program offices to provide information on a policy's purpose, whether it conflicts with other VHA policy, implementation metrics, resources needed, a cost analysis, and a communications plan for VISNs and VAMCs.
While VHA’s more recent efforts to obtain feedback on national policies are a step in the right direction, these efforts are not systematic because they rely on employee and program office participation. Consequently, we cannot close this recommendation until the pre-policy form has been implemented. VHA also provided specific information about implementing each of the remaining recommendations and stated that its target completion date for implementing these recommendations is the third quarter of fiscal year 2018. However, VHA noted challenges related to its ability to implement our recommendations regarding guidance documents as it has never attempted a systematic effort to align national guidance under a single process or gather these documents in a central location. In addition, VHA stated that adequate staffing continues to be an obstacle and that information technology needs must be met to ensure the proper dissemination and maintenance of these documents. As we have noted in our high-risk work, capacity and resource allocation challenges continue to impede VHA’s ability to address our concerns, and will continue to act as barriers until they are adequately addressed. VHA also provided a technical comment, which we incorporated. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Health, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix II. Debra A. Draper, (202) 512-7114 or [email protected]. In addition to the contact named above, Bonnie Anderson (Assistant Director), E. Jane Whipple (Analyst-in-Charge), and Ashley Dixon made key contributions to this report. Also contributing were Jennie F. Apter, Jacquelyn Hamilton, Vikki Porter, and Brian Schmidt. | GAO was asked to conduct a management review of VHA; this is the sixth report in the series. In this review of VHA's policy management, GAO examines the extent to which (1) VHA has implemented its new definitions for national policy and guidance documents; (2) VHA ensures that national policy and guidance documents are accessible to VISNs and VAMCs; (3) VHA collects information on local challenges with implementing national policy, including the exemptions granted when policy requirements cannot be met; and (4) local policies are developed and maintained by VISNs and VAMCs, and whether they are aligned with national policies. GAO reviewed agency documentation, including VHA's revised directive on policy management. GAO also interviewed VHA officials involved with policy improvement efforts, as well as officials from a nongeneralizable sample of four national program offices, four VISNs, and eight VAMCs selected to provide geographic variation, among other factors. The Veterans Health Administration (VHA)—within the Department of Veterans Affairs (VA)—is taking steps to align existing national policy documents with newly revised definitions that streamline and clarify document use. 
According to the new definitions in its June 2016 directive on policy management, directives and notices are now the sole documents for establishing national policy; other types of documents, such as program office memos, are considered guidance. VHA is reviewing about 800 existing national policy documents to eliminate those that no longer meet its new definitions, and to rescind or recertify those that are outdated. At this time, VHA is not planning to review guidance documents, such as program office memos and standard operating procedures, to assess whether they align with its updated directive, because there is no central repository for these documents and it would be too resource intensive to locate all of them. Further, GAO's review found—contrary to VHA's updated directive—that program offices are continuing to use memos to issue policy. The continued use of program office memos to establish national policy undermines VHA's efforts to improve its policy management. VHA has a standard process for making national policy documents accessible to VA medical centers (VAMC) and the Veterans Integrated Service Networks (VISN) to which the medical centers report, but lacks a process for making guidance documents accessible. VHA makes national policy documents accessible to all organizational levels through a publications website and e-mail distribution list as outlined in its June 2016 directive. However, GAO found that VHA has not established a similar process for program offices to make guidance documents accessible at the local level. Specifically, there is no central repository, such as a publications website, for guidance documents, and the program offices do not track or consistently disseminate the guidance documents they issue. Without a standard process for consistently maintaining and disseminating guidance, VHA lacks assurance that staff receive and follow the same guidance, as intended. VHA does not routinely collect information on local challenges with national policy implementation or on exemption waivers. The four VISNs and eight VAMCs in GAO's review reported various challenges they face when implementing national policy, such as resource constraints and undefined time frames. In instances where VAMCs cannot meet policy requirements, program offices may approve policy exemption waivers on an ad hoc basis. However, GAO found that VHA lacks complete information on approved policy exemption waivers because it does not have a standard process for approving, tracking, and reassessing them. In recognition of this issue, VHA established a committee to develop a waiver process in June 2017. VISNs and VAMCs in GAO's review develop and maintain various local policies, but VHA does not ensure that they align with national policies. Specifically, GAO found that VHA does not have a process for program offices to systematically ensure that local policies align with national policies. Without such a process, VHA may continue to experience inconsistent policy implementation across its health care system. GAO is making six recommendations to VHA, which include clarifying national policy and guidance documents, ensuring access to guidance documents, incorporating local feedback into national policy, establishing a process to approve and track policy exemption waivers, and ensuring alignment of local and national policy. VHA generally concurred with GAO's recommendations. |
Between fiscal years 2003 and 2007, the unified budget deficit declined. Certainly declining deficits are better than rising deficits. But this decline in the unified deficit is not an indicator that our challenge has eased. First, even this short-term deficit is understated: It masks the fact that the federal government has been using the Social Security surplus to offset spending in the rest of government for many years. If we exclude that Social Security surplus, the on-budget deficit—what I call the operating deficit—in fiscal year 2007 was more than double the size of the unified deficit. For example, the Department of the Treasury (Treasury) reported a unified deficit of $163 billion and an on-budget deficit of $344 billion in fiscal year 2007. The accrual-based net operating deficit reported in the Financial Report of the United States Government was also significantly higher than the unified deficit—$276 billion for fiscal year 2007. This measure provides more information on the longer-term implications of today's policy decisions and operations than does either cash-based figure, but it too offers an incomplete picture of the long-term fiscal challenge. As we recently reported, several countries have begun preparing fiscal sustainability reports to help assess the implications of their public pension and health care programs and other challenges in the context of overall sustainability of government finances. European Union members also annually report on longer-term fiscal sustainability. The goal of these reports is to increase public awareness and understanding of the long-term fiscal outlook in light of escalating health care cost growth and population aging, to stimulate public and policy debates, and to help policymakers make more informed decisions. These countries used a variety of measures, including projections of future revenue and spending and summary measures of fiscal imbalance and fiscal gaps, to assess fiscal sustainability. Last year, we recommended that the United States prepare and publish a long-range fiscal sustainability report every 2 to 4 years. Second, despite these improvements in short-term deficits, the long-term outlook has continued to move in the wrong direction. Even in 2001—in a time of annual surpluses—GAO's long-term simulations showed a long-term challenge, but at that time it was more than 40 years out. Although an economic slowdown, decisions driven by the attacks of 9/11, and the need to respond to natural disasters have contributed to the change in outlook, they do not account for the dramatic worsening in the long-term outlook since 2001. Subsequent tax cuts and the passage of the Medicare prescription drug benefit in 2003 were also major factors, but they are not the only actions that challenge fiscal discipline. For example, one might also question the current farm bill in the face of reported record farm income. As the Committee knows, the Congressional Budget Office's (CBO) latest projections show the deficit rising in response to a weakening economy. Neither this increase nor the recent declines tell us much about our long-term path. Rather, our long-term path must inform how we deal with the near-term weakness. Our real challenge then is not this year's deficit or even next year's; it is how to change our current path so that growing deficits and debt levels do not swamp our ship of state. Health care costs are still growing much faster than the economy, and our population is still aging.
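The relationship among the fiscal year 2007 deficit measures cited above can be checked with simple arithmetic. The following minimal sketch uses only the figures reported in this statement; the off-budget surplus it derives is an inference from those figures, not a separately reported number.

```python
# FY2007 deficit figures as reported in this statement (billions of dollars)
unified_deficit = 163    # unified deficit reported by Treasury
on_budget_deficit = 344  # on-budget ("operating") deficit, which excludes Social Security

# The gap between the two measures is the off-budget surplus (mostly
# Social Security) that the unified figure nets against other spending.
implied_off_budget_surplus = on_budget_deficit - unified_deficit
print(f"Implied off-budget surplus: ${implied_off_budget_surplus} billion")   # $181 billion
print(f"On-budget/unified ratio: {on_budget_deficit / unified_deficit:.2f}")  # about 2.11
```

The ratio of roughly 2.1 is what the statement means by "more than double": the surplus borrowed from the trust funds masks over half of the government's operating shortfall.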
The retirement of the baby boom generation and rising health care costs will soon place unprecedented and long-lasting stress on the federal budget, raising debt held by the public to unsustainable levels. Figure 1 shows GAO's simulation of the deficit path based on recent trends and policy preferences. In this simulation, we assume that the expiring tax cuts are extended through 2017 and that revenues are then brought to their historical level as a share of gross domestic product (GDP); that discretionary spending grows with the economy; and that no structural changes are made to Social Security, Medicare, or Medicaid. Rapidly rising health care costs are not simply a federal budget problem; they are our nation's number one fiscal challenge. As shown in figure 2, GAO's fiscal model demonstrates that state and local governments—absent policy changes—will also face large and growing fiscal challenges beginning within the next few years. As is true for the federal budget, growth in health-related spending—Medicaid and health insurance for state and local employees and retirees—is the primary driver of the fiscal challenges facing the state and local governments. For the federal government, increased spending and rising deficits will drive a rising debt burden. At the end of fiscal year 2007, debt held by the public exceeded $5.0 trillion. Figure 3 shows that this growth in our debt cannot continue unabated without causing serious harm to our economy. But this is only part of the story. The federal government has been spending the surpluses in the Social Security and other trust funds for years; if we include debt held by those funds, our total debt is much higher—$9.0 trillion. On September 29, 2007, the statutory debt limit had to be raised for the third time in 4 years; between the end of fiscal year 2003 and the end of fiscal year 2007 the debt limit had to be increased by one-third. Although borrowing by one part of the federal government from another may not have the same economic and financial implications as borrowing from the public, it represents a claim on future resources and hence a burden on future taxpayers and the future economy. As alarming as the size of our current debt is, it excludes many items, including the gap between future promised and funded Social Security and Medicare benefits, veterans' health care, and a range of other commitments and contingencies that the federal government has pledged to support. If these items are factored in, the total burden in present value dollars is estimated to be about $53 trillion. I know it is hard to make sense of what "trillions" means. One way to think about it is this: Imagine we decided to put aside and invest today enough to cover these promises tomorrow. It would take approximately $455,000 per American household—or $175,000 for every man, woman, and child in the United States. Clearly, despite some progress in addressing our short-term deficits, we have not made progress on our long-term fiscal challenge. In fact, we have lost and continue to lose ground absent meaningful action (see fig. 4). Although Social Security is a major part of the fiscal challenge, it is far from our biggest challenge. Spending on Medicare and Medicaid represents a much larger, faster-growing, and more immediate problem. In fact, the federal government's obligations for Medicare Part D alone exceed the unfunded obligations for Social Security.
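The per-household and per-person figures cited above follow from straightforward division. In the illustrative sketch below, the $53 trillion total comes from this statement, while the household and population counts are assumptions (rough circa-2007 figures), not numbers drawn from the statement.

```python
# Dividing the estimated total fiscal exposure across the population
total_burden = 53e12   # total burden in present value dollars, per the statement
households = 116e6     # assumed number of U.S. households, circa 2007
population = 301e6     # assumed U.S. resident population, circa 2007

print(f"Per household: ${total_burden / households:,.0f}")  # roughly $455,000
print(f"Per person:    ${total_burden / population:,.0f}")  # roughly $175,000
```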
Health care spending systemwide continues to grow at an unsustainable pace, eroding the ability of employers to provide coverage to their workers and undercutting their ability to compete internationally. Finally, despite spending far more of our economy on health care than other nations, the United States has above-average infant mortality, below-average life expectancy, and the largest percentage of uninsured individuals. In short, our health care system is badly broken. Medicare and Medicaid spending threaten to consume an untenable share of the budget and economy in the coming decades. The federal government has essentially written a "blank check" for these programs. In contrast, other industrialized nations have put their health care programs on a budget, even ones with national health care plans. We should consider imposing limits on federal spending for health care sooner rather than later. Figure 5 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Although Social Security in its current form will grow from 4.2 percent of GDP today to 6.3 percent in 2080, Medicare and Medicaid's burden on the economy will almost quadruple—from 4.7 percent to 17.7 percent of the economy. Unlike Social Security, which grows larger as a share of the economy and then levels off, Medicare and Medicaid continue to grow during this projection period. Furthermore, these projections assume that Medicare and Medicaid spending grows, on average, about 1 percentage point faster than GDP per capita—a rate significantly below the recent historical experience of about 2.5 percentage points above GDP per capita growth. But even with this "optimistic" assumption, the outlook is daunting. It is clear that health care is the main driver of our long-term challenge. In fact, if there is one thing that could bankrupt America, it's runaway health care costs. We must not allow that to happen. Changing the path of health care spending is much more complicated than dealing with Social Security. Unlike Social Security, Medicare spending growth rates reflect not only a burgeoning beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. The growth of medical technology has contributed to increases in the volume and complexity of health care services, and information on the cost and quality of health care is not readily available. Public and private health care spending continues to rise because of increased medical prices and increased utilization due to growth in the number, or volume, of services per capita, and use of more intense, or complex, services. Moreover, the actual costs of health care consumption are not transparent. Consumers are largely insulated by third-party payers from the cost of health care decisions. As shown in figure 6, total health care spending is absorbing an increasing share of our nation's GDP. From 1976 through 2006, total public and private spending on health care grew from about 8 percent to 16 percent of GDP. Total health care spending is projected to grow to about 20 percent of GDP by 2016. Addressing the unsustainability of health care costs is a major competitiveness and societal challenge that calls for us as a nation to fundamentally rethink how we define, deliver, and finance health care in both the public and the private sectors. A major difficulty is that our current system does little to encourage informed discussions and decisions about the costs and value of various health care services.
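The gap between the assumed and historical excess cost growth rates noted above may look small, but it compounds dramatically. The following illustrative calculation shows the effect; the 75-year horizon is an assumption chosen only for illustration, and the resulting multipliers are rough arithmetic, not output from GAO's fiscal model.

```python
# Compounding excess health cost growth over a long horizon
years = 75                   # assumed horizon, for illustration only
modeled = 1.010 ** years     # projections' assumption: ~1 point above GDP per capita
historical = 1.025 ** years  # recent historical experience: ~2.5 points above

print(f"At +1.0 points:  {modeled:.1f}x a GDP-per-capita benchmark")  # about 2.1x
print(f"At +2.5 points:  {historical:.1f}x the same benchmark")       # about 6.4x
print(f"Historical rates compound to {historical / modeled:.1f}x the modeled level")  # about 3.0x
```

This is why even the "optimistic" assumption yields a daunting outlook: if historical growth rates persisted, the burden would compound to roughly three times the already-alarming projected level.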
These decisions are very important when it comes to cutting-edge drugs and medical technologies, which can be very expensive but may offer little or no advantage over their alternatives. Medical technology is a major contributor to growth in health care spending. For example, one study found that the average amount spent per heart attack case increased nearly $10,000 after controlling for inflation, or 4.2 percent real growth per year, between 1984 and 1998. Nearly half of the cost increases resulted from people getting more intensive technologies—such as cardiac catheterization—over time. In some cases, new technology can lead to overdiagnosis and the excessive use of resources. One study cites the use of spinal magnetic resonance imaging (MRI) as an example. The researchers found that diagnostic spinal MRI sometimes reveals abnormalities having no clinical relevance. According to the study, some physicians act on this information and perform unnecessary surgery that can lead to complications. Obesity, smoking, and other population risk factors can lead to expensive chronic conditions; the increased prevalence of such conditions—for example, diabetes and heart disease—drives growth in the utilization of health care resources and therefore in spending. Obesity has been the subject of several recent studies focusing on associated health care cost increases. For example, one study attributes 27 percent of the growth in inflation-adjusted per capita spending between 1987 and 2001 to the rising prevalence of obesity and higher relative per capita spending among obese individuals. Both public and private payers face fundamental challenges in the struggle to contain health care spending growth. One of the challenges involves the unbridled use of technology and society's unmanaged expectations. Experts note that the nation's general tendency is to treat patients with available technology when there is the slightest chance of benefit to the patient, even though the costs may far outweigh the benefit to society as a whole. They note that the discipline of technology assessment has not kept pace with technology advancements. Today's employers, which finance a substantial share of the health care of the privately insured population, are seeking more information on health care technology costs and benefits. Although the Food and Drug Administration (FDA), for example, evaluates new medical products based on safety and efficacy data submitted by manufacturers, it does not evaluate whether the new products are cost-effective compared with existing products used for the same treatment indications. In turn, Medicare, which generally relies on FDA approval decisions, does not evaluate whether new technologies are superior, either clinically or economically, compared with technologies already covered and paid for by the program. Further exacerbating the situation, consumers, spurred by advertising and the Internet, demand access to new medical technology without knowledge of its value, safety, or efficacy. Another cost containment challenge for all payers relates to the market dynamics of health care compared with other economic sectors. In an ideal market, informed consumers prod competitors to offer the best value. However, without reliable comparative information on medical outcomes, quality of care, and cost, consumers are less able to determine the best value. Insurance masks the actual costs of goods and services, providing little incentive for consumers to be cost-conscious.
Similarly, clinicians must often make decisions in the absence of universal medical standards of practice. Under these circumstances, medical practices vary across the nation, as evidenced by wide geographic variation in per capita spending and outcomes, even after controlling for patient differences in health status. In recent years, policy analysts have discussed a number of incremental reforms aimed at moderating health care spending, in part by unmasking health care's true costs. Some call for devising new insurance strategies to make health care costs more transparent to patients. Currently, many insured individuals pay relatively little out of pocket for care at the point of delivery because of comprehensive health care coverage—precluding the opportunity to sensitize these patients to the cost of their care. Other steps include reforming the policies that give tax preferences to insured individuals and their employers. These policies permit the value of employees' health insurance premiums to be excluded from the calculation of their taxable earnings and exclude the value of the premium from the employers' calculation of payroll taxes for both themselves and employees. Tax preferences also exist for health savings accounts and other consumer-directed plans. These tax exclusions represent a significant source of forgone federal revenue and work at cross-purposes to the goal of moderating health care spending. Proposals have been made to better target tax preferences to low-income individuals and to change the tax treatment to allow consumers the same tax advantages whether they receive their health insurance through their employers or purchase it on their own. As figure 7 shows, in 2006 the tax expenditure responsible for the greatest revenue loss was that for the exclusion of employer contributions for employees' insurance premiums and medical care. Another area conducive to incremental change involves provider payment reforms. These reforms are intended to induce physicians, hospitals, and other health care providers to improve quality and efficiency. For example, studies of Medicare patients in different geographic areas have found that despite receiving a greater volume of care, patients in higher-use areas did not have better health outcomes or experience greater satisfaction with care than those living in lower-use areas. Public and private payers are experimenting with payment reforms designed to foster the delivery of care that is proven to be both better clinically and more cost-effective. Ideally, identifying and rewarding efficient providers and encouraging inefficient providers to emulate best practices will result in better value for the dollars spent on care. The development of uniform standards of practice could lead to more cost-effective treatments designed to achieve the same outcomes. The problem of escalating health care costs is complex because addressing federal programs such as Medicare and the federal-state Medicaid program will need to involve change in the health care system of which they are a part—not just within federal programs. This will be a major societal challenge that will affect all age groups. Because our health care system is complex, with multiple interrelated pieces, solutions to health care cost growth are likely to be incremental and require a number of extensive efforts over many years.
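As a rough, hypothetical illustration of how the tax exclusion described above forgoes revenue, the sketch below computes the tax avoided on a single employer-paid premium. The premium amount and marginal income tax rate are invented for illustration; only the 7.65 percent employee payroll tax share reflects the actual combined Social Security and Medicare rate of the period.

```python
# Hypothetical illustration of the employer-sponsored insurance tax exclusion
premium = 12_000          # assumed annual employer-paid health premium
income_tax_rate = 0.25    # assumed marginal federal income tax rate
payroll_tax_rate = 0.0765 # employee share of Social Security and Medicare taxes

# The premium is excluded from taxable earnings, so the employee avoids
# income and payroll tax on its value; the employer also avoids its
# matching payroll tax on that amount.
employee_tax_avoided = premium * (income_tax_rate + payroll_tax_rate)
employer_tax_avoided = premium * payroll_tax_rate
total_forgone = employee_tax_avoided + employer_tax_avoided
print(f"Forgone federal revenue for this worker: ${total_forgone:,.0f} per year")  # $4,836
```

Multiplied across tens of millions of covered workers, an exclusion of this size explains why this tax expenditure produces the greatest revenue loss shown in figure 7.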
In my view, taking steps to address the health care cost dilemma systemwide puts us on the right path for correcting the long-term fiscal problems posed by the nation's health care entitlements. I have suggested in the past that we consider four elements as pillars of any major health care reform effort: (1) provide universal access to basic and essential health care; (2) impose limits on federal spending for health care; (3) implement national, evidence-based medical practice standards to improve quality, control costs, and reduce litigation risks; and (4) take steps to ensure that all Americans assume more personal responsibility and accountability for their own health and wellness. As a nation, we need to weigh unlimited individual wants against broader societal needs and decide how responsibility for financing health care should be divided among employers, individuals, and government in an affordable and sustainable manner. Ultimately, we may need to define a set of basic and essential health care services to which every American is ensured access. Individuals wanting additional services, and insurance coverage to pay for them, would have that choice but would be required to allocate their own resources. Clearly, such a dramatic change would require a long transition period—all the more reason to act sooner rather than later. As we enter 2008, what we call the long-term fiscal challenge is not in the distant future. In fact, the first baby boomers already have filed for early retirement benefits and will be eligible for Medicare benefits in less than 3 years. The budget and economic implications of the baby boom generation's retirement have already become a factor in CBO's 10-year baseline projections, and that impact will only intensify as the baby boomers age. As the share of the population over 65 climbs, demographics will interact with rising health care costs. The longer we wait, the more painful and difficult the choices will become. Simply put, our nation is on an imprudent and unsustainable long-term fiscal path that is getting worse with the passage of time. The financial markets are noticing. Approximately 3 years ago, Standard and Poor's issued a publication stating that absent policy changes, the U.S. government's debt-to-GDP ratio was on track to mirror ratios associated with speculative-grade sovereigns. Within the last month, Moody's Investors Service issued its annual report on the United States. In that report, it noted its concern that absent Medicare and Social Security reforms, the long-term fiscal health of the United States and our current Aaa bond rating were at risk. These not-too-veiled comments underscore the significant longer-term interest rate risk that we face absent meaningful action to address our longer-range challenge as well. Higher longer-term interest costs would only serve to complicate our fiscal, economic, and other challenges in future years. As you are aware, during the past 3 years, I have traveled to 25 states as part of the Fiscal Wake-Up Tour. During the tour, it has become clear that the American people are starved for two things from their elected officials—truth and leadership. Last fall, I was pleased to join you when you announced your proposal to create a Bipartisan Task Force for Responsible Fiscal Action.
As I said at the time, I believe it offers one potential means to achieve an objective we all should share: taking steps to make the tough choices necessary to keep America great and to help make sure that the future of our country, our children, and our grandchildren is better than our past. By introducing your proposal to create a Bipartisan Task Force for Responsible Fiscal Action, you have shown the kind of leadership that is essential for us to successfully address the long-term fiscal challenge that lies before us. And I want to note that you are not alone. Several other members on both sides of the political aisle and on both sides of Capitol Hill have also introduced legislation seeking to accomplish similar objectives. But we do need to act. The passage of time is shrinking the window for action. Albert Einstein said the most powerful force in the universe is compound interest, and today the miracle of compounding is working against us. After 2009, the Social Security cash surplus—which has cushioned and masked the impact of our imprudent fiscal policy—will begin to shrink, putting pressure on the rest of the budget. The Medicare Hospital Insurance trust fund is already in a negative cash flow situation. I hope we do not wait to act until the Social Security trust fund's cash flow turns negative in 2017. Demographics narrow the window for other reasons as well. People need time to prepare for and adjust to changes in benefits. There has been general agreement that there should be no change in Social Security benefits for those currently in or near retirement. If we wait until the baby boom generation has retired, that becomes much harder and much more expensive. Mr. Chairman, Senator Gregg, Members of the Committee, meeting this long-term fiscal challenge overarches everything. It is our nation's largest sustainability challenge, but it is not our only one. If we want to position the United States to meet the challenges of this century both abroad and at home, we must also tackle other challenges, including reexamining what government does and how it does business. Last month, we published a new report that lays out a possible path for change. The report is entitled A Call for Stewardship: Enhancing the Federal Government's Ability to Address Key Fiscal and Other 21st Century Challenges. It provides 13 potential tools that Congress and the administration can use to begin confronting our long-term fiscal and other challenges. I hope you find this report useful in facilitating discussions and decisions about various challenges facing our great nation in the 21st century. Today it is understandable that many Americans and their elected representatives are concerned about recent market declines and a slowing economy. We have an obligation, however, to look at both the short term and the long term. Whatever Congress and the President decide to do in response to our current economic weakness, it is important to be mindful of the danger posed by our long-term fiscal path. This long-term challenge increases the importance of careful design of any stimulus package—it should be timely, targeted, and temporary. Budgets, deficits, and long-term fiscal and economic outlooks are not just about numbers; they are also about values. It is time for all Americans, especially baby boomers, to recognize our collective stewardship obligation for the future. In doing so, we need to act soon because time is working against us.
We must make choices that may be difficult and unpleasant today to avoid passing an even greater burden on to future generations. Let us not be the generation that sent the bill for its conspicuous consumption to its children and grandchildren. Thank you, Mr. Chairman, Mr. Gregg, and Members of the Committee for having me today. We at GAO, of course, stand ready to assist you and your colleagues as you tackle these important challenges. For further information on this testimony, please contact Susan J. Irving at (202) 512-9142 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Jay McTigue, Assistant Director, and Melissa Wolf. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO has for many years warned that our nation is on an imprudent and unsustainable fiscal path. During the past 3 years, the Comptroller General has traveled to 25 states as part of the Fiscal Wake-Up Tour. Members of this diverse group of policy experts agree that finding solutions to the nation's long-term fiscal challenge will require bipartisan cooperation, a willingness to discuss all options, and the courage to make tough choices. At the request of Chairman Conrad and Senator Gregg, the Comptroller General discussed the long-term fiscal outlook, our nation's huge health care challenge, and the shrinking window of opportunity for action. As we enter 2008, what we call the long-term fiscal challenge is not in the distant future. Already the first members of the baby boom generation have filed for early Social Security retirement benefits and will be eligible for Medicare in only 3 years. Simulations by GAO, the Congressional Budget Office (CBO), and others all show that despite a 3-year decline in the budget deficit, we still face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. Under any plausible scenario, the federal budget is on an imprudent and unsustainable path. Rapidly rising health care costs are not simply a federal budget problem; they are our nation's number one fiscal challenge. Growth in health-related spending is the primary driver of the fiscal challenges facing the state and local governments. Unsustainable growth in health care spending is a systemwide challenge that also threatens to erode the ability of employers to provide coverage to their workers and undercut our ability to compete in a global marketplace. Addressing the unsustainability of health care costs is a societal challenge that calls for us as a nation to fundamentally rethink how we define, deliver, and finance health care in both the public and the private sectors. The passage of time has only worsened the situation: the size of the challenge has grown and the time to address it has shrunk. The longer we wait, the more painful and difficult the choices will become, and the greater the risk of a very serious economic disruption. It is understandable that the Congress and the administration are focused on the need for a short-term fiscal stimulus.
However, our long-term challenge increases the importance of careful design of any stimulus package: it should be timely, targeted, and temporary. At the same time, a capable and credible commission is needed to make recommendations to the next Congress and the next president for action on our looming longer-range fiscal imbalance.
The protection of the nation’s critical infrastructure against natural and man-made catastrophic events has been a concern of the federal government for over a decade. For example, in May 1998, Presidential Decision Directive 63 (PDD-63) established critical infrastructure protection as a national goal and presented a strategy for cooperative efforts by the government and the private sector to protect it. In December 2003, HSPD-7 was issued, defining responsibilities for DHS and federal agencies responsible for addressing specific critical infrastructure sectors. These agencies are to identify, prioritize, and coordinate the protection of critical infrastructure to prevent, deter, and mitigate the effects of attacks. DHS is to, among other things, coordinate national critical infrastructure protection efforts, establish uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across sectors; and provide for the sharing of information essential to critical infrastructure protection. According to the NIPP, DHS is also to develop and implement comprehensive risk management programs and methodologies, develop cross-sector and cross-jurisdictional protection guidance, recommend risk management and performance criteria and metrics within and across sectors, and establish structures to enhance the close cooperation between the private sector and government at all levels. In addition, DHS is the focal point for the security of cyberspace— including analysis, warning, information sharing, vulnerability reduction, mitigation and recovery efforts for public and private critical infrastructure information systems. To accomplish this mission, DHS is to work with other federal agencies, state and local governments, and the private sector. Federal policy further recognizes the need to prepare for debilitating Internet disruptions and—because the vast majority of the Internet infrastructure is owned and operated by the private sector—tasks DHS with developing an integrated public/private plan for Internet recovery. HSPD-7 designated sector-specific agencies for each of the critical infrastructure sectors, responsible for coordinating and collaborating with relevant federal agencies, state and local governments, and the private sector, and facilitating the sharing of information about threats, vulnerabilities, incidents, potential protective measures, and best practices. Agencies must submit an annual report to DHS on their efforts. DHS serves as the sector-specific agency for 10 of the sectors: information technology; telecommunications; transportation systems; chemical; emergency services; commercial nuclear reactors, material, and waste; postal and shipping; dams; government facilities; and commercial facilities. (See table 1 for a list of sector-specific agencies and a brief description of each sector). Under the NIPP, the sector-specific agencies, in coordination with their respective government and private sector councils, are responsible for developing individual protection plans for their sectors that, among other things, (1) define the security roles and responsibilities of members of the sector, (2) establish the methods that members will use to interact and share information related to protection of critical infrastructure, (3) describe how the sector will identify its critical assets, and (4) identify the approaches the sector will take to assess risks and develop programs to protect these assets. 
DHS is to use these individual plans to evaluate whether any gaps exist in the protection of critical infrastructures on a national level and, if so, to work with the sectors to address these gaps. All of the sectors have established government councils, and voluntary private sector councils under the NIPP model have been formed for all sectors except transportation systems. The nature of the 17 sectors varies and council membership reflects this diversity, but the councils are generally composed of representatives from the various federal agencies with regulatory or other interests in the sector, some state and local officials with purview over the sectors, and asset owners and operators. Because some of the councils are newer than others, council activities vary based on the council’s maturity and other characteristics, with some younger councils focusing on establishing council charters while more mature councils focus on developing protection strategies. Seven sectors had not formed either a government council or sector council until after publication of an Interim NIPP in February 2005, while 10 of the sectors had done so. These 10 sectors said they recognized the need to collaborate to address risks and vulnerabilities that could result in economic consequences for their sectors. For example, prior to the development of the NIPP, DHS and the Department of Agriculture had (1) established a government coordinating council for the agriculture and food sector to coordinate efforts to protect against agroterrorism and (2) helped the agriculture and food sector establish a private sector council to facilitate the flow of alerts, plans, and other information. As of March 2007, the transportation systems sector had yet to form a sector council, but a DHS Infrastructure Protection official said each transportation mode—such as rail, aviation, and maritime—had established its own council. According to DHS officials, once the modes are organized, the transportation systems council will be formed. Transportation Security Administration (TSA) officials attributed the delay to the heterogeneous nature of the transportation sector—ranging from aviation to shipping to trucking. The composition, scope, and nature of the 17 sectors themselves vary significantly, and the memberships of their government and sector councils reflect this diversity. The size and complexity of the nation’s critical infrastructure require council membership to be as representative as possible of their respective sectors. As such, council leaders—government sector representatives and private council chairs—believe that their membership is generally representative of their sectors. Government councils include representatives from various federal agencies with regulatory or other interests in the sectors. For example, the chemical sector council includes officials with DHS; the Bureau of Alcohol, Tobacco, Firearms and Explosives; the Department of Commerce; the Department of Justice; the Department of Transportation; and the Environmental Protection Agency because each has some interest in the sector. Some government councils also include officials from state and local governments with jurisdiction over entities in the sector. Private sector council membership varies, reflecting the unique composition of entities within each, but is generally representative of a broad base of owners, operators, and associations—both large and small—within a sector.
For example, members of the drinking water and water treatment systems sector council include national organizations such as the American Water Works Association and the Association of Metropolitan Water Agencies, as well as members of these associations that represent local entities, including Breezy Hill Water and Sewer Company and the City of Portland Bureau of Environmental Services. In addition, the commercial facilities sector council includes more than 200 representatives of individual companies spanning eight subsectors: public assembly facilities; sports leagues; resorts; lodging; outdoor events facilities; entertainment and media; real estate; and retail. This provides the councils with opportunities to build the relationships needed to help ensure critical infrastructure protection efforts are comprehensive. Council activities have varied based on the maturity of the councils. Because some of the councils are newer than others, council meetings have addressed a range of topics, from agreeing on a council charter to developing industry standards and guidelines for business continuity in the event of a disaster or incident. For example, the commercial facilities government council, which formed in 2005, has held meetings to address operational issues—such as agreeing on a charter, learning what issues are important to the sector, learning about risk management tools, and beginning work on the sector-specific plan. Councils that are more mature have been able to move beyond these activities to address more strategic issues. For example, the banking and finance sector council, which formed in 2002, focused its efforts most recently on strengthening the financial system’s ability to continue to function in the event of a disaster or incident (known as “resilience”), identifying a structured and coordinated approach to testing sector resilience, and promoting appropriate industry standards and guidelines for business continuity and resilience. Government and sector council representatives most commonly cited long-standing working relationships between entities within their respective sectors and with the federal agencies that regulate them, the recognition among some sector entities of the need to share infrastructure information with the government and within the sector, and operational support from DHS contractors as factors that facilitated council formation. However, these representatives also most commonly identified several key factors that posed challenges to forming some of the councils, including (1) difficulty establishing partnerships with DHS because of issues including high turnover of its staff and DHS staff who lacked knowledge about the sectors to which they were assigned, (2) hesitancy to provide sensitive information on industry vulnerabilities to the government due to concerns that the information might be publicly disclosed, and (3) lack of long-standing working relationships within the sector or with federal agencies. One of the factors assisting the formation of many of the government and sector councils was the existence of long-standing working relationships within the sectors and with the federal agencies that regulate them. Ten of the sectors had formed either a government council or private sector council that addressed critical infrastructure protection issues prior to publication of an Interim NIPP.
In addition, according to government and sector council representatives, sectors in which the industries have been highly regulated by the federal government—such as the banking and finance sector as well as the commercial nuclear sector—were already used to dealing with the federal government on many issues. Therefore, forming a relationship between the government and the private sector and within the sector was not very difficult. The availability of DHS contractors that provided administrative and other assistance—such as meeting planning, developing materials, recording and producing minutes, delivering progress reports, and supporting development of governance documents—to the government and sector councils was a third facilitating factor, cited by representatives of 13 government and 5 sector councils. For example, representatives of the emergency services sector council and the telecommunications sector council stated that some of the services were very helpful, including guidance the contractors provided on lessons learned from how other sector councils were organized. Council representatives with three government and eight private sector councils reported that they experienced problems forming their councils due to a number of challenges establishing partnerships with DHS. Specifically, these reported challenges included high turnover of staff, poor communications with councils, staff who were unfamiliar with the sector and did not understand how it worked, shifting priorities that affected council activities, and minimal support for council strategies. DHS acknowledged that its reorganization resulted in staff turnover, but according to DHS’s Director of the Infrastructure Programs Office within the Office of Infrastructure Protection, this should not have affected formation because DHS has taken a consistent approach to implementing the partnership model and issuing guidance. However, the director acknowledged that continuing staff turnover could affect the eventual success of the partnerships because they depend on sustained interaction and the development of trust. Continuity of government staff is a key ingredient in developing trusted relationships with the private sector. Representatives with six government and five sector councils noted that the private sector continues to be hesitant to provide sensitive information regarding vulnerabilities to the government or to other sector members due to concerns that, among other things, it might be publicly disclosed. For example, these representatives were concerned that the items discussed, such as information about specific vulnerabilities, might be subject to public disclosure under the Federal Advisory Committee Act and thereby become available to competitors, or might make the council members subject to litigation for failure to publicly disclose any known threats or vulnerabilities. This issue is a long-standing concern and one that contributed to our designating homeland security information sharing as a high-risk issue in January 2005. We reported then that the ability to share security-related information is critical because it can unify the efforts of federal, state, and local government agencies and the private sector in preventing or minimizing terrorist attacks. In April 2006, we reported that DHS continued to face challenges that impeded the private sector’s willingness to share sensitive security information with the government.
In this report, we assessed the status of DHS efforts to implement the protected critical infrastructure information (PCII) program created pursuant to the Homeland Security Act. This program was specifically designed to establish procedures for the receipt, care, and storage of critical infrastructure information voluntarily submitted to the government. We found that while DHS created the program office, structure, and guidance, few private sector entities were using the program. Challenges DHS faced included assuring the private sector that such information would be protected, specifying who would be authorized to have access to the information, and demonstrating to critical infrastructure owners the benefits of sharing the information. We concluded that if DHS were able to surmount these challenges, it and other government users might begin to overcome the lack of trust that critical infrastructure owners have in the government’s ability to use and protect their sensitive information. We recommended that DHS better define its critical infrastructure information needs and better explain how this information will be used. DHS concurred with our recommendations. In September 2006, DHS issued a final rule that established procedures governing the receipt, validation, handling, storage, marking, and use of critical infrastructure information voluntarily submitted to DHS. Four government and four sector council representatives stated that the lack of prior working relationships either within their sector or with the federal government created challenges in forming their respective councils. For example, the public health and health care sector struggled with creating a sector council that represented the interests of the sector because it is composed of thousands of entities that are largely not involved with each other in daily activities. According to the sector-specific agency representative of the Department of Health and Human Services (HHS), historically, there was relatively little collaboration on critical infrastructure protection-related issues among sector members. Despite these reported challenges, the public health and health care sector has been able to form a sector council that is in the early stages of organization. The commercial facilities sector, which also involves varied and often unrelated stakeholders nationwide, similarly reported that the disparities among stakeholders made forming a council challenging. This sector encompasses owners and operators of stadiums, raceways, casinos, and office buildings that have not previously worked together. In addition, the industries composing the commercial facilities sector did not function as a sector prior to the NIPP and did not have any prior association with the federal government. As a result, this sector council has been concentrating its efforts on identifying key stakeholders and agreeing on the scope of the council and its membership. Each of the 17 sectors provided a sector-specific plan to DHS by the end of December 2006, as required by the NIPP, according to DHS Infrastructure Protection officials. Representatives from both the government and sector councils cited factors that facilitated the development of their plans—similar to those that facilitated the formation of their councils—most commonly citing pre-existing plans, historical relationships between the federal government and the private sector or across the private sector, and contractor support.
Sector representatives most commonly reported that key challenges in drafting their plans were (1) the late issuance of a final NIPP, which caused some sectors to delay work on their plans, (2) the changing nature of DHS guidance on how to develop the plans, and (3) the diverse makeup of sector membership. Sector-specific agencies met the deadline to complete their plans by December 2006, according to DHS Infrastructure Protection officials. The NIPP requires these plans to contain definitions of the processes the sectors will use to identify their most critical assets and resources as well as the methodologies they will use to assess risks, but not information on the specific protective measures that will be used by each sector. The NIPP also requires agencies to coordinate the development of plans in collaboration with their security partners represented by government and sector councils and to provide documentation of such collaboration. To date, the level of collaboration between sector-specific agencies and the sector councils in developing the sector-specific plans has varied—ranging from soliciting stakeholder comments on a draft to jointly developing the plan. For example, TSA developed the transportation systems plan and solicited input from private sector stakeholders, while representatives of the energy sector council worked with the Department of Energy to draft the energy plan. Despite these differences, according to DHS Infrastructure Protection officials, all the sectors submitted their plans to DHS by the December 2006 deadline, and DHS and other stakeholders are in the process of reviewing them. Sector representatives from the agriculture and food, banking and finance, chemical, and energy sectors said their sectors had already developed protection plans prior to the interim NIPP published in February 2005 because they had recognized the economic value in planning for an attack. These representatives said they were able to revise their previous plans to serve as the plans called for in the NIPP. For example, the Department of Energy, with input from the sector, had developed a protection plan in anticipation of the Year 2000 computer threat; Department of Energy officials noted that both this plan and the relationships established by its development have been beneficial in developing the protection plan for the energy sector. Similarly, the banking and finance sector council, which worked closely with the Department of the Treasury, has had a critical infrastructure protection plan in place for the banking and finance sector since 2003 and planned to use it, along with other strategies, to fit the format required by the NIPP. Representatives from 13 government and 10 sector councils agreed that having prior relationships—either formally between the federal government and the private sector based on regulatory requirements, or informally within and across industries—facilitated sector-specific plan development. For example, a nuclear sector representative said that its regulator, the Nuclear Regulatory Commission, had already laid out clear guidelines for security and threat response that facilitated developing the sector’s plan. The drinking water and wastewater sector council representative said that its long-standing culture of sharing information and decades of work with the Environmental Protection Agency helped with plan development.
Representatives from seven sector-specific agencies and five sector councils said that assistance from DHS officials or DHS contractors—such as research and drafting—was also a factor that helped with plan development. For example, DHS contract staff assisted the Department of the Interior and DHS’s Chemical and Nuclear Preparedness and Protection Division in drafting the plans for the national monuments and icons and emergency services sectors, respectively. Representatives from the chemical, emergency services, nuclear, and telecommunications sector councils said that contractors hired by DHS were helpful as resources providing research or drafting services. Representatives from six government councils and six sector councils said that the delays in issuing a final NIPP and changing DHS sector-specific plan guidance contributed to delays in developing their sector plans. According to DHS, sectors had begun drafting their sector-specific plans following the issuance of initial plan guidance in April 2004. But a year later, DHS issued revised guidance, based in part on stakeholder comments, with new requirements, including how the sector would collaborate with DHS on risk assessment processes and how it would identify the types of protective measures most applicable to the sector. DHS then issued additional guidance in 2006 requiring that the plans describe how sector-specific agencies are to manage and coordinate their responsibilities. These changes required some sectors—such as dams, emergency services, and information technology—to make significant revisions to their draft plans. Representatives from these sectors expressed frustration with having to spend extra time and effort making changes to the format and content of their plans each time DHS issued new guidance. Therefore, they decided to wait until final guidance was issued based on the final, approved NIPP. In our current work, once we have access to these plans, it will be important to determine how these delays may have affected the quality, completeness, and consistency of the plans. However, some sectors found the changes in the NIPP and plan guidance to be improvements over prior versions that helped them prepare their plans. For example, representatives from the emergency services sector said that guidance became more specific and, thus, more helpful over time, and representatives from the national monuments and icons sector said that the DHS guidance has been useful. Representatives from the information technology, public health, energy, telecommunications, and transportation systems sectors, among others, had commented that the NIPP should emphasize resiliency—meaning how quickly a key asset or resource can resume operations after an incident—rather than protection measures, such as hiring guards, installing gates, and taking similar actions. According to some of these representatives, it is impossible and cost-prohibitive to try to protect every asset from every possible threat. Instead, industries in these sectors prefer to invest resources in protecting the most critical assets with the highest risk of damage or destruction and to plan for recovering quickly from an event. Representatives from the telecommunications sector added that resiliency is especially important for interdependent industries in restoring services such as communications, power, the flow of medical supplies, and transportation as soon as possible.
DHS incorporated the concept of resiliency into the final NIPP to address these concerns and continues to emphasize protection as well. As in establishing their councils, in developing their sector-specific plans, officials from three government councils and five sector councils said that their sectors were made up of a number of disparate stakeholders, making agreement on a plan more difficult. For example, the commercial facilities sector is composed of eight different subsectors of business entities that have historically had few prior working relationships. According to the government council representative, the magnitude of the diversity among these subsectors has slowed the process of developing a plan, so that the sector only had an outline of its plan as of May 2006. Similarly, government and private council representatives of the agriculture and food sector indicated that the diversity of industries included in this sector—such as farms, food-processing plants, and restaurants, each of which has differing infrastructure protection needs—has made developing a plan more difficult. To some extent, all sectors depend on cyber infrastructure to operate, such as using computers to control access at nuclear facilities. It is therefore important that each sector include cybersecurity in its protection plan and programs. As the focal point for critical infrastructure protection, DHS has many cybersecurity-related responsibilities that are called for in law and policy. In 2005 and 2006, we reported that DHS had initiated efforts to address these responsibilities, but that more remained to be done. Specifically, in 2005, we reported that DHS had initiated efforts to fulfill 13 key cybersecurity responsibilities (shown in table 2), but it had not fully addressed any of them. For example, DHS established forums to foster information sharing among federal officials with information security responsibilities and among various law enforcement entities, but had not developed national threat and vulnerability assessments for cybersecurity. Since that time, DHS has made progress on its responsibilities—including the release of its NIPP—but none has been completely addressed. Moreover, in 2006, we reported that DHS had begun a variety of initiatives to fulfill its responsibility to develop an integrated public/private plan for Internet recovery, but that these efforts were not complete or comprehensive. For example, DHS had established working groups to facilitate coordination among government and industry infrastructure officials and fostered exercises in which government and private industry could practice responding to cyber events, but many of its efforts lacked time frames for completion, and the relationships among its various initiatives were not evident. DHS faces a number of challenges that have impeded its ability to fulfill its cybersecurity responsibilities, including establishing effective partnerships with stakeholders, achieving two-way information sharing with stakeholders, demonstrating the value it can provide to private sector infrastructure owners, and reaching consensus on DHS’s role in Internet recovery and on when the department should get involved in responding to an Internet disruption. In addition, we reported that DHS faced a particular challenge in attaining the organizational stability and leadership it needed to gain the trust of other stakeholders in the cybersecurity world—including other government agencies as well as the private sector.
In July 2005, DHS undertook a reorganization that established the position of the Assistant Secretary for Cyber Security and Telecommunications—in part to raise the visibility of cybersecurity issues in the department. In September 2006, DHS announced the appointment of an Assistant Secretary for Cyber Security and Telecommunications. Since the appointment, the Assistant Secretary has led efforts to ensure the inclusion of cybersecurity in each critical infrastructure sector’s sector-specific plan. The Assistant Secretary has set priorities that include (1) preparing for and deterring attacks by encouraging entities, through implementation of the sector-specific plans, to systematically assess their network vulnerabilities and take steps to fix them, (2) responding to cyber attacks of potentially national significance by leveraging operational expertise and building situational awareness and incident response capabilities of the government and private sector, and (3) building awareness about the responsibilities for securing networks across the public and private sectors. In addition to the National Cyber Security Division, the Assistant Secretary is also responsible for the National Communications System, which ensures continuity of communications and priority service for the government under conditions of national emergency, and the Office of Emergency Communications, established pursuant to the fiscal year 2007 DHS appropriations act. This office is responsible for developing a national strategy and for providing technical assistance and outreach to state and local governments to ensure operable and interoperable emergency communications capabilities for first responders. To strengthen DHS’s ability to implement its cybersecurity responsibilities and to resolve underlying challenges, GAO has made about 25 recommendations over the last several years. These recommendations focus on the need to (1) conduct important threat and vulnerability assessments, (2) develop a strategic analysis and warning capability for identifying potential cyber attacks, (3) protect infrastructure control systems, (4) enhance public/private information sharing, and (5) facilitate recovery planning, including recovery of the Internet in case of a major disruption. DHS concurred with most of the recommendations addressed to it. Together, the recommendations provide a high-level road map for DHS to use in working to improve our nation’s cybersecurity posture. While DHS has made progress in addressing some of these recommendations, much remains to be done. Until it addresses these recommendations, DHS will have difficulty achieving results in its role as the federal focal point for the cybersecurity of critical infrastructures—including the Internet. Table 3 shows our detailed recommendations. Critical infrastructure protection is vital to our national security, economic vitality, and public health. Yet a decade after the nation began focusing on improving its ability to protect key assets and resources, progress has been mixed, as Katrina demonstrated. It showed that significant damage to critical infrastructure and key resources could disrupt the functioning of businesses and government alike, underscoring the need for the private and public sectors to establish stronger partnerships and working relationships in order to take a coordinated approach to critical infrastructure protection.
DHS has moved forward by issuing the National Infrastructure Protection Plan as a guiding framework for a national effort and is providing contractor, technical, and analytical support to sectors, among other things, to encourage progress. Likewise, some sectors—those that are more mature, have been regulated, are more homogeneous, or had economic incentives, such as the threat of Y2K—came together to collaborate, work effectively, and develop protection strategies, even before DHS established the national plan. But other sectors—those that have just been created, that have not worked with federal agencies in the past, that are not regulated but must volunteer to participate in the planning process, and that are large and diverse—face bigger challenges in achieving this coordination and rate of progress. Despite these challenges, each sector submitted a protection plan to DHS. However, DHS has yet to release them. Given the wide variance in the maturity of the sectors, the quality, comprehensiveness, completeness, and consistency of the plans remain to be seen. In addition, it is important to realize that in some cases, the sector-specific plan is really more of a first step—a “plan to plan.” In other words, the sectors were only to describe how they expect to identify and prioritize critical assets, how they expect to assess their risks, vulnerabilities, threats, and consequences, and how they will approach developing protection programs, not detail how they will implement them. Thus, fulfilling its statutory responsibilities for ensuring the nation’s critical infrastructure is protected will be a long-term commitment for DHS. This makes it even more important that DHS address challenges that our work has identified over the years and for which we have made a number of recommendations yet to be implemented, including our body of work assessing the protection of cyber infrastructure. These challenges include building trusted working relationships and better collaborating with states and localities, given that the infrastructure is in their communities, as well as with the private sector, given that it owns most of the assets and resources. Challenges also include providing the environment and incentives for the private sector to voluntarily share information with DHS on gaps in vulnerabilities and protective measures, information that the agency must have to be able to ensure assets and resources critical to the nation are protected. Finally, challenges include providing organizational stability and leadership, addressing employee turnover and gaps in expertise, and enhancing agency capabilities, such as for providing analysis and warning and identifying and assessing threats and vulnerabilities. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at any time. For further information on this testimony, please contact Eileen Larence at (202) 512-8777 or by e-mail at [email protected], or regarding cyber-critical infrastructure protection issues, David Powner at (202) 512-9286 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Susan Quinlan, Assistant Director; Michael Gilmore; Landis Lindsey; and Edith Sohna. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As Hurricane Katrina so forcefully demonstrated, the nation's critical infrastructures--both physical and cyber--have been vulnerable to a wide variety of threats. Because about 85 percent of the nation's critical infrastructure is owned by the private sector, it is vital that the public and private sectors work together to protect these assets. The Department of Homeland Security (DHS) is responsible for coordinating a national protection strategy, including formation of government and private sector councils as a collaborating tool. The councils, among other things, are to identify their most critical assets, assess the risks they face, and identify protective measures, in sector-specific plans that comply with DHS's National Infrastructure Protection Plan (NIPP). This testimony is based primarily on GAO's October 2006 sector council report and a body of work on cyber critical infrastructure protection. Specifically, it addresses (1) the extent to which these councils have been established, (2) key facilitating factors and challenges affecting the formation of the councils, (3) key facilitating factors and challenges encountered in developing sector plans, and (4) the status of DHS's efforts to fulfill key cybersecurity responsibilities. GAO has made previous recommendations, particularly in the area of cybersecurity, that have not been fully implemented. Continued monitoring will determine whether further recommendations are warranted. To better coordinate infrastructure protection efforts as called for in the NIPP, all 17 critical infrastructure sectors have established their respective government councils, and nearly all sectors have initiated their voluntary private sector councils. But council progress has varied with the councils' characteristics and levels of maturity. For example, the public health and healthcare sector is quite diverse, and collaboration has been difficult as a result; on the other hand, the nuclear sector is quite homogeneous and has a long history of collaboration. As a result, council activities have ranged from getting organized to refining infrastructure protection strategies. Ten sectors, such as banking and finance, had formed councils prior to development of the NIPP and had collaborated on plans for economic reasons, while others had formed councils more recently. Consequently, the more mature councils could focus on strategic issues, such as recovering after disasters, while the newer councils were focusing on getting organized. Council members reported mixed views on what factors facilitated or challenged their actions. For example, long-standing working relationships with regulatory agencies and within sectors were frequently cited as the most helpful factor. Challenges most frequently cited included the lack of an effective relationship with DHS as well as private sector hesitancy to share information on vulnerabilities with the government or within the sector for fear the information would be released and open to competitors. GAO's past work has shown that a lack of trust in DHS and fear that sensitive information would be released are recurring barriers to the private sector's sharing information with the federal government, and GAO has made recommendations to help address these barriers.
DHS has generally concurred with these recommendations and is in the process of implementing them. All the sectors met the December 2006 deadline to submit their sector-specific plans to DHS, although the level of collaboration between the sector and government councils on the plans, which the NIPP recognizes as critical to establishing relationships between the government and private sectors, varied by sector. Issuing the NIPP and completing sector plans are only first steps to ensure critical infrastructure is protected. Moving forward to implement sector plans and make progress will require continued commitment and oversight. While DHS has initiatives under way to fulfill its many cybersecurity responsibilities, major tasks remain to be done. These include assessing and reducing cyber threats and vulnerabilities and coordinating incident response and recovery planning efforts. Effective leadership by the Assistant Secretary for Cyber Security and Telecommunications is essential to DHS fulfilling its key responsibilities, addressing the challenges, and implementing recommendations. |
Congress and the President first enacted a statutory limit on federal debt during World War I to eliminate the need for Congress to approve each new debt issuance and provide Treasury with greater discretion over how it finances the government’s day-to-day borrowing needs. With the Public Debt Act of 1941, Congress and the President set an overall limit of $65 billion on Treasury debt obligations that could be outstanding at any one time and since then have enacted a number of debt limit increases. Most recently, Congress and the President enacted the BCA, which established a process that resulted in debt limit increases in three increments—$400 billion in August 2011, $500 billion in September 2011, and $1,200 billion in January 2012—for a total increase of $2.1 trillion, raising the debt limit to $16.394 trillion. As shown in figure 1, the amount of reported outstanding debt subject to the limit has increased from $5,137 billion on September 30, 1996, to $15,730 billion on May 31, 2012. Debt subject to the limit includes both debt held by the public and intragovernmental debt holdings. Debt held by the public consists primarily of marketable Treasury securities, such as bonds, notes, bills, cash management bills (CM bills), and Treasury Inflation-Protected Securities (TIPS), which are sold through auctions and can be resold by whoever owns them. Treasury also issues a smaller amount of nonmarketable securities, such as savings securities and special securities for state and local governments. Debt held by the public primarily represents the amount the federal government has borrowed to finance cumulative cash deficits. Intragovernmental debt holdings represent balances of Treasury securities held in federal government accounts, such as the Social Security and Medicare trust funds. Intragovernmental debt increases when these accounts run a surplus or accrue interest on existing securities. The Secretary of the Treasury has several responsibilities related to the federal government’s financial management operations. These include paying the government’s obligations and investing the excess annual receipts (including interest earnings) over disbursements of federal government accounts with investment authority. To meet these responsibilities, the Secretary of the Treasury is authorized by law to issue the necessary securities to federal government accounts with investment authority for investment purposes and to borrow the necessary funds from the public to pay government obligations. Under normal conditions, Treasury is notified by the appropriate agency (such as the Office of Personnel Management for the Civil Service Retirement and Disability Fund (CSRDF)) of the amount that should be invested on its behalf, and Treasury then makes the investment. In some cases, the actual security that Treasury should purchase is also specified. When a federal government account with investment authority needs to make disbursements, Treasury is normally notified of the amount of securities that need to be redeemed. In some cases, Treasury is also notified to redeem specific securities. The Treasury securities issued to federal government accounts with investment authority count against the debt limit. If these accounts’ receipts are not invested, the amount of debt subject to the limit does not increase. The majority of securities held by federal government accounts are Government Account Series (GAS) securities.
GAS securities consist of par value securities and market-based securities, with terms ranging from on demand out to 30 years. Par value securities are issued and redeemed at par (100 percent of face value), regardless of current market conditions. Market-based securities, however, can be issued at a premium or discount and are redeemed at par value on the maturity date or at market value if redeemed before the maturity date. Under normal circumstances, the debt limit is not an impediment to carrying out these investment responsibilities. However, when federal debt is near or at the debt limit, increasing the debt limit frequently involves lengthy debate by Congress. When delays occur, Treasury has to depart from normal cash and debt management operations to avoid exceeding the debt limit. In 1986 and 1987, after Treasury’s experiences during prior debt limit crises, Congress authorized the Secretary of the Treasury to use the CSRDF and the Government Securities Investment Fund of the Federal Employees’ Retirement System (G-Fund) to help Treasury manage federal debt when delays in raising the debt limit occur. Treasury has also taken other actions in the past to manage federal debt during such delays. Table 1 provides an overview of each action. We have previously reported on aspects of Treasury’s actions during the 2003 and 2002 debt issuance suspension periods (DISPs) and the 1995-1996 and other debt limit crises. In January 2011, Treasury determined that the debt limit of $14.294 trillion set in February 2010 would likely be reached by May 16, 2011. In May 2011, Treasury determined that it was necessary to use extraordinary actions to manage federal debt during the delay in raising the debt limit, which lasted through August 1, 2011. Treasury again determined that extraordinary actions were needed to manage federal debt in January 2012. Table 2 shows the significant events from January 6, 2011, through January 30, 2012, that relate to the debt limit. The extraordinary actions Treasury took during 2011 and January 2012 to manage federal debt when delays in raising the debt limit occurred were consistent with relevant authorizing legislation and regulations. These actions related to State and Local Government Series (SLGS) securities and to the CSRDF, the Postal Service Retiree Health Benefits Fund (Postal Benefits Fund), the G-Fund, and the Exchange Stabilization Fund (ESF). For other major federal government accounts with investment authority, Treasury used its normal investment and redemption policies and procedures to handle receipts and maturing investments and to redeem Treasury securities. Treasury took the first extraordinary action on May 6, 2011, by suspending new issuances of SLGS securities. Prior to the suspension, the reported amount of SLGS securities outstanding was about $177.3 billion. This level declined to a reported amount of about $146.5 billion by August 1, 2011. On August 2, 2011, Treasury resumed the sale of SLGS securities. Treasury also converted SLGS demand deposit securities outstanding on May 6, 2011, to special 90-day certificates of indebtedness. On August 2, 2011, Treasury converted the special 90-day certificates of indebtedness back to demand deposits, including accrued interest. Treasury maintained spreadsheets to track the certificates of indebtedness and the daily interest accruals. Treasury’s actions related to the SLGS demand deposit securities were in accordance with 31 C.F.R.
Part 344.7(b), which authorizes the Secretary of the Treasury to invest any unredeemed SLGS demand deposit securities in special 90-day certificates of indebtedness. Treasury did not use its authority to suspend new issuances of or convert SLGS securities during January 2012. The Secretary of the Treasury notified Congress that he had determined that a DISP existed for the CSRDF on May 16, 2011, after concluding that he would not be able to issue debt securities without exceeding the debt limit. On that day, Treasury redeemed certain investments held by the CSRDF earlier than normal and began suspending new investments of CSRDF receipts. Treasury did not use its authority to redeem or suspend investments of the CSRDF during January 2012. Subsection 8348(k) of title 5, United States Code, authorizes the Secretary of the Treasury to redeem securities or other invested assets of the CSRDF before maturity to prevent the amount of debt from exceeding the debt limit. The statute does not require that early redemptions be made only for the purpose of making CSRDF payments. Further, the statute permits early redemptions even if the CSRDF has adequate cash balances to cover such payments. However, the statute provides that the amount redeemed may not exceed the total amount of the payments authorized to be made from the CSRDF during the DISP. Treasury decided to redeem securities held by the CSRDF earlier than normal in accordance with subsection 8348(k)(1) of title 5, United States Code. To take such action, the Secretary of the Treasury must determine that a DISP exists and the length of the DISP. The statute authorizing the DISP and its legislative history are silent as to how to determine the length of a DISP. On May 16, 2011, the Secretary of the Treasury notified Congress that a DISP, as it relates to the CSRDF, would begin that day and would last through August 2, 2011. On May 16, 2011, Treasury redeemed about $17.1 billion of securities held by the CSRDF before maturity using its authority under subsection 8348(k)(1) of title 5, United States Code. The $17.1 billion redemption amount was determined based on (1) the length of the DISP (May 16, 2011, through August 2, 2011) and (2) the estimated monthly CSRDF benefit payments and expenses that would occur during that time. These were appropriate factors to use in determining the amount of Treasury securities held by the CSRDF to redeem early. From May 16, 2011, through July 31, 2011, about $11.8 billion of actual benefit payments and expenses occurred, leaving about $5.3 billion of uninvested principal from the $17.1 billion that had been redeemed early. On August 1, 2011, benefit payments were about $5.7 billion. As such, Treasury redeemed only the approximate $0.4 billion difference between the $5.3 billion uninvested principal amount and the actual amount of benefit payments to be made. Subsection 8348(j)(1) of title 5, United States Code, authorizes the Secretary of the Treasury to suspend additional investment of amounts in the CSRDF if the investment cannot be made without exceeding the debt limit. From May 16, 2011, through August 1, 2011, Treasury suspended about $86 billion of investments to the CSRDF. Of this amount, $63.1 billion related to securities that matured on June 30, 2011, and were to be reinvested; $17.4 billion was from the semiannual interest payment on June 30, 2011; and $5.5 billion represented cash receipts.
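To make the drawdown arithmetic above concrete, the following minimal sketch reproduces it in Python using only figures from this report; the variable names and step-by-step structure are illustrative assumptions, not Treasury's actual accounting process.

```python
# Illustrative sketch of the CSRDF early-redemption arithmetic described
# above. Figures are from this report ($ billions); names are hypothetical.

redeemed_early = 17.1      # securities redeemed early on May 16, 2011
paid_through_july = 11.8   # actual benefit payments and expenses, May 16-July 31

uninvested = redeemed_early - paid_through_july
print(f"Uninvested principal remaining: ${uninvested:.1f} billion")  # ~5.3

# August 1, 2011, benefit payments were about $5.7 billion, so only the
# difference needed to be redeemed early.
august_payments = 5.7
additional_redemption = august_payments - uninvested
print(f"Additional early redemption: ${additional_redemption:.1f} billion")  # ~0.4
```

Note that the total redeemed early ($17.1 billion plus $0.4 billion) matches the actual payments made during the DISP ($11.8 billion plus $5.7 billion), consistent with the statutory cap that early redemptions not exceed the payments authorized during the DISP.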
Subsection 8909a(c) of title 5, United States Code, requires investments to be made for the Postal Benefits Fund in the same manner as investments for the CSRDF under section 8348. This includes the provisions authorizing the early redemption and suspension of investments. As discussed above for the CSRDF, the amount redeemed earlier than normal may not exceed the total amount of the payments authorized to be made during the DISP. Subsection 8906(g)(2)(A) of title 5, United States Code, authorizes payments to be made from the Postal Benefits Fund beginning after September 30, 2016. As such, Treasury did not redeem investments of the Postal Benefits Fund earlier than normal during 2011 and January 2012. On June 30, 2011, Treasury suspended about $9.5 billion of new investments to the Postal Benefits Fund. Of this amount, $8.7 billion related to securities that matured on June 30, 2011, and were to be reinvested, and $0.8 billion was from the semiannual interest payment on June 30, 2011. Treasury did not use its authority to suspend investments of the Postal Benefits Fund during January 2012. Subsection 8438(g)(1) of title 5, United States Code, authorizes the Secretary of the Treasury to suspend the issuance of additional amounts of investments to the G-Fund if the issuance cannot be made without causing the debt limit to be exceeded. On most days from May 16, 2011, through August 1, 2011, and each day from January 17, 2012, through January 27, 2012, Treasury did not fully invest the holdings of the G-Fund. Since the G-Fund invests in one-day securities that are redeemed and reinvested each business day, the amount of uninvested principal varied most days depending on the federal government’s outstanding debt. Although Treasury can accurately predict the outcome of some events that affect the outstanding debt, it cannot precisely determine the outcome of others until they occur. For example, the amount of Treasury securities that Treasury will issue to the public from an auction can be determined some days in advance because Treasury can control the amount that will be issued. On the other hand, the amount of savings bonds that will be issued and redeemed and the amount of Treasury securities that will be issued to, or redeemed by, various federal government accounts with investment authority are difficult to precisely predict. Because of these difficulties, Treasury needed to ensure that the normal investment and redemption activities associated with Treasury securities did not cause the debt limit to be exceeded while also maintaining normal investment and redemption policies for the majority of these accounts. To accomplish these objectives, for each day of the above-noted periods, Treasury calculated the amount of debt subject to the limit, excluding the receipts that the G-Fund would normally invest; determined the amount of G-Fund receipts that could safely be invested without exceeding the debt limit and invested this amount in Treasury securities; and suspended investment, when necessary, of the G-Fund’s remaining receipts. As of August 1, 2011, the business day prior to the debt limit increase, the G-Fund had approximately $137.5 billion available for suspension, with the entire amount suspended as of that date. As of January 27, 2012, the business day prior to the debt limit increase, the G-Fund had approximately $147.6 billion available for suspension, with about $36.9 billion suspended as of that date. 
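The daily G-Fund procedure described above is, in essence, a headroom calculation. The sketch below illustrates that logic; the function, its parameters, and the example figures other than the February 2010 debt limit are hypothetical and do not represent Treasury's actual systems.

```python
# Minimal sketch of the daily G-Fund investment/suspension logic described
# above. All names and example figures except the debt limit are hypothetical.

def invest_g_fund(debt_limit: float, debt_excluding_g_fund: float,
                  g_fund_receipts: float) -> tuple[float, float]:
    """Invest only as much of the G-Fund's daily receipts as fits under
    the debt limit; return the (invested, suspended) amounts."""
    headroom = max(debt_limit - debt_excluding_g_fund, 0.0)
    invested = min(g_fund_receipts, headroom)
    return invested, g_fund_receipts - invested

# Example with figures in $ billions.
invested, suspended = invest_g_fund(
    debt_limit=14_294.0,             # limit set in February 2010
    debt_excluding_g_fund=14_250.0,  # hypothetical debt subject to the limit
    g_fund_receipts=137.5,           # one-day securities to reinvest that day
)
print(f"Invested ${invested:.1f}B, suspended ${suspended:.1f}B")
# Invested $44.0B, suspended $93.5B
```

Because the G-Fund's securities mature and are reinvested each business day, this calculation repeats daily, which is why the suspended amount fluctuated with the federal government's outstanding debt.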
The purpose of the ESF is to help provide a stable system of monetary exchange rates. The law establishing the ESF authorizes the Secretary of the Treasury to invest the ESF’s balances not needed for program purposes in Treasury securities. Section 5302 of title 31, United States Code, authorizes the Secretary of the Treasury to determine when, and if, excess funds of the ESF will be invested. On several occasions from July 15, 2011, through August 1, 2011, and each day from January 4, 2012, through January 27, 2012, Treasury did not fully invest the dollar-denominated portion of the ESF in Treasury securities. Since the ESF invests the dollar-denominated portion of the fund in one-day Treasury securities that are redeemed and reinvested each business day, the amount of uninvested principal varied on several days, depending on the federal government’s outstanding debt. For each day, Treasury determined the amount of funds that the ESF would be allowed to invest in Treasury securities and, when necessary, suspended some investments of the ESF receipts and maturing securities that would have caused the debt limit to be exceeded. The process discussed above for the G-Fund was also used for the ESF. During the 2011 period, the ESF had approximately $22.8 billion available for suspension, with about $6.9 billion of this amount suspended as of August 1, 2011, the business day prior to the debt limit increase. During January 2012, the ESF had approximately $22.7 billion available for suspension, with the entire amount suspended as of January 17, 2012. The entire amount continued to be suspended each day through January 27, 2012, the business day prior to the debt limit increase. As a result of an error in calculating debt subject to the limit from November 2, 2011, through December 29, 2011, Treasury suspended an incorrect amount from the ESF from January 4, 2012, through January 10, 2012. A programming change to Treasury’s debt accounting system caused an incorrect calculation of unamortized discounts on Treasury bills to be subtracted from total debt outstanding in calculating debt subject to the limit. Treasury identified the error during a contingency operation on December 29, 2011. At that time, the cumulative effect of the error was $181 million. The error in the program was corrected immediately; however, the adjustment to correct the $181 million was not recorded until January 11, 2012. Debt subject to the limit was sufficiently below the debt limit from November 2, 2011, through January 3, 2012, such that even if the error were taken into account, debt subject to the limit would still have been below the debt limit. Treasury began using the ESF to manage federal debt during the delay in raising the debt limit on January 4, 2012. To determine whether Treasury would have exceeded the debt limit from January 4, 2012, through January 10, 2012, absent this error, we reviewed the invested balances of the ESF during this period. Based on our review, we found that the ESF had sufficient invested balances that could have been used to manage federal debt during the delay. For example, as of January 10, 2012, cumulative investments totaling $12.306 billion had been suspended from the ESF. If the error had not occurred, cumulative investments totaling $12.487 billion would have been suspended from the ESF, $181 million more than what was actually suspended, but well below the approximate $22.7 billion available for suspension.
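A simple check using the figures above, sketched below with variable names of our own choosing, confirms that the corrected suspension amount would still have fit within the ESF's capacity.

```python
# Checking, with figures from this report ($ billions), that the ESF had
# sufficient capacity even absent the $181 million calculation error.
suspended_actual = 12.306  # cumulative suspensions as of Jan. 10, 2012
error = 0.181              # cumulative effect of the calculation error
available = 22.7           # approximate ESF amount available for suspension

suspended_correct = suspended_actual + error  # 12.487
print(f"Corrected suspensions: ${suspended_correct:.3f} billion")
print(f"Unused ESF capacity:   ${available - suspended_correct:.3f} billion")  # ~10.2
```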
Therefore, Treasury would have been able to suspend additional investments from the ESF to remain under the debt limit. As a result of overinvesting the ESF from January 4, 2012, through January 10, 2012, Treasury also overpaid interest to the ESF during this period. Treasury corrected the interest paid by making an adjustment of $402.63 on January 11, 2012. We analyzed major federal government accounts with investment authority for which Treasury stated it had followed its normal investment and redemption policies and procedures during the periods from May 16, 2011, through August 1, 2011, and from January 4, 2012, through January 27, 2012, to manage federal debt when delays in raising the debt limit occurred. Our analysis was intended to verify that Treasury’s actions to manage federal debt during such delays did not involve federal government accounts that Treasury is not authorized to use in such situations. We found that for all the accounts we reviewed, Treasury used its normal investment and redemption policies and procedures to handle receipts and maturing investments and to redeem Treasury securities. Table 3 lists the federal government accounts with investment authority included in our analysis. In accordance with relevant legislation and consistent with the timing of the debt limit increases authorized by the BCA, Treasury restored the uninvested principal amounts to the CSRDF, Postal Benefits Fund, and G-Fund and invested the uninvested principal of the ESF, together totaling approximately $299.5 billion. This amount consisted of (1) $239.9 billion of uninvested principal relating to the period from May 16, 2011, through August 1, 2011, and (2) $59.6 billion relating to the period in January 2012, in which Treasury took extraordinary actions to manage federal debt when delays in raising the debt limit occurred. In accordance with legislation, Treasury also restored interest losses totaling approximately $933.8 million to the CSRDF, Postal Benefits Fund, and G-Fund. This amount consisted of (1) $916.9 million relating to the period from May 16, 2011, through August 1, 2011, and (2) $16.9 million relating to the period in January 2012. Treasury lacks legislative authority under section 5302 of title 31, United States Code, to restore interest losses to the ESF. Table 4 summarizes the amounts of principal and interest restored. Subsections 8348(j)(3) and (4) of title 5, United States Code, require Treasury to immediately restore, to the maximum extent practicable, the CSRDF’s Treasury holdings to the proper balances when a DISP ends and to restore lost interest on the next normal interest payment date. Treasury is required by subsection 8909a(c) of title 5, United States Code, to follow these same procedures for the Postal Benefits Fund. Consequently, Treasury took the following actions with respect to these two funds once the DISP for 2011 had ended: Treasury invested about $86 billion of uninvested principal to the CSRDF on August 2, 2011, which equaled the amount of new investments suspended during 2011. All of the $17.1 billion of Treasury securities held by the CSRDF that Treasury redeemed earlier than normal had been used for CSRDF benefit payments and expenses during the DISP. As such, there was no remaining amount required to be invested. Treasury invested about $9.5 billion of uninvested principal to the Postal Benefits Fund on August 2, 2011, which equaled the amount of new investments suspended during 2011.
On December 30, 2011, Treasury paid the CSRDF about $516.9 million and the Postal Benefits Fund about $21.5 million to restore interest losses incurred because of the actions Treasury had taken during the DISP. Because December 30, 2011, was the first semiannual interest payment date since the DISP ended, this was the proper restoration date according to the statute authorizing the restoration. We verified that subsequent to the initiation and recording of these transactions, the CSRDF’s and Postal Benefits Fund’s holdings were, in effect, the same as they would have been had the DISP not occurred. On August 1, 2011, and January 27, 2012, the last business days before the debt limit was raised, the G-Fund had uninvested principal of about $137.5 billion and $36.9 billion, respectively. On August 2, 2011, and January 30, 2012, Treasury invested all uninvested principal for the G-Fund, as required by subsection 8438(g)(3) of title 5, United States Code. Treasury is also required by subsection 8438(g)(4) of title 5, United States Code, to make the G-Fund whole by restoring any losses once the suspension of debt has ended. During May through August 2011 and January 2012, interest losses to the G-Fund were about $378.5 million and $16.9 million, respectively, because its funds were not fully invested. On August 3, 2011, and January 30, 2012, Treasury fully restored the lost interest on the G-Fund’s uninvested funds. We verified that subsequent to the initiation and recording of these transactions, the G-Fund’s holdings were, in effect, the same as they would have been had the suspensions of debt not occurred. On August 1, 2011, and January 27, 2012, the last business days before the debt limit was raised, the ESF had uninvested principal of about $6.9 billion and $22.7 billion, respectively. On August 2, 2011, and January 30, 2012, Treasury invested all uninvested principal for the ESF. During May through August 2011 and January 2012, interest losses to the ESF were $55,630 and $284,691, respectively, because its funds were not fully invested. Treasury has the authority in section 5302 of title 31, United States Code, to invest principal of the ESF. However, the Secretary of the Treasury lacks legislative authority to restore any interest losses relating to the ESF incurred as a result of authorized actions taken by Treasury to manage federal debt when delays in raising the debt limit occur. We verified that Treasury properly invested the ESF’s uninvested principal and, in accordance with the law, did not restore interest losses.
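The restored amounts reported above and summarized in table 4 can be cross-checked by summing the component figures; the sketch below uses only numbers from this report, with illustrative variable names.

```python
# Cross-checking the restoration totals reported above (principal in
# $ billions, interest in $ millions; all figures from this report).

principal_2011 = {"CSRDF": 86.0, "Postal Benefits Fund": 9.5,
                  "G-Fund": 137.5, "ESF": 6.9}   # May 16-Aug. 1, 2011 period
principal_2012 = {"G-Fund": 36.9, "ESF": 22.7}   # January 2012 period
total_principal = sum(principal_2011.values()) + sum(principal_2012.values())
print(f"Principal restored or invested: ${total_principal:.1f} billion")  # 299.5

interest_2011 = {"CSRDF": 516.9, "Postal Benefits Fund": 21.5, "G-Fund": 378.5}
interest_2012 = {"G-Fund": 16.9}  # ESF interest losses could not be restored
total_interest = sum(interest_2011.values()) + sum(interest_2012.values())
print(f"Interest restored: ${total_interest:.1f} million")  # 933.8
```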
In addition, managing federal debt during such delays affected Treasury's normal operations in 2011 and January 2012. Our analysis indicates that delays in raising the debt limit in 2011 led to increased borrowing costs on certain securities. We measured changes in Treasury's borrowing costs when delays in raising the debt limit occurred in 2011 using a multivariate regression analysis of the daily yield spread—yields on private securities minus yields on Treasury securities of comparable maturities—between the debt limit event period and the previous 3 months, or pre-event period. Rates for Treasury and other securities fluctuate from day to day in response to changes in the broader economy. Focusing on a yield spread rather than changes in individual interest rates facilitated the measurement of changes in the relative risk of Treasury securities and the identification of potential risk premiums (which would be represented by a decrease in the yield spread). We also controlled for other factors that could affect the yield spread, such as the Federal Reserve's holdings of Treasury securities and economic uncertainty. (See app. II for more details on how we estimated increased borrowing costs.) The results of our multivariate regression analysis describe the change in yield spreads attributable to delays in raising the debt limit. The estimated increase or decrease in the yield spreads between the pre-event and event periods is shown in figure 2. A decrease in the yield spread indicates that the market perceives the risk of Treasury securities to be closer to that of private securities, increasing the cost to Treasury. Conversely, an increase in the yield spread indicates that the market perceives the risk of Treasury securities to have decreased relative to that of private securities, making the securities less costly to Treasury. We found that the 2011 debt limit event led to a premium on Treasury securities with maturities of 2 years or more, while Treasury securities with shorter maturities either experienced no change or became slightly less costly relative to private securities. Applying the relevant increase or decrease in the yield spread shown in figure 2 to all Treasury bills, notes, bonds, CM bills, and TIPS issued during the 2011 debt limit event period, we estimated that borrowing costs increased by about $1.3 billion in fiscal year 2011. Many of the Treasury securities issued during the 2011 debt limit event period will remain outstanding for years to come. Accordingly, the multiyear increase in borrowing costs arising from the event is greater than the additional borrowing costs during fiscal year 2011 alone. There are limitations to using a multivariate regression to measure changes in Treasury's borrowing costs attributable to delays in raising the debt limit. Most important, many economic and financial developments besides the uncertainty in the Treasury market arising from delays in raising the debt limit likely affected yield spreads during this period. While we controlled for changes in Federal Reserve holdings of Treasury securities, stock market uncertainty, and economic activity, we cannot capture every development affecting yield spreads, such as policy changes that are not easily quantifiable. Debt and cash management required more time and Treasury resources as delays in raising the debt limit occurred in 2011 and January 2012.
For example, Treasury staff (1) forecasted and monitored cash and borrowing needs with increasing frequency and in increasing detail and (2) developed, reviewed, and tested contingency plans and alternative scenarios for the possible implementation of extraordinary actions. According to Treasury officials, these activities diverted time and Treasury resources from other cash and debt management responsibilities. We reviewed estimates provided by the Office of Fiscal Projections (OFP) and the Bureau of the Public Debt (BPD), the entities primarily affected by the delays, which indicated that these entities' personnel devoted as much as several hundred hours per week to managing federal debt when delays in raising the debt limit occurred in 2011 and January 2012. According to Treasury officials, for 2011, Treasury's operational focus on the debt limit began at least 6 months before the debt limit was expected to be reached and increased as debt neared the limit. Treasury's OFP staff developed estimates under multiple scenarios of when debt might reach the debt limit. As federal debt neared the debt limit, these estimates were developed weekly, then daily, and finally multiple times a day. According to Treasury officials, preparing these estimates, informing departmental officials, and other preparatory tasks were a critical focus of OFP's staff. To manage federal debt when delays in raising the debt limit occurred in 2011, Treasury officials estimated that OFP spent almost 15 staff hours per business day performing these tasks. In addition, Treasury officials estimated that OFP expended about 200 staff hours in total to prepare for and manage the extraordinary actions taken in January 2012. BPD—the bureau within Treasury that is responsible for implementing the extraordinary actions and for the accounting associated with those transactions—also dedicated extensive resources to operations related to the debt limit. BPD estimated that managing federal debt when delays in raising the debt limit occurred in 2011 and January 2012 resulted in almost 5,750 hours of work, including over 400 hours of overtime and compensatory time. This included more than 1,200 hours in the weeks prior to the use of extraordinary actions for meetings, preparation of parallel accounts and spreadsheets to use in tracking uninvested principal and interest losses, tests of the accounting system, and training staff. The majority of time was spent implementing the extraordinary actions. BPD estimated that it spent almost 63 staff hours per business day on debt limit–related activities from May 16, 2011, through August 1, 2011, and almost 31 staff hours per business day from January 4, 2012, through January 27, 2012. After the debt limit was increased, BPD estimated that it spent over 500 hours on activities such as restoring uninvested funds and preparing reports. Treasury officials said that the increased focus on debt limit–related operations in the months and weeks approaching the debt limit diverted time and attention from other cash and debt management tasks that could improve Treasury operations. For example, according to Treasury officials, OFP delayed participation in federal cash expenditure process modernization efforts and the development of a new fiscal forecasting system. Similarly, BPD officials said that they spent less time updating procedures for issuing debt to the public and modernizing debt accounting systems.
According to these officials, these activities help Treasury more accurately project future borrowing needs and perform debt management activities more effectively. More accurately projecting future borrowing needs helps Treasury avoid (1) borrowing more than is needed to fund the government's immediate needs, which results in increased interest costs, and (2) borrowing less than is sufficient to maintain Treasury's operating cash balance at a minimum level through regularly scheduled issuances of marketable Treasury securities, which may require Treasury to issue CM bills with little advance notice to the market, resulting in potentially higher interest costs. Treasury officials also stated that they spent less time on staff development and program oversight activities to perform additional tasks needed to manage federal debt when delays in raising the debt limit occurred. The extraordinary actions Treasury took during 2011 and January 2012 to manage federal debt when delays in raising the debt limit occurred were consistent with relevant authorizing legislation and regulations. However, delays in raising the debt limit can create uncertainty in the Treasury market and lead to higher borrowing costs. We estimated that delays in raising the debt limit in 2011 led to an increase in Treasury's borrowing costs of about $1.3 billion in fiscal year 2011. However, this does not account for the multiyear effects on increased costs for Treasury securities that will remain outstanding after fiscal year 2011. Further, managing federal debt as such delays occurred was complex, time-consuming, and technically challenging. According to Treasury officials, these events diverted Treasury's staff away from other important cash and debt management responsibilities as well as staff development and program oversight activities. Congress usually votes on increasing the debt limit after fiscal policy decisions affecting federal borrowing have begun to take effect. This approach to raising the debt limit does not facilitate debate over specific tax or spending proposals and their effect on debt. In February 2011, we reported, and continue to believe, that Congress should consider ways to better link decisions about the debt limit with decisions about spending and revenue to avoid potential disruptions to the Treasury market and to help inform the fiscal policy debate in a timely way. We requested comments on a draft of this report from the Secretary of the Treasury. In providing oral comments on the draft, Treasury broadly agreed with the draft's conclusions, expressed appreciation for our efforts to estimate the monetary impact of delays in raising the debt limit on Treasury's borrowing costs, and also commented on the broader impact of delays in raising the debt limit on the economy, which was beyond the scope of our review. Treasury also provided technical comments, which we incorporated as appropriate. We will send copies of this report to interested congressional committees, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gary T. Engel at (202) 512-3406 or [email protected], Susan J. Irving at (202) 512-6806 or [email protected], or Thomas J. McCool at (202) 512-2642 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix III. With regard to actions taken by the Department of the Treasury (Treasury) during 2011 and January 2012 to manage federal debt when delays in raising the debt limit occurred, our objectives were to (1) provide a chronology of the significant events, (2) analyze whether actions taken by Treasury were consistent with legal authorities provided to manage federal debt during such delays, (3) assess the extent to which Treasury restored uninvested principal and interest losses to federal government accounts in accordance with relevant legislation, and (4) analyze the effect that delays in raising the debt limit had on Treasury’s borrowing costs and operations. To address the first objective, we reviewed congressional actions increasing the debt limit and Treasury correspondence, announcements, and documentation of the extraordinary actions taken. We reviewed letters sent by the Secretary of the Treasury to Congress requesting debt limit increases and discussing when Treasury’s borrowing authority would be exhausted, and Treasury announcements of specific extraordinary actions. For each business day from May 16, 2011, through August 2, 2011, and January 4, 2012, through January 30, 2012, we reviewed correspondence from Treasury’s Office of Fiscal Projections (OFP) to Treasury’s Bureau of the Public Debt (BPD) providing specific instructions and timing of the extraordinary actions to be taken as well as BPD’s documentation implementing the actions. We performed the work for the second and third objectives as part of our financial audits of the fiscal years 2011 and 2012 Schedules of Federal Debt Managed by BPD. To address the second objective, for each business day during the above-noted periods, we reviewed Treasury accounting documentation, including specific instructions from OFP to BPD, to verify that the extraordinary actions taken for the affected federal government accounts were consistent with relevant legislation. For suspensions of investments, we reviewed BPD documentation and verified that BPD only invested the amount instructed by OFP using the appropriate security type and date. For the one Civil Service Retirement and Disability Fund (CSRDF) security that was redeemed earlier than normal, we reviewed BPD documentation and verified that BPD processed it for the amount, security type, and date as instructed by OFP. For State and Local Government Series (SLGS) securities, we reviewed Treasury documentation of actions taken to suspend new issuances and convert SLGS demand deposit securities and compared those actions taken to authorizing regulations. Over 230 federal government accounts have the authority or the requirement to invest excess receipts in Treasury securities, and Treasury officials stated that normal investment and redemption policies and procedures were used for all but 4 of these accounts for 2011 and 2 of these accounts for January 2012. To evaluate whether Treasury followed normal investment and redemption policies and procedures for federal government accounts not affected by the extraordinary actions, we selected for review accounts with balances greater than $10 billion as of April 30, 2011 (15 accounts) and December 31, 2011 (17 accounts). As of both dates, this represented about 97 percent of the reported total of Treasury securities held by the federal government accounts not affected by the extraordinary actions. 
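In code, the selection screen described above is a simple filter plus a coverage check. A minimal sketch follows, with hypothetical account names and balances standing in for the actual Treasury account data.

```python
# Hypothetical account balances, in billions of dollars; the names
# and amounts stand in for the actual Treasury account data.
balances = {
    "Fund A": 400.0,
    "Fund B": 75.0,
    "Fund C": 12.0,
    "Fund D": 8.0,
    "Fund E": 7.0,
}

THRESHOLD = 10.0  # review accounts holding more than $10 billion

selected = {name: bal for name, bal in balances.items() if bal > THRESHOLD}
coverage = sum(selected.values()) / sum(balances.values())
print(f"{len(selected)} of {len(balances)} accounts selected, "
      f"covering {coverage:.0%} of reported holdings")
```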
We obtained investment and redemption activity files from BPD for these accounts and performed the following audit procedures: Reviewed trends in daily investment and redemption activity and compared these trends to prior year trends to determine whether there were any unusual fluctuations. Selected and reviewed investment and redemption transactions greater than $5 billion from May 16, 2011, through August 1, 2011, and January 4, 2012, through January 27, 2012, to determine whether the transactions were processed in accordance with Treasury's normal policies and procedures. The selected transactions for the 2011 and 2012 periods represented about 86 percent and 78 percent, respectively, of the total investment transactions, and 81 percent and 80 percent, respectively, of the total redemption transactions. Confirmed with personnel from the respective agencies the total amount of investments and redemptions reported by Treasury from May 16, 2011, through August 1, 2011. We also reviewed Treasury reports of fund balances as of the end of each month for May through September 2011, December 2011, and January 2012 for federal government accounts with investment authority to identify any large positive uninvested balances, which would indicate that normal policies and procedures were not being followed. To address the third objective, we reviewed BPD schedules and parallel accounts of uninvested principal and forgone interest for the CSRDF, Postal Service Retiree Health Benefits Fund, Government Securities Investment Fund of the Federal Employees' Retirement System, and Exchange Stabilization Fund. We recalculated the cumulative uninvested principal as of August 1, 2011, and January 27, 2012, and compared our calculations to BPD restoration entries. We also recalculated the forgone interest on these uninvested principal amounts and compared our calculations to BPD's interest restoration entries. We reviewed accounting documentation of Treasury actions to restore uninvested principal and interest and compared these actions to relevant legislation. To address the fourth objective, we performed a multivariate regression analysis of the daily yield spread—yields on private securities minus yields on Treasury securities of comparable maturities—during the 2011 debt limit event period. We used yield spreads during the 3-month pre-event period as a benchmark against which yield spreads during the event period were compared. We also examined changes in the yield spread during the January 2012 debt limit event period. See appendix II for more details on how we estimated increased borrowing costs, including the limitations of using a multivariate regression to measure changes in Treasury's borrowing costs attributable to delays in raising the debt limit. We obtained Treasury auction data for this analysis from Treasury. We obtained data on security yields, the Federal Reserve's holdings of Treasury securities, and the Chicago Board Options Exchange's Volatility Index from the Federal Reserve Bank of St. Louis's Federal Reserve Economic Data (FRED) source. FRED includes original source data from the Federal Reserve Board, Bank of America Merrill Lynch, the British Bankers Association, and the Chicago Board Options Exchange. We also used data on Standard & Poor's 500 total return index from IHS Global Insight in our analysis. To assess the reliability of these data, we looked for outliers and anomalies.
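The report does not prescribe a particular screening method; one simple check along these lines flags days whose totals deviate sharply from the series' typical level. Everything in the sketch below, including the data, the z-score method, and the threshold, is an illustrative assumption.

```python
import statistics

def flag_outliers(daily_totals, z_threshold=3.0):
    """Flag days whose totals sit more than z_threshold standard
    deviations from the mean of the series."""
    mean = statistics.fmean(daily_totals.values())
    stdev = statistics.pstdev(daily_totals.values())
    if stdev == 0:
        return []
    return [day for day, total in daily_totals.items()
            if abs(total - mean) / stdev > z_threshold]

# Hypothetical daily redemption totals, in millions of dollars.
redemptions = {"2011-05-16": 120.0, "2011-05-17": 115.0,
               "2011-05-18": 118.0, "2011-05-19": 900.0}
print(flag_outliers(redemptions, z_threshold=1.5))  # ['2011-05-19']
```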
These databases (FRED and IHS Global Insight) are commonly used by Treasury and researchers to examine the Treasury market and related transactions. On the basis of our assessment, we believe the data are sufficiently reliable for the purpose of this review. To understand how managing debt affected agency operations when delays in raising the debt limit occurred in 2011 and January 2012, we reviewed documents provided by Treasury, interviewed Treasury officials involved in the decision-making process and implementation of the extraordinary actions, and obtained estimates of the number of personnel and amount of time involved in managing debt during such delays. To assess the reasonableness of Treasury's estimates, we reviewed e-mails, memos, press releases, written procedures, accounting documentation, and other corroborating information prepared by OFP and BPD. However, we did not obtain sufficient supporting documentation to independently verify Treasury's staff hour estimates. We were also unable to independently verify the forgone opportunities that Treasury identified, such as less time for other cash and debt management tasks that could improve Treasury operations, in part because it is difficult to prove what would have happened in the absence of the delay in raising the debt limit. We conducted this performance audit from May 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To measure changes in Treasury's borrowing costs when delays in raising the debt limit occurred in 2011, we performed a multivariate regression analysis of the daily yield spread—yields on private securities minus yields on Treasury securities of comparable maturities—during the debt limit event period. For our purposes, the 2011 debt limit event began with the January 6, 2011, letter from the Secretary of the Treasury notifying the Senate Majority Leader that the debt limit needed to be raised and ended August 1, 2011, the business day prior to the debt limit increase. We found that yields on Treasury securities with maturities of 2 years or more increased relative to comparable-maturity private securities during the 2011 debt limit event period. (A basis point is equal to 1/100th of 1 percent; thus, 11 basis points is 0.11 percent.) The existing literature on the effect of the debt limit on Treasury's borrowing costs is limited. Previous analysis has focused mainly on the effect of debt limit events on short-term Treasury interest rates. In an analysis we replicated and updated, Liu, Shao, and Yeager (2009) found that during debt limit events in 2001-2002 and 2002-2003, the spread between 3-month Treasury bill yields and 3-month commercial paper yields narrowed, implying that Treasury bills were relatively more costly during this period; however, this relationship was not observed in either the 2004-2005 or 2005-2006 debt limit events. The authors hypothesized that during these latter two debt limit events, investors may have assumed based on past experience that Members of Congress would resolve their differences before there were any serious disruptions in the Treasury market and therefore did not charge a premium on securities issued during the debt limit event.
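To make the regression approach concrete, the sketch below estimates an event effect on a simulated daily yield spread using ordinary least squares. GAO's actual specification, data, and controls are summarized in the surrounding text but not reproduced here, so every series, coefficient, and variable name in this example is a hypothetical stand-in.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(seed=0)
n = 180  # pre-event days followed by event days

# Simulated daily data; all names and values are hypothetical.
df = pd.DataFrame({
    "event": np.repeat([0.0, 1.0], n // 2),        # 1 during the debt limit event
    "fed_holdings": np.linspace(1.2, 1.6, n),      # control: Fed holdings ($ trillions)
    "uncertainty": 18.0 + rng.normal(0.0, 2.0, n), # control: a volatility index
})
# Build a spread (private minus Treasury yield, in percentage points)
# that narrows by 11 basis points during the event.
df["spread"] = (0.50 - 0.11 * df["event"] + 0.05 * df["fed_holdings"]
                + 0.002 * df["uncertainty"] + rng.normal(0.0, 0.02, n))

X = sm.add_constant(df[["event", "fed_holdings", "uncertainty"]])
fit = sm.OLS(df["spread"], X).fit()
# A negative event coefficient means the spread narrowed, i.e., Treasury
# securities became relatively more costly during the event.
print(f"Event effect on spread: {fit.params['event']:.4f} percentage points")

# Fiscal-year cost arithmetic described in the report, applied to one
# hypothetical issuance: amount issued x yield change (decimal) x
# fraction of the fiscal year the security was outstanding.
issued, yield_change, fy_fraction = 50e9, 0.0011, 0.25
print(f"Added FY cost: ${issued * yield_change * fy_fraction:,.0f}")
```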
Our 2011 report replicated the Liu, Shao, and Yeager analysis and also found that the 2009-2010 debt limit event coincided with a 4 basis point increase in 3-month Treasury bill yields. An earlier study by Nippani, Liu, and Schulman found that Treasury paid a premium on 3-month and 6-month Treasury bills issued during the debt limit event in 1995-1996. (Pu Liu, Yingying Shao, and Timothy J. Yeager, "Did the repeated debt ceiling controversies embed default risk in U.S. Treasury securities?" Journal of Banking and Finance, vol. 33, no. 8 (2009): 1464-1471.) On the basis of our analysis, we estimated that delays in raising the debt limit in 2011 led to an increase in Treasury's borrowing costs of about $1.3 billion in fiscal year 2011. We derived this estimate by multiplying the amount of Treasury securities issued at each maturity during the event period by regression-based estimates of the relevant yield spread change attributable to the debt limit event and weighting the result by the portion of fiscal year 2011 during which the security was outstanding. Many of the Treasury securities issued during the 2011 debt limit event will remain outstanding for years to come. Accordingly, the multiyear increase in borrowing costs arising from the event is greater than the additional borrowing costs during fiscal year 2011 alone. There are limitations to using a multivariate regression to measure changes in Treasury's borrowing costs attributable to delays in raising the debt limit. Most important, many economic and financial developments besides the uncertainty in the Treasury market arising from delays in raising the debt limit likely affected yield spreads during this period. While we controlled for changes in Federal Reserve holdings of Treasury securities, financial market uncertainty, and economic activity, we cannot capture every development affecting yield spreads, such as policy changes that are not easily quantifiable. In addition to the contacts named above, Richard S. Krashevski, Dawn B. Simpson, and Melissa A. Wolf, Assistant Directors; Carolyn M. Voltz, Analyst-in-Charge; Nicole X. Dow; Brian S. Harechmak; Dervla Carmen Harris; Thomas J. McCabe; and Shaundell A. Williams made key contributions to this report. | GAO previously examined challenges associated with managing cash and debt when delays in raising the debt limit occurred, focusing on the period from 1995 through 2010. In February 2011, GAO reported that delays in raising the debt limit create debt and cash challenges for Treasury, and these challenges have been exacerbated in recent years by a large growth in debt. Delays in raising the debt limit occurred during 2011 and January 2012. GAO has prepared this report because of the nature of, and sensitivity toward, actions taken to manage federal debt during such delays. With regard to actions taken by Treasury during 2011 and January 2012 to manage federal debt when delays in raising the debt limit occurred, this report provides (1) a chronology of the significant events, (2) an analysis of whether actions taken by Treasury were consistent with legal authorities provided to manage federal debt during such delays, (3) an assessment of the extent to which Treasury restored uninvested principal and interest losses to federal government accounts in accordance with relevant legislation, and (4) an analysis of the effect that delays in raising the debt limit had on Treasury's borrowing costs and operations.
To address these objectives, GAO reviewed Treasury correspondence and other documentation, analyzed Treasury and private security yield data, and interviewed Treasury officials. In commenting on GAO's draft report, Treasury broadly agreed with GAO's conclusions and provided technical comments, which GAO incorporated as appropriate. On August 2, 2011, Congress and the President enacted the Budget Control Act of 2011, which established a process that increased the debt limit to its current level of $16.4 trillion through incremental increases effective on August 2, 2011; after close of business on September 21, 2011; and after close of business on January 27, 2012. Delays in raising the debt limit occurred prior to the August 2011 and January 2012 increases, with the Department of the Treasury (Treasury) deviating from its normal debt management operations and taking a number of actions, referred to by Treasury as extraordinary actions, to avoid exceeding the debt limit. The extraordinary actions Treasury took during 2011 and January 2012 to manage federal debt when delays in raising the debt limit occurred were consistent with relevant legislation and regulations. For 2011, these actions included suspending investments of the Civil Service Retirement and Disability Fund (CSRDF), the Postal Service Retiree Health Benefits Fund (Postal Benefits Fund), the Government Securities Investment Fund of the Federal Employees' Retirement System (G-Fund), and the Exchange Stabilization Fund (ESF), and redeeming certain investments held by the CSRDF earlier than normal. For January 2012, Treasury suspended investments to the G-Fund and ESF. In accordance with relevant legislation, Treasury restored the uninvested principal and interest losses for 2011 and January 2012 to the CSRDF, Postal Benefits Fund, and G-Fund. Treasury also invested the ESF's uninvested principal for 2011 and January 2012. However, Treasury did not restore interest losses to the ESF because it lacks legislative authority to do so. Delays in raising the debt limit can create uncertainty in the Treasury market and lead to higher Treasury borrowing costs. GAO estimated that delays in raising the debt limit in 2011 led to an increase in Treasury's borrowing costs of about $1.3 billion in fiscal year 2011. However, this does not account for the multiyear effects on increased costs for Treasury securities that will remain outstanding after fiscal year 2011. Further, according to Treasury officials, the increased focus on debt limit-related operations as such delays occurred required more time and Treasury resources and diverted Treasury's staff away from other important cash and debt management responsibilities. The debt limit does not restrict Congress's ability to enact spending and revenue legislation that affects the level of debt or otherwise constrains fiscal policy; it restricts Treasury's authority to borrow to finance the decisions already enacted by Congress and the President. Congress also usually votes on increasing the debt limit after fiscal policy decisions affecting federal borrowing have begun to take effect. This approach to raising the debt limit does not facilitate debate over specific tax or spending proposals and their effect on debt.
In February 2011, GAO reported, and continues to believe, that Congress should consider ways to better link decisions about the debt limit with decisions about spending and revenue to avoid potential disruptions to the Treasury market and to help inform the fiscal policy debate in a timely way. |
Federal contracts involve considerable dollars, resulting in employment for many workers. GSA's data show that federal contracts in fiscal year 1993 totaled about $182 billion. Approximately 22 percent of the labor force, 26 million workers, is employed by federal contractors and subcontractors, according to fiscal year 1993 estimates of the Department of Labor's Office of Federal Contract Compliance Programs (OFCCP). Federal law and an executive order place greater responsibilities on federal contractors compared with other employers in some areas of workplace activity. For example, federal contractors must comply with Executive Order 11246, which requires a contractor to develop an affirmative action program detailing the steps that the contractor will take and has already taken to ensure equal employment opportunity for all workers, regardless of race, color, religion, sex, or national origin. In addition, the Service Contract Act and the Davis-Bacon Act require the payment of area-prevailing wages and benefits on federal contracts in the service and construction industries, respectively. NLRA, as amended, provides the basic framework governing private sector labor-management relations. The act, passed in 1935, created an independent agency, NLRB, to administer and enforce the act. Among other duties, NLRB is responsible for preventing and remedying violations of the act—unfair labor practices (ULPs) committed by employers or unions. NLRB's functions are divided between its Office of the General Counsel and a five-member board. The Office of the General Counsel, organized into 52 field offices in 33 regions, investigates and prosecutes ULP charges. The Board, appointed by the President with Senate approval, reviews all cases decided by administrative law judges (ALJs) in the regions. Under Section 8 of the act, it is illegal for employers to interfere with workers' right to organize or bargain collectively or for employers to discriminate in hiring, tenure, or condition of employment in order to discourage membership in any labor organization; such behavior is defined as a ULP. After concluding that a violation has been committed, the Board typically requires firms to cease and desist the specific conduct for which a ULP is found. The Board may order a variety of remedies, including requiring the firm to reinstate unlawfully fired workers or restore wages and benefits to the bargaining unit. In some cases, the Board will also issue a broad cease and desist order prohibiting the firm from engaging in a range of unlawful conduct. If an employer to whom the federal government owes money (such as a federal contractor) has failed to comply with an order by the Board to restore wages or benefits, the government has the option of withholding from any amount owed to that employer (including payments under a federal contract) any equal or lesser amount that the contractor owes under the Board order. A withholding in this manner is referred to as a collection by administrative offset. In addition to the remedies mentioned above, the Congress has considered debarring from federal contracts firms that have violated NLRA in the past. In 1977, legislation that would have debarred firms from federal contracts for a 3-year period for willfully violating NLRA was introduced but was never enacted. NLRB has several databases that track cases at different stages of processing. One of NLRB's databases, the Executive Secretary's database, tracks all cases that go before the Board.
Many of these cases were first heard by an ALJ after an investigation by the Office of the General Counsel's regional staff determined the case had merit. Cases that go before the Board represent only a small percentage of all ULP cases because most cases are withdrawn, dismissed, or informally settled without being reviewed by the Board. None of NLRB's databases, including the Executive Secretary's database, contains information as to whether or not violators have federal contracts. GSA maintains the Federal Procurement Data System (FPDS), which tracks firms receiving over $25,000 in federal funding in exchange for goods and services provided. For fiscal year 1993, FPDS tracked information on almost 200,000 contracts totaling about $182 billion, which were awarded to over 57,000 parent firms. FPDS contains a variety of information, including the contractor's name and location, agency the contract is with, type of industry the contractor is engaged in, and contract dollar amounts awarded. However, FPDS does not contain information on contractors' labor relations records. Federal contracts are awarded to employers who violate NLRA. A total of 80 firms, receiving over $23 billion from over 4,400 contracts, had both labor violations and federal contracts. Altogether, these 80 violators received about 13 percent of the $182 billion in total fiscal year 1993 contract dollars (see fig. 1). However, these contracts were concentrated among only a few violators; six violators received about $21 billion of the more than $23 billion in contracts. These totals are likely an underestimate of the number of violators and contracts they received because of the difficulties involved in the manual matching procedure we used in this analysis. This manual procedure was necessitated by the lack of a corporate identification number for firms in the NLRB case data. Because firms may split up, merge, subcontract, operate subsidiaries, or change names, the same firm might have appeared under different names in NLRB case data and the FPDS and thereby escaped our detection. Also, we were unable to verify those firms that went out of business or relocated or for which location data in NLRB case data or FPDS were incomplete or inaccurate. Each of the six violators listed below received more than $500 million in fiscal year 1993 contracts; together, they received almost 90 percent of the more than $23 billion in contracts awarded to all violators. (See app. II, fig. II.4.) They are also among the largest federal contractors, ranking in the top 20 firms receiving federal contract dollars: McDonnell Douglas ($7.7 billion), Westinghouse Electric ($4.9 billion), Raytheon ($3.5 billion), United Technologies ($3 billion), American Telephone and Telegraph Company (AT&T) ($1.4 billion), and Fluor Corporation ($508 million). In contrast, contract dollars were not as concentrated among all federal contractors. Firms receiving more than $500 million in contracts received about one-half (47 percent) of all federal contract dollars. Of the 88 cases decided by the Board during fiscal years 1993 and 1994 involving federal contractors, the Board found that the firm had interfered with workers' right to organize, a Section 8(a)(1) violation, in 44 cases. In 45 of the 88 cases, the Board found that a firm had refused to bargain collectively with employee representatives, a Section 8(a)(5) violation. Thirty-three of the 88 cases involved discrimination by a firm in hiring or condition of employment, which is a violation of Section 8(a)(3).
Far fewer cases involved other types of violations. (See app. II, fig. II.1.) In 35 of the 88 cases, the Board required firms to reinstate or restore workers as the remedy for violations. In 32 of these 35 cases, firms were ordered to reinstate unlawfully fired workers. In 6 of them, firms were ordered to restore workers who had been subjected to another kind of unfavorable change in job status. An unfavorable change in job status could mean the worker, for example, was suspended, demoted, transferred, or not hired in the first place because of activities for or association with a union. Some cases involved both an order to reinstate fired workers and an order to restore workers who were subjected to another kind of unfavorable change in job status. (See app. II, fig. II.2.) In 44 of the 88 cases, the Board ordered the firm to pay back wages to affected workers. The Board ordered the firm to restore benefits in 28 cases. In most cases, back wages or benefits were owed to individual workers who had been illegally fired or subjected to another kind of unfavorable change in job status. However, in 12 cases, wages or benefits were ordered restored to all workers in the bargaining unit because the firm illegally failed to pay wages or benefits as required under its contract with the union. Some cases involved both a remedy for individual workers owed back wages or benefits as well as the same type of remedy for the entire bargaining unit. (See app. II, fig. II.2.) The Board also ordered other types of remedies in many of these 88 cases. For example, in 33 cases, the Board ordered the firm to bargain with the union. In 24 cases, firms were ordered to stop threatening employees with the loss of the job or the shutdown of the firm. Firms were ordered in 33 cases to stop other kinds of threats, such as interrogating employees and circulating lists of employees associated with the union. To facilitate the bargaining of a contract, the Board ordered firms to provide information to the union in 16 cases. (See app. II, fig. II.3.) Nearly 1,000 individual workers and thousands of additional workers represented in 12 bargaining units were directly affected by violations of the act in these 88 cases. During fiscal years 1993 and 1994, the Board ordered firms to reinstate or restore 761 individual workers to their appropriate job position. These workers had either been fired or experienced another kind of unfavorable change in job status; for example, they were transferred or not hired. These workers are included among those who were paid back wages or had benefits restored. Altogether, 801 individual workers were paid back wages and 462 workers had benefits restored because of Board-ordered remedies. In addition, the Board ordered firms to restore wages and benefits to contract levels for thousands of workers represented in 12 bargaining units. Most of the contracts awarded to violators in fiscal year 1993 came from the Department of Defense and went to firms primarily engaged in manufacturing. The violations occurred in facilities owned or associated with parent firms that typically had more than 10,000 employees or over $1 billion in annual sales. About $17 billion in contracts that went to violators came from the Department of Defense, accounting for 73 percent of such contracts. In addition to Defense, significant contract dollars were awarded to violators by the Department of Energy ($3.7 billion), National Aeronautics and Space Administration ($1.2 billion), and GSA ($702 million). 
Similarly, these four agencies were the source of most contract dollars (88 percent) to all federal contractors. However, a higher percentage of contract dollars awarded to violators came from the Departments of Defense and Energy as compared with that awarded to all federal contractors from these two agencies. (See app. II, fig. II.5.) Most contract dollars—$15.6 billion or 67 percent—went to violators who were primarily engaged in manufacturing. An examination of more detailed violators’ industry codes shows that the highest percentage of contract dollars in manufacturing went toward the production of aircraft parts, guided missiles, and space vehicles. Although manufacturing is the industry in which most violators are engaged, a significant percentage of contract dollars—25 percent, about $6 billion—went to companies primarily engaged in providing services. As is the case for violators, most contract dollars to all federal contractors went to firms in the manufacturing and services industries. However, a lower percentage of contract dollars to all federal contractors went to manufacturing (47 percent) as compared with violators (67 percent). (See app. II, fig. II.6.) Many violations occurred in facilities owned by firms that had over 10,000 employees or $1 billion in annual sales as of fiscal year 1994. Of the 77 violators for which data on workforce size were available, 35 had more than 10,000 employees. By contrast, only 22 violators had 500 or fewer employees and still fewer (5) were so small as to have 25 or fewer employees. For those 64 violators for which annual sales information was available, 32 had more than $1 billion in sales annually. Ten firms had annual sales greater than $10 billion. (See app. II, figs. II.7 and II.8.) Violations of NLRA vary in their severity. Given this variation, we identified 15 firms that might be considered more serious violators using criteria we developed based on our review of Board decisions. These firms meet one or more of the criteria listed below: Received a comprehensive Board-ordered remedy. We considered a remedy to be comprehensive if the firm received a broad cease and desist order or a Gissel bargaining order, or was ordered to cease and desist 10 or more types of unlawful actions against workers. Took actions affecting the job status of more than 20 workers. Had a history of labor law violations. We identified a total of 12 of the 15 firms as serious violators because the Board-ordered remedy was comprehensive relative to remedies in other cases. This included four firms that received a broad cease and desist order. Cease and desist orders are typically narrow in that they prohibit continuation of the specific conduct found to be unlawful. However, in some cases, the Board issues a broad cease and desist order prohibiting the firm from engaging in a range of unlawful conduct. This may occur when a firm has demonstrated a proclivity to violate the act or when there has been widespread or egregious misconduct. The Board may also issue a broad cease and desist order to cover all of an employer’s facilities or those facilities where a union has jurisdiction if there has been a pattern or practice of unlawful conduct. Also among the 12 firms whose Board-ordered remedy was more comprehensive are two firms that received a Gissel bargaining order. 
The Board imposes a Gissel bargaining order as an extraordinary remedy when the firm has committed ULPs that have made the holding of a fair election unlikely or that have undermined the union’s majority and caused an election to be set aside. Also among the firms whose Board-ordered remedy was more comprehensive, we included 10 firms ordered to cease and desist 10 or more types of unlawful actions against workers. Although these cease and desist orders were narrow, the relatively high number of unlawful actions listed in the Board decision suggest that the firm may be a more serious violator. Examples of violators whose Board-ordered remedies were comprehensive relative to remedies in other cases include Monfort of Colorado, Inc., a meat processing firm, which received a broad cease and desist order because of ULPs committed at its facility in Greeley, Colorado. Monfort of Colorado, Inc., was found by the Board to have discriminated against 258 former union employees by applying more rigorous hiring criteria and taking numerous actions against employees to discourage union activity. Waste Management, Inc. (Salt Lake Division), a firm engaged in waste pickup and disposal, received a bargaining order in addition to a broad cease and desist order. The firm had taken numerous actions against employees in a West Jordan, Utah, facility to discourage union activity and created employer-dominated committees during a union organizing drive that it then dissolved after the union lost the election. The Board ordered a Tyson Foods, Inc., facility in Dardanelle, Arkansas, that engaged in poultry processing, to cease and desist 10 or more types of unlawful actions against workers, including “directing, controlling, circulating, and assisting in the circulation of a petition” to decertify a union. Firms were also considered to be serious violators if their violations affected the job status of more than 20 individual workers, which was true for four firms. These workers had either been unlawfully fired or subjected to some other unfavorable change in their job status; for example, not hired in the first place because of activities for or association with a union. For example, Caterair International, a firm that caters food for commercial airlines, was ordered to reinstate 289 workers who were permanently replaced when they lawfully went on strike at three facilities in Los Angeles to protest ULPs committed by the firm. Fluor Daniel, Inc., a general contractor in the construction business, was ordered to hire 53 applicants who the firm discriminatorily refused to hire at several facilities in Kentucky because of their union affiliation. In addition, the Board ordered Fluor Daniel, Inc., to reinstate another employee who was fired because he refused to cross a picket line. Another criterion that could identify a serious violator is whether or not the firm has a history of labor law violations. Although we were unable to systematically determine the labor relations record for each of the 80 violators, we were able to determine which of the 15 firms that we had already identified as serious violators also had a history of violations. Five of the 15 serious violators had a history of labor law violations, and 3 firms (Beverly Enterprises; Monfort of Colorado, Inc.; and Overnite Transportation Co.) 
had several prior Board decisions against them. Monfort of Colorado, Inc., for example, received another broad cease and desist order in 1987 for firing two workers because of their union activities at a facility in Grand Island, Nebraska. At this facility, Monfort of Colorado, Inc., was also found to have refused to grant contract-specified wage increases to the bargaining unit, assisted an employer-dominated committee, and promised a bonus to discourage workers' support for a union. Beverly Enterprises, which operates nursing homes, violated the NLRA in additional facilities before its fiscal year 1993 and 1994 violations. For example, in 1986, the Board ordered Beverly Enterprises to bargain with the union and restore wages and benefits that had been unilaterally changed at a nursing home in Waterloo, Iowa. In 1990, the Board found Overnite Transportation Co., a firm engaged in the interstate transportation of freight, to have unlawfully fired one employee at a facility in Lexington, Kentucky, because he gave testimony at a hearing before an ALJ. In 1982, the Board ordered Overnite Transportation Co. to reinstate a worker who was not recalled because of his union activities at a St. Louis facility. (See app. IV.) Contract payments may be withheld from federal contractors who have failed to comply with a Board order to restore wages or benefits. This means of collection is referred to as an administrative offset. NLRB officials told us that using administrative offset could help NLRB settle with violators more quickly and avoid a lengthy contempt proceeding. Administrative offset could also result in cost savings to NLRB and the government through reduced litigation as well as more timely restitution to workers. However, NLRB has not been able to use administrative offset as widely as it would like because the agency lacks information to identify which violators receive federal contracts. Coordination between NLRB and GSA would be necessary if NLRB is to use administrative offset to enhance NLRB enforcement. Through administrative offset, NLRB could notify a contracting agency to withhold contract dollars from a violator of NLRA if the violator refuses to comply with NLRB's order to pay back wages or restore benefits. NLRB officials told us that administrative offset could be particularly helpful to NLRB in its efforts to recover funds owed by smaller companies and companies that are being liquidated or shutting down their operations. NLRB has not been able to use administrative offset as widely as it would like because the agency lacks the information to identify which violators had federal contracts. Currently, NLRB does not use a corporate identification number in any of its databases that could be recognized by GSA to identify violators with federal contracts. NLRB officials, however, told us that they see the importance of some form of identification number and are exploring this matter in their current efforts to develop a new database. The new database is intended to combine data across several databases that NLRB now maintains. It will track a case from the filing of a charge to the issuance of a decision or, when relevant, an appeal. Federal contracts have been awarded to employers who have violated NLRA. We found that 80 firms violated the act and received over $23 billion, about 13 percent of the $182 billion in federal contracts awarded in fiscal year 1993.
The Board cases that we examined indicate a range of violations committed and remedies ordered that affect nearly 1,000 individual workers and thousands of additional workers represented in 12 bargaining units. The cases involved 15 firms that might be considered more serious violators based on several criteria, including that the firm received what we considered to be a comprehensive Board-ordered remedy. NLRB's enforcement of the act could be enhanced by collecting judgments against violators from federal contract awards. Coordination with GSA to identify violators with federal contracts, however, would be necessary if such actions are to be taken. While NLRB officials recognize the importance of being able to identify labor violators who receive federal contracts, they have yet to approach GSA because they did not know the extent to which federal contract dollars went to violators. We recommend that the NLRB Chairman and General Counsel and the Administrator of GSA develop an information-sharing arrangement to facilitate the identification of violators who receive federal contracts. We discussed the results of our work with key officials from NLRB and have incorporated their comments where appropriate. These officials generally agreed with our methodology for identifying NLRA violators with federal contracts. They also agreed with our approach to characterizing Board cases, although they did not comment on our criteria to identify serious violators because we developed these criteria from our case review. NLRB officials also agreed with our recommendation for improving compliance of federal contractors with NLRA and told us that they have already begun to act on it. NLRB officials told us they will soon issue written guidance concerning the expanded use of administrative offset, providing NLRB regional offices specific directions for obtaining assistance from GSA in identifying federal contractors. We also discussed the results of our work with GSA officials and have incorporated their comments where appropriate. GSA officials said that they see no major difficulty in coordinating with NLRB to identify which violators receive federal contracts so that contract payments may be withheld through administrative offset. These officials, however, raised concerns that the discussion of debarment as a remedy was inadequate, failing to consider its appropriateness or implementation. We told GSA officials that this report does not explore issues related to how debarment of federal contractors might be implemented. If the Congress determines debarment to be an appropriate response, implementation concerns such as those raised by GSA could be addressed at that time. Additionally, GSA officials suggested that the feasibility of checking firms' compliance with labor laws as part of the pre-award contract clearance process be explored. We are sending copies of this report to the NLRB Chairman and General Counsel, the Administrator of GSA, the Secretary of Labor, the Director of the Office of Management and Budget, relevant congressional committees, and interested parties. We also will make copies available to others on request. If you or your staff have any questions concerning this report, please call Charlie Jeszeck, Assistant Director, at (202) 512-7036 or Jackie Baker Werth, Project Manager, at (202) 512-7070. Other major contributors include Cheryl Gordon, Wayne Turowski, Ronni Schwartz, and Danah Kozma.
We were asked to identify the extent to which violators of NLRA include federal contractors. More specifically, we were asked to identify characteristics associated with (1) these federal contractors and (2) their NLRA violations. In addition, we were asked to identify ways to improve compliance of federal contractors with NLRA. We matched NLRB case data (fiscal years 1993 and 1994) with the database of federal contractors maintained by GSA, referred to as the FPDS (fiscal year 1993); verified by telephone that matched firms had federal contracts; reviewed Board decisions and U.S. Court of Appeals decisions if a Board decision was appealed to identify characteristics of the violations; analyzed the FPDS for characteristics of the contractors; and met with NLRB officials to explore ways to improve compliance of federal contractors with NLRA. To determine which violators of the act were federal contractors, we matched case data from NLRB with FPDS, a database of federal contractors maintained by GSA. No single database at NLRB tracks all cases from the initial charge until their final resolution. Instead, NLRB has several databases that track cases at different stages of processing. We used the Executive Secretary's database because it tracks all cases that go before the five-member Board. Many of these cases were first heard by an ALJ after an investigation by the Office of the General Counsel's regional staff had determined the case had merit. We also used this database because it is small enough to manually match against the much larger FPDS. With fiscal year 1994 data, figure I.1 shows the relatively low percentage (about 4 percent) of over 30,000 closed ULP cases that were reviewed by the Board; virtually all cases are withdrawn, dismissed, or informally settled without being reviewed by the Board. We looked at the Executive Secretary's database for decisions on ULP cases against firms that were issued by the Board over a 2-year period (fiscal years 1993-94). This came to a total of 1,493 cases. However, the violation itself may have been committed more than a year before the Board decision because of the length of time it takes for cases to be processed by NLRB. To obtain data on federal contractors, we used FPDS, which tracks business entities receiving over $25,000 in federal funding in exchange for goods and services provided. After determining that it would have been too cumbersome to use more than one year of FPDS data, we selected fiscal year 1993 because this was the most current data available at the time we initiated this review. The FPDS for 1993 alone contained almost 200,000 contracts and 500,000 contract actions, tracking about $182 billion in federal contracts received by over 57,000 parent firms. Because any violation may have been committed more than a year before the Board decision, firms we identified as violators per Board decisions issued in fiscal years 1993 and 1994 may not have been receiving federal contracts at the same time they committed the violation. Because the NLRB databases did not use corporate identification numbers, an automated matching procedure was not possible, and we had to match these data manually. We manually compared each firm name in the smaller Executive Secretary's database with the larger FPDS, identifying those firm names that were identical or nearly identical.
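The comparison described above is essentially an approximate string-matching problem. The sketch below shows how such a screen could be automated with Python's standard difflib; the firm names are hypothetical, and GAO performed the actual matching by hand.

```python
from difflib import SequenceMatcher

def near_matches(nlrb_firms, fpds_firms, threshold=0.9):
    """Pair names from the NLRB case data with FPDS names that are
    identical or nearly identical, using a simple similarity ratio."""
    pairs = []
    for a in nlrb_firms:
        for b in fpds_firms:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

# Hypothetical names: near-identical spellings match, but renamed or
# reorganized firms score poorly, which is why hand review and
# telephone verification were still needed.
print(near_matches(["Acme Widget Co.", "Acme Widget Holdings"],
                   ["ACME WIDGET CO", "Widgetco Inc."]))
```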
Because firms may split up, merge, subcontract, operate subsidiaries, or change names, the same firm might have appeared under different names in the Executive Secretary's database and the FPDS and thereby escaped our detection. Through manually matching the databases, we identified 162 firm names that were identical or nearly identical, involving 176 cases because some firms had more than one case. We eliminated all but one-half of these cases, leaving 88 cases involving 80 firms with both violations and federal contracts. This represents 6 percent of the 1,493 cases decided by the Board during fiscal years 1993 and 1994. How cases were eliminated is described below. (Fig. I.2 breaks the 176 matched cases into four groups: violators receiving federal contracts (88 cases), firms found not to be violators on review of the Board decision, firms found not to be federal contractors on telephone verification, and firms that went out of business or relocated.) Twelve percent of cases were eliminated because the firm went out of business or relocated. This category includes firms for which location information in the Executive Secretary's database or FPDS was incomplete or inaccurate. Eleven percent of cases were eliminated because the telephone verification revealed the firm listed in the Executive Secretary's database was not the same firm as listed in FPDS. In addition to the likely underestimation caused by the manual matching procedure, other factors limit the number of NLRA violations detected regardless of whether or not they involve federal contractors. The cases we examined represent violations only over a 2-year period. Further, NLRA is different from most federal statutes in that, rather than imposing regulatory requirements on firms or defining benefits for workers, it establishes rights and obligations of firms, workers, and unions with respect to collective bargaining. Neither the Board nor the General Counsel of NLRB has the authority to investigate alleged ULPs on its own initiative. The filing of a ULP charge by employees or their representatives is necessary before an investigation can be initiated, yet some workers may be unaware of their rights or choose not to exercise them. As already mentioned, many firms are involved in cases that are withdrawn or settled, and our analyses do not include such cases in assessing violations committed, remedies ordered, and number of workers affected. To ensure that the firm listed in the Executive Secretary's database was the same firm listed in FPDS, we telephoned the firm at the location where the labor violation occurred. We verified that the firm name and location identified in both databases referred to the same firm. We eliminated from our matched firms those for which the telephone call revealed that the firm listed in the Executive Secretary's database was not the same firm as listed in FPDS (11 percent of the 176 cases). We also eliminated those firms we were unable to verify because they went out of business or relocated or location information in the Executive Secretary's database or FPDS was either incomplete or inaccurate. The fact that 12 percent of cases were eliminated for this reason supports our view that we may be underestimating the number of firms that are violators. (See fig. I.2.) All 80 firms included among our final group of violators were verified by telephone, except for one firm that refused to verify that it had federal contracts. However, we included it among our matched firms because both company name and location in the Executive Secretary's database matched exactly with FPDS.
We next reviewed the Board decisions on these matched firms to determine whether the firm was a violator as well as to analyze characteristics associated with the violations. We also reviewed U.S. Court of Appeals decisions for our matched firms, when applicable, so that modifications to the Board's decisions were reflected in our analysis. We eliminated from this analysis those matched firms that (1) reached a formal settlement with NLRB, (2) prevailed either in the Board decision or subsequently in the U.S. Court of Appeals, or (3) had a Board decision that was only a ruling on a motion by the firm or NLRB. For example, a firm might have filed a motion for dismissal of the case. Altogether, 27 percent of the 176 cases were eliminated after reviewing the Board decisions and appeals cases. (See fig. I.2.) Our review of Board cases revealed the range of violations and remedies. We categorized each case by type of violation and remedy, as well as number of workers affected if this information was contained in the Board decision. This review also helped us develop criteria to identify firms that might be considered more serious violators. In order to identify firms with a history of labor violations, we asked NLRB staff to search for Board decisions issued before fiscal years 1993 and 1994 involving the 15 firms we had identified as serious violators. Limitations of NLRB's databases made a comprehensive search for recidivists among all 80 firms too time-consuming to complete during this assignment. These data limitations also precluded NLRB from providing a complete history even for those violators we identified as serious. We reviewed the Board decisions provided by NLRB staff and checked to determine if these cases had been modified by the U.S. Court of Appeals. We analyzed FPDS for characteristics of federal contractors that we found to have violations. For those matched firms that we had verified, we used variations of the firm name as they appeared in FPDS and corresponding corporate identification codes to retrieve all contracts for fiscal year 1993. We found that it was necessary to report contract data for violators at the parent firm level because (1) the location where the violation occurred did not necessarily appear in FPDS and (2) there was no way to determine which contractor establishment code (CEC) was associated with that location. However, to ensure that all contract information retrieved using GSA's corporate identification codes went to that parent firm, we checked that the names of divisions, plants, and subsidiaries that were retrieved were, in fact, affiliated with the parent firm. Using FPDS, we identified characteristics of the federal contracts. We examined, for example, total contract dollars that went to each violator, federal agencies that they contracted with, the industry in which the firms were engaged, and the products and services these firms provided. Total contract dollars would not include dollars that may have been awarded by prime contractors to subcontractors with violations because we could not identify these subcontractors. FPDS classifies the type of industry the firm is engaged in using standard industrial classification (SIC) codes, a federal classification system. To capture what products and services a federal contractor provides, FPDS includes product and services codes. In addition to major categories, more detailed codes are available under both the SIC and product and services coding systems.
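To illustrate the parent-firm rollup described above, the sketch below, again with hypothetical field names (parent_id, parent_name, agency, sic_code, contract_dollars) standing in for the actual FPDS extract layout, sums fiscal year 1993 contract dollars at the parent-firm level and tallies them by agency and by SIC code.

    # Illustrative only: roll contract actions up to the parent firm and
    # summarize dollars by agency and SIC code, as described in the text.
    # File and field names are hypothetical stand-ins for the FPDS extract.
    import csv
    from collections import defaultdict

    violator_ids = {"123456789"}  # hypothetical GSA corporate identification codes

    dollars_by_parent = defaultdict(float)
    dollars_by_agency = defaultdict(float)
    dollars_by_sic = defaultdict(float)

    with open("fpds_fy93.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["parent_id"] in violator_ids:
                amount = float(row["contract_dollars"])
                dollars_by_parent[row["parent_name"]] += amount
                dollars_by_agency[row["agency"]] += amount
                dollars_by_sic[row["sic_code"]] += amount

    for parent, total in dollars_by_parent.items():
        print(parent, round(total))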
These detailed codes were useful to our efforts in developing key contract information by individual firm, as reported in appendix III. We also compared contract data for violators with data for all federal contractors on many of these characteristics. Although workforce size and annual sales data were not included on the version of FPDS we used in this review, GSA provided these data at our request. These data are current as of fiscal year 1994. Because GSA did not have data for all 80 firms, we filled in missing data on workforce size with information gathered during our telephone calls to verify matched firms. The firms provided the most current data available, typically fiscal year 1995 data. Of the 80 firms, we had data on workforce size for 77 firms and annual sales for 64 firms. To explore ways to improve compliance of federal contractors with NLRA, we met with NLRB officials in its Division of Enforcement Litigation. We also met with computer and technical staff in NLRB headquarters and in its Philadelphia and San Francisco regional offices. We conducted our work between August 1994 and September 1995 in accordance with generally accepted government auditing standards. The following figures illustrate the types of violations committed, remedies ordered, and characteristics of federal contractors that violated NLRA as reflected in Board decisions issued during fiscal years 1993 and 1994. Eighty-eight NLRB cases involved 80 firms (some with more than one case) with both NLRA violations and federal contracts. In reporting on characteristics of federal contractors, including contract dollars received, we are referring to the parent firm. The violations may have occurred at only one site or facility, possibly within a division or subsidiary of the parent firm. Only fiscal year 1993 contract data from FPDS are reported. Figures II.1 through II.3 present data on types of violations committed and remedies ordered. Section 8(a) provides that it is a violation, or a ULP, for an employer to (1) interfere with, restrain, or coerce employees in the exercise of their rights to self-organize; (3) discriminate in hiring or any term or condition of employment to encourage or discourage membership in any labor organization; or (5) refuse to bargain collectively with the majority representative of employees. Figures II.4 through II.8 present data on the federal contractors that violated NLRA. Some of these figures compare the characteristics of the violators with all firms that contract with the federal government. This appendix provides information on the 80 firms that have both NLRA violations and federal contracts. In reporting fiscal year 1993 contract dollars as well as primary contract agency and products and services, we are referring to the parent firm, which is identified if it is different from the name of the violator. A violation may have occurred at only one site or facility, possibly within a division or subsidiary of the parent firm. These violations apply to cases decided by the Board during fiscal years 1993 and 1994. We did not verify information on primary contract agency and products and services that is entered into FPDS by the contracting agency. Contracts with the Air Force to provide airframe structural components ($354,000). Highlights from Board decision include: Laid off a worker during an organizing campaign based upon its suspicion of the worker's union activities.
Discharged a second worker because she refused to fabricate evidence in order to establish a "sham" defense for the unlawful layoff of the first worker. Incidents occurred in Wichita, Kansas, where the firm is based. Contracts with the Air Force to provide research and development for space science ($303,185,000). Highlights from Board decision include: Refused to furnish information to union in connection with grievance filed by an employee. Incidents occurred in El Segundo, California, where the firm is based. Contracts with Defense Information Systems, GSA, and the Navy to provide telephone and/or communication services ($1,430,462,000). Highlights from Board decision include: Refused to provide information to union about subcontracting some of its work. Incident occurred at a facility in Oak Park, Michigan, at one of its divisions. Contracts with the Tennessee Valley Authority to provide maintenance services ($120,000). Highlights from Board decision include: Abolished the jobs of 23 workers who would not abandon their economic strike, then put these workers on probation. Incidents occurred at a facility in Diablo Canyon, California. Contracts with the Department of Veterans Affairs to provide medical and surgical instruments, equipment, and supplies ($14,846,000). Highlights from Board decision include: Discharged two workers because of union activity. Incident occurred at a facility in Marion, North Carolina. Contracts with the U.S. Army Corps of Engineers to provide ship repair ($5,621,000). Highlights from Board decision include: Discharged four workers and took numerous unlawful actions to discourage union activity during a lawful ULP strike. Prohibited workers from distributing prounion materials in nonworking areas, solicited workers to sign a petition to oust the union, and threatened retaliation against union workers. Refused to bargain with union. Incidents occurred at a facility in Shreveport, Louisiana. Contracts with the Army to provide guided missile equipment ($3,509,586,000). Highlights from Board decision include: Refused to bargain with union following union certification. Incident occurred at a facility in Pensacola, Florida. Contracts with the Department of Veterans Affairs to provide nursing home care ($10,268,000). Highlights from Board decision include: Discharged 16 workers during union organizing campaigns at 23 facilities throughout the nation. Took numerous unlawful actions to thwart union activity, including threatening discipline against workers for union activity. History of violations. Contracts with the Navy for maintenance and repair of ships, small craft, or docks and with the Maritime Administration for ship repair ($2,458,000). Highlights from Board decision include: Refused to recognize union. Did not honor collective bargaining agreement. Incidents occurred in Portland, Oregon, where the firm is based. Contracts with the Department of Veterans Affairs to provide nursing home care ($3,008,000). Highlights from Board decision include: Prohibited organizational activities for any group on the company premises without permission. Incident occurred in Millersburg, Ohio, where the firm is based. Contracts with the Army to provide food services ($133,000). Highlights from Board decision include: Discharged 289 workers by permanently replacing them during a lawful ULP strike. Circulated union decertification petitions, promising workers economic benefits if the union were decertified and threatening them with economic harm if it were not.
Incidents occurred at three facilities in Los Angeles, California. Contracts with the Department of Veterans Affairs to provide nursing home care ($4,000). Highlights from Board decision include: Refused to bargain in good faith with union by refusing to implement mandated wage increases. Incident occurred in Centerville, Massachusetts. Contracts with the Defense Logistics Agency to provide petroleum-based liquid propellants ($321,028,000). Highlights from Board decision include: Refused to bargain with union. Incident occurred at a facility in Richmond, California. Contracts with the Air Force for modification of aircraft components, with GSA to provide trucks, and with the Army to provide electronic equipment ($369,083,000). Highlights from Board decision include: Refused to bargain with the union. Incident occurred at a facility in Kokomo, Indiana. Contracts with the Navy to provide chemicals ($67,000). Highlights from Board decision include: Failed to pay wages due to 10 workers. Failed to bargain collectively in good faith with union. Did not remit union dues or welfare and pension payments to the union's funds. Incidents occurred in Jersey City, New Jersey, where the firm is based. Contracts with the Navy to provide powered valves ($1,503,000). Highlights from Board decision include: Discriminated against a worker by giving him a negative evaluation, recommending discipline, placing him on probation, denying him a merit pay increase, and suspending him for a week without pay based on the worker's union activity. Incident occurred at a facility in Perris, California. Contracts with the Navy to provide management support services ($23,356,000). Highlights from Board decision include: Refused to bargain and provide requested information to union necessary to bargain. The firm is based in Yorba Linda, California. Contracts with the Defense Communication Agency to provide maintenance services ($3,874,000). Highlights from Board decision include: Failed and refused to recognize union. Told workers that it would be futile for them to support a union and polled its workers concerning their union membership and support. The firm is based in Virginia Beach, Virginia. Contracts with the Tennessee Valley Authority to provide solid fuels ($25,059,000). Highlights from Board decisions include: Refused to furnish information to the union necessary to bargain with the firm at a facility at Tackett Creek, Tennessee. In a case involving a mine in Pennsylvania, the Board stated that the employer had a history of violations and "in at least three cases the Board has recommended broad cease and desist language." Contracts with the Internal Revenue Service and Army to provide electricity ($8,294,000). Highlights from Board decision include: Bypassed the union bargaining committee and dealt directly with individual workers. Withdrew recognition and refused to bargain with union at one site. Incidents occurred in several Michigan facilities. Contracts with the National Institutes of Health for acquired immunodeficiency syndrome (AIDS) research and with the Navy for basic defense research ($14,954,000). Highlights from Board decision include: Did not recognize and engage in collective bargaining with the union. The university is based in Durham, North Carolina. Contracts with the Defense Logistics Agency to provide petroleum-based liquid propellants ($36,207,000). Highlights from Board decisions include: Denied a worker his right to be represented by the union when he was the subject of an investigation at a facility in Tonawanda, New York.
At a facility in Deepwater, New Jersey, the firm dominated the formation and administration of committees and unilaterally implemented the proposals of these committees without affording the union an opportunity to bargain. Discriminatorily prohibited workers from using the e-mail system for distributing union literature and notices. Contracts with the Department of Agriculture to provide meat, poultry, or fish ($3,499,000). Highlights from Board decisions include: Discharged four workers, including a supervisor who refused to commit ULPs and a worker who testified at an NLRB hearing. Took numerous unlawful actions to discourage union activity, including threatening workers with closing the plant if the union was selected. Board implemented a broad cease and desist order as "a result of the widespread and egregious unlawful acts committed by the employer." Incidents occurred at a facility in Hattiesburg, Mississippi. Contracts with the Navy to provide special shipping and storage containers ($1,208,000). Highlights from Board decision include: Unlawfully delayed issuance of paychecks, payment of health insurance premiums, and payment of fees to 401(k) administrator. Incident occurred in North Tonawanda, New York, where the firm is based. Contracts with the Defense Logistics Agency to provide rechargeable batteries ($1,139,000). Highlights from Board decision include: Threatened a worker with unspecified reprisals because of her union activity. Discriminatorily refused to give the above-mentioned worker's daughter part-time work. The firm is based in Reading, Pennsylvania. Contracts with the Bureau of Engraving and Printing to provide fabricated materials ($498,000). Highlights from Board decision include: Terminated two workers because of union activities. The firm is based in Baltimore, Maryland. Contracts with the Air Force to provide household furniture ($306,000). Highlights from Board decisions include: During an organizing campaign, the employer took numerous unlawful actions to discourage union activity, including threatening its workers with more onerous work rates, loss of benefits, discharge, and plant closure if they selected the union. Withheld $50 of workers' pay as simulated union dues, requiring workers to sign a receipt for this money acknowledging their agreement with the employer's antiunion position. Incidents occurred at a facility in New Paris, Indiana. Contracts with the Department of Energy to repair government property ($508,474,000). Highlights from Board decision include: Discriminatorily refused to hire 53 applicants because of their union affiliation. Discharged one worker for refusing to cross a picket line. The Board decision stated that "there is strong evidence of union animus in this case." Incidents occurred at three facilities in Kentucky. Contracts with the Defense Logistics Agency to provide packing and gasket materials ($110,000). Highlights from Board decision include: Threatened workers with plant closure. Subjected a worker's work to increased scrutiny because the worker filed a ULP charge. The firm is based in Bridgeport, New Jersey. Contracts with the Navy to provide electronic equipment ($18,693,000). Highlights from Board decision include: Discharged a worker during an organizing campaign because of union activities. Threatened workers with loss of jobs if the union successfully organized the workers. Incidents occurred at a facility in Sevierville, Tennessee. Contracts with the Department of Veterans Affairs to provide nursing home care ($4,674,000).
Highlights from Board decision include: Discharged five workers because of union activity. Interrogated a worker about her union activity and that of other workers. Incidents occurred at the Valley View Nursing Home in Lenox, Massachusetts. Contracts with the Department of Veterans Affairs to provide nursing home care ($220,000). Highlights from Board decision include: Ceased to make contractually required payments to the union's welfare and legal funds and to remit dues and initiation fees to the union. Incident occurred in Newark, New Jersey, where the firm is based. Contracts with GSA to provide stationery and record forms ($32,183,000). Highlights from Board decision include: Interrogated and harassed workers regarding union activities during an organizing campaign. Incident occurred at a facility in Raleigh, North Carolina. Contracts with the Army to supply bearings ($8,341,000). Highlights from Board decision include: Refused to supply the union with requested information necessary to bargain. Incident occurred at a facility in Muskegon, Michigan. Contracts with the Internal Revenue Service to provide data entry services ($5,311,000). Highlights from Board decision include: Refused to bargain with the union. Incident occurred at a facility in Beckley, West Virginia. Contracts with the Defense Logistics Agency to remove hazardous waste and with the Department of the Army to maintain or repair structures ($55,711,000). Highlights from Board decision include: Refused to reinstate five workers to their prestrike positions when they offered to return to work. Incidents occurred at facilities in Sandy and Pleasant Grove, Utah. Contracts with the U.S. Army Corps of Engineers to construct dams and other facilities ($15,967,000). Highlights from Board decision include: Discharged three workers for their union activities. Opposed the union's attempt to organize and represent workers by threatening workers with a reduction in wages if they selected the union and intimidating workers in other ways. The Board issued a bargaining order because the "coercive and discriminatory conduct has interfered with the holding of a fair and free election." The firm is based in Meriden, Connecticut. Contracts with GSA to provide maintenance or repairs of mechanical equipment ($7,986,000). Highlights from Board decision include: Refused to hire 13 incumbent workers for a new work location because these workers were represented by a union. Incidents occurred at several facilities in New York. Contracts with the Air Force to provide electrical and electronic equipment ($2,981,000). Highlights from Board decision include: Refused to provide the union with requested information necessary to bargain and to recognize the union. Incidents occurred at facilities in Los Angeles and Buena Park, California. Contracts with the Federal Aviation Administration to provide custodial and janitorial services ($161,000). Highlights from Board decision include: Failed to bargain in good faith with the union. Incident occurred at a facility in Aurora, Colorado. Contracts with GSA for telephone and/or communication services ($229,952,000). Highlights from Board decision include: Refused to approve posting of union-related material on a bulletin board. Incident occurred at a facility in Winona, Minnesota. Contracts with the National Institutes of Health to provide biomedical research ($448,000). Highlights from Board decision include: Threatened workers with unspecified reprisals if they engaged in protected concerted activity.
The medical center is located in Maywood, Illinois. Contracts with the Air Force to provide building maintenance services ($7,066,000). Highlights from Board decisions include: Suspended and then discharged three workers at a facility in Palisades, New York, during an organizing campaign because of union involvement. Promised improved wages, benefits, and remedies to grievances to discourage worker support for union. Prohibited workers from wearing union insignia at a facility in Los Angeles, California. Contracts with the Navy and Air Force to provide aircraft ($7,654,628,000). Highlights from Board decision include: Transferred 32 workers out of the bargaining unit at a facility in Huntington Beach, California, without obtaining the union's agreement. Very recently, the U.S. Court of Appeals (D.C. Circuit, September 13, 1995) remanded the NLRB cases against McDonnell Douglas Corporation to the Board. The U.S. Court of Appeals asked the Board to reconsider its decision. The Board's additional review could affect McDonnell Douglas Corporation's classification as a labor law violator. Contracts with the U.S. Army Corps of Engineers to provide plastics and other fabricated materials ($148,000). Highlights from Board decision include: Refused to execute the agreement made with union. Incident occurred in Chicago, Illinois, where the firm is based. Contracts with the Defense Logistics Agency to provide meat, poultry, or fish ($48,000). Highlights from Board decision include: Unilaterally changed the wages, hours, and other terms and conditions of employment during an organizing campaign. Threatened workers with plant closure if union succeeded in organizing workers. Incidents occurred in Omaha, Nebraska, where the firm is based. Contracts with the Air Force to provide automated data processing input, output, and storage devices ($3,938,000). Highlights from Board decision include: Refused to bargain with the union. Unilaterally implemented a new attendance policy without negotiating with union. Incidents occurred in Sidney, Ohio, where the firm is based. Contracts with the Department of Agriculture to provide meat, poultry, or fish ($117,414,000). Highlights from Board decision include: Discriminated against 258 former union workers by applying more rigorous hiring criteria when the plant reopened in Greeley, Colorado. Took numerous unlawful actions against workers to discourage union activity, including threatening to close plant. Board issued a broad cease and desist order because of the nature and extent of violations. The election was set aside because of the firm's conduct. Contracts with the Department of the Army to provide ADP software ($3,828,000). Highlights from Board decisions include: Prohibited workers from either distributing or posting union literature during an organizing campaign. Created, dominated, assisted, and interfered with worker satisfaction councils. Incidents occurred in Dayton, Ohio, where the firm is based. Contracts with the Department of the Army to provide telephone and communication services ($3,391,000). Highlights from Board decision include: Failed and refused to supply the union with requested information necessary to bargain. The firm is based in Boston, Massachusetts. Contracts with the Department of Agriculture to provide meat, poultry, or fish ($117,414,000). Highlights from Board decision include: Refused to bargain with the union. Incident occurred at a facility in Edgar, Wisconsin.
Although NCR Corporation is a subsidiary of AT&T, we listed it separately both because violations were committed within each firm and because contract dollars to NCR could be identified separately from those going to AT&T. Contracts with the Department of Agriculture to provide meat, poultry, or fish ($19,721,000). Highlights from Board decision include: Refused to bargain with union and to furnish information necessary to bargain. Incident occurred at a facility in Plover, Wisconsin. Contracts with the Department of Agriculture to provide candy and nuts ($14,579,000). Highlights from Board decision include: Discharged three workers during an organizing campaign because of union activity. Took numerous unlawful actions to discourage workers from union activity during an organizing campaign, including "coercively" interrogating workers. Incidents occurred at a facility in Lexington, Kentucky. Contracts with the Air Force to provide gas turbines and jet engines and with the Navy to provide aircraft rotary wings ($3,008,796,000). Highlights from Board decision include: Failed to honor the commitments it gave the union toward resolving numerous job design grievances. Incident occurred at a facility in Middletown, Connecticut. Contracts with the Air Force to provide electricity ($12,424,000). Highlights from Board decision include: Failed to process valid dues-checkoff authorization for three unit workers. Unilaterally subcontracted out work without first notifying union and giving union opportunity to bargain. Failed to give union timely notice of workers' change in status from temporary to regular worker for 6 months. The utility is based in Denver, Colorado. Contracts with GSA to provide electricity ($79,000). Highlights from Board decision include: Issued a disciplinary warning to one worker because he was the union shop steward. The utility is based in Tulsa, Oklahoma. Contracts with the Defense Logistics Agency to provide packaging and packing bulk materials and with GSA for office building lease ($5,611,000). Highlights from Board decision include: Discriminated in the terms and conditions of employment by permitting a former full-time worker (who took time off to be a full-time union president) to retroactively bid and obtain a job position, thereby displacing two other workers who had departmental seniority. Incident occurred at a facility in Louisville, Kentucky. Contracts with GSA to provide space heating equipment and water heaters ($106,000). Highlights from Board decision include: Refused to bargain with union following its certification. Incident occurred at a facility in Milledgeville, Georgia. Contracts with the Department of Agriculture to provide dairy foods and eggs ($1,290,000). Highlights from Board decision include: Told workers that the union steward was filing frivolous grievances. Interrogated a worker about whether the worker had filed a grievance. Told a worker that there was a "militant union segment" in the store. Incidents occurred in Lakewood, Colorado. Contracts with the Defense Logistics Agency to provide petroleum-based liquid propellants ($315,957,000). Highlights from Board decision include: Declared an impasse with the union, then unilaterally implemented changes in the wages, hours, and other terms and conditions, including laying off or terminating seven workers. Refused to supply the union with information necessary to bargain. Unilaterally subcontracted out bargaining unit work. Incidents occurred in Puerto Rico.
Contracts with the Defense Logistics Agency to provide chemicals ($1,426,000). Highlights from Board decision include: Discharged a worker because of his union activity. Incident occurred at a facility in Emeryville, California. Contracts with the Navy to provide fiber optic cables and other cable wires ($17,023,000). Highlights from Board decision include: Discharged a worker because of union activity. Prohibited workers from engaging in union activities. Incidents occurred at a facility in Newington, New Hampshire. Contracts with GSA to provide paper and paperboard ($266,000). Highlights from Board decision include: Prohibited workers from wearing anti-employer signs or articles of clothing. Incident occurred at a facility in Anderson, California. Contracts with the Navy to provide gas turbines and jet engines ($4,627,000). Highlights from Board decision include: Laid off 11 workers due to union organizing activity. Threatened workers with layoff and discharge. Incidents occurred in South Glastonbury, Connecticut, where the firm is based. Contracts with the Army to provide vehicle brake components ($102,000). Highlights from Board decision include: Threatened workers with plant closure in the event of unionization. Board ordered the firm to hold a second union representation election. Incidents occurred at a facility in Somerset, Pennsylvania. Contracts with the Army to provide radio and television equipment ($17,341,000). Highlights from Board decision include: Photographed workers and used these photographs for an antiunion videotape without the workers' consent. Refused to furnish the union with requested information necessary to bargain. Board ordered the firm to hold a new election when the circumstances permit. Incidents occurred at several facilities in New Jersey and New York. Contracts with the Bureau of Reclamation to provide administrative, mailing, and distribution services ($32,000). Highlights from Board decision include: Solicited worker grievances and promised workers that the employer would resolve grievances if they did not select the union. Interrogated workers regarding their union activities. The firm is based in Garden City, New York. Contracts with GSA to provide guard services ($793,000). Highlights from Board decision include: Suspended and then discharged three workers because of union activity. Incident occurred at a facility in Newark, New Jersey. Contracts with the Army to provide custodial services and with the Navy to provide food services ($6,349,000). Highlights from Board decision include: Discharged a worker and thereafter refused to reinstate her because of her union activities. Incident occurred in Mount Olive, North Carolina, where the firm is based. Contracts with the Department of Veterans Affairs to provide construction and other utilities ($1,755,000). Highlights from Board decision include: Refused to make payments owed to various funds on behalf of employees, including the health and welfare fund and the pension fund. Incident occurred at a facility in Springfield, Massachusetts. Contracts with the Department of Agriculture to provide meat, poultry, or fish ($10,791,000). Highlights from Board decision include: Took numerous unlawful actions to oust the union, including circulating a decertification petition. Unilaterally implemented changes in wages and working conditions. Stopped bargaining with the union. Incidents occurred at a facility in Dardanelle, Arkansas. Contracts with the Federal Emergency Management Agency to conduct studies ($263,000).
Highlights from Board decision include: Refused to bargain with union. Incident occurred at a facility in Santa Clara, California. Contracts with the Air Force to provide air charter services for packages ($88,504,000). Highlights from Board decision include: Refused to furnish the union with requested information necessary to bargain. Incident occurred at a facility in Obetz, Ohio. Contracts with the Department of Defense to provide custodial and other housekeeping services ($512,000). Highlights from Board decision include: Discharged three workers during an organizing campaign because of their support for the union and to discourage membership in the union. Took numerous unlawful actions against workers during an organizing campaign, including warning them that its records could be changed to facilitate their discharge if they continued to support the union. Incidents occurred at a facility in Windsor-Locks, Connecticut. Contracts with the Department of Veterans Affairs to provide nursing home care ($14,000). Highlights from Board decision include: The firm took numerous unlawful actions to discourage union activity, including refusing to hire one employee and denying another employee a wage increase because of their union activities. Threatened employees with changes in the terms and conditions of employment because of their union membership. Refused to bargain with the union. The current parent firm is Health Care Facilities, based in Manchester, Connecticut. Contracts with the Department of Energy for the operation of research and development facilities ($224,762,000). Highlights from Board decision include: Discharged a worker and took numerous other unlawful actions to discourage union activity, including threatening workers with loss of current wages and benefits if union were voted in. Created employer-dominated committees during an organizing drive; then dissolved these committees after the union lost and filed objections to the election. Board issued a broad cease and desist order as well as an order to bargain because of the "pervasive and serious nature of misconduct." Incidents occurred in West Jordan, Utah. Contracts with the Department of Energy for the operation of government industrial buildings ($4,918,087,000). Highlights from Board decision include: Laid off four workers. Failed to bargain with the union. Incidents occurred at a facility in Baltimore, Maryland. Contracts with the Tennessee Valley Authority to provide diesel engines and components ($36,000). Highlights from Board decision include: Discharged a worker because he refused to cross a picket line. Threatened its workers with discipline up to and including discharge if they refused to cross the picket line. Incidents occurred at a facility in Evansville, Indiana. Contracts with the Department of Veterans Affairs to provide nursing home care ($1,139,000). Highlights from Board decisions include: Recognized a union as the exclusive representative of workers when this union had not been selected by an "uncoerced" majority of workers. Threatened to discharge workers who did not want to be members of this union and discharged one worker because she refused to pay union dues when she was under no obligation to do so. Board issued a broad cease and desist order because of the "pervasive nature" of the violations. After the workers voted in another union, the employer refused to bargain or furnish requested information to this union necessary to bargain. Incidents occurred at a facility in New Haven, Connecticut.
Contracts with the Army to provide architects and general engineering services ($113,000). Highlights from Board decision include: Discriminatorily reduced worker hours and denied holiday, vacation, and sick leave requests during a period of a union's boycott. Threatened workers during collective bargaining negotiations and photographed and videotaped workers engaged in union activities. Incidents occurred at a facility in Butte, Montana. Contracts with the Navy to provide office and residential construction services ($44,210,000). Highlights from Board decision include: Interfered with the rights of union representatives to enter the job site for the purpose of engaging in lawful union activity. Incident occurred in Tustin, California. Summaries of the cases involving the 15 firms that, based on our review of Board decisions, might be considered more serious violators of NLRA appear below. These summaries capture information from FPDS on the federal contracts that were awarded to these violators in fiscal year 1993 as well as information from Board decisions issued during fiscal years 1993 and 1994. The violations and remedies are reported as they appear in the Board decisions. Modifications to the violations and remedies, if the case was appealed, are reflected in these summaries. The firm's headquarters is in Plymouth, Massachusetts. The violations occurred at the firm's Diablo Canyon, California, facility. The firm provides radiation protection services to nuclear power plants throughout the United States. During outages, Bartlett Nuclear, Inc., provides temporary workers to utility companies. All of its fiscal year 1993 federal contract dollars ($120,000) are with the Tennessee Valley Authority. This firm violated section 8(a)(3) and (1) of NLRA. The firm abolished the jobs of 23 workers who would not abandon their economic strike and then put these workers on probation for a year. Specifically, the firm threatened to abolish the jobs of workers if they did not abandon their strike and return to work; took this action against 23 workers; put these 23 workers on probation for a year, which prevented them from obtaining employment assignments for that year; and, realizing the legal problems that might ensue, subsequently told 18 of the 23 workers that the probation was rescinded. By that time, however, many of these workers had missed out on job opportunities. The firm was ordered to offer each of the affected workers immediate reinstatement to their former positions without loss of seniority and other privileges and to make these workers whole for lost earnings from the date of discharge to the date of a bona fide offer of reinstatement, less net interim earnings, plus interest. The firm's headquarters is in Shreveport, Louisiana, which is also where the violations occurred. Sixty-seven percent of its $5.6 million in fiscal year 1993 federal contract dollars ($3.8 million) are with the U.S. Army Corps of Engineers for ship repair. These contract dollars went to its parent firm, Trinity Industries, Inc. The firm violated section 8(a)(1), (3), and (5) of NLRA. The firm discharged four employees and took numerous unlawful actions to discourage union activity. The Board decision states that these violations occurred during the firm's attempt to "oust" the union during a lawful ULP strike.
The firm prohibited its workers from distributing prounion materials; solicited its workers to sign a petition to oust the union as the exclusive bargaining representative; promised its workers a pay raise if they would oust the union; promised its workers an increase in work hours if they would oust the union; promised an employee a test to qualify for a higher grade at higher pay if the employee signed a petition to oust the union; threatened to reduce hours if workers did not oust the union; interrogated workers about union activities; promised more work if its employees ousted the union as the bargaining representative; solicited its employees to encourage other employees to sign the petition to oust the union; implied to its employees that they would receive unspecified benefits if they ousted the union as their bargaining representative; promised increased benefits to nonunion employees; threatened retaliation against union employees and more onerous work for prounion employees if the employees ousted the union as the bargaining representative; discharged four workers due to their union activities; transferred one employee from the night shift to the day shift due to union activity; withdrew recognition from and refused to bargain with union by unilaterally removing workers from bargaining unit and reducing work hours without notifying and bargaining with the union; and created vacancies in a substantial percentage of jobs in a particular job classification in the bargaining unit without notifying and bargaining with the union. The firm was ordered to offer the four discharged workers immediate and full reinstatement to their former jobs or equivalent positions; make them whole for any loss of earnings and any other benefits, plus interest; recognize, meet, and bargain with the union; restore conditions to the status quo as they existed before illegally withdrawing recognition and make workers whole for any loss of earnings and benefits, plus interest; and accord all striking workers the rights and privileges of ULP strikers, offering strikers not heretofore reinstated immediate and full reinstatement and making them whole for any loss of earnings. This firm is also referred to as Beverly California Corporation. Its headquarters is in Fort Smith, Arkansas, although at the time of the violations the firm was based in Pasadena, California. Beverly Enterprises operates nearly 1,000 nursing home facilities throughout the nation. All of its fiscal year 1993 federal contract dollars ($10.3 million) are with the Department of Veterans Affairs for providing nursing home services. The U.S. Court of Appeals (2nd Circuit, Feb. 28, 1994) modified the Board's decision. This summary reflects the case after incorporating the Court of Appeals decision. The firm issued less favorable performance evaluations to workers because of their support for or activities on behalf of a union; failed and refused to bargain in good faith with a union selected by a majority of its workers as their collective-bargaining representative; unilaterally implemented changes in terms and conditions of employment of workers without prior notice or affording an opportunity to bargain to the union selected as their collective-bargaining representative; failed and refused to provide a union representing its workers, on request, with information necessary and relevant to its collective-bargaining functions; failed and refused to meet and bargain with a union representing its workers concerning workers' complaints and grievances; and assaulted union representatives or delegates.
The firm was ordered to offer full reinstatement to 16 workers (across several different centers) to their former positions or, if those positions no longer exist, to substantially equivalent positions; make whole the 16 workers listed above, the 17 workers unlawfully discharged on September 15, 1986, but later rehired at Fayette Health Care Center, and four other workers (of different centers) for any loss of pay and other benefits, with interest; make whole, with interest, those workers at Parkview Gardens Care Center adversely affected by the unlawful implementation of the vacation buy-out program; on request, furnish to the applicable union information that is relevant and necessary to its role as exclusive bargaining representative of the unit workers; on request, bargain in good faith concerning wages, hours, and other terms and conditions of employment with any union selected by its workers as their collective-bargaining representative; and set aside the representation elections at Four Chaplains Convalescent Center and Parkview Manor Nursing Home and have new elections ordered and conducted by the Regional Director for Region 7 and Region 30, respectively, when the latter deems it appropriate. The firm's headquarters is in Bethesda, Maryland. The violations occurred at three facilities in Los Angeles, California. The firm caters food for commercial airlines. All of its fiscal year 1993 federal contract dollars ($133,000) are with the Army for the provision of food services. These contract dollars went to its parent firm, Caterair Holdings Corporation. The firm violated section 8(a)(1), (3), and (5) of NLRA. The violations occurred during the firm's attempt to decertify the union. During a lawful ULP strike, 289 workers were unlawfully discharged by Caterair. The firm then brought in permanent replacements. Specifically, the firm circulated a petition among workers to decertify the union; promised economic benefits and threatened economic harm; told workers that they were discharged or automatically replaced if they went on strike; failed and refused to reinstate ULP strikers to their former or equivalent positions of employment; withdrew recognition from union and refused to bargain with the union; and unilaterally granted a wage increase without bargaining with the union. The firm was ordered to reinstate ULP strikers and pay them back pay, with interest. The violations occurred at its facility in Hattiesburg, Mississippi. The firm is engaged in the processing and nonretail sale of poultry products. Ninety-five percent of its $3.5 million in fiscal year 1993 federal contract dollars ($3.3 million) are with the Department of Agriculture to provide poultry. These contract dollars went to its parent firm, Durbin Marshall Food Corporation. The U.S. Court of Appeals (D.C. Circuit, April 29, 1994) modified the Board's decision. This summary reflects the case after incorporating the Court of Appeals decision. Four employees were discharged and numerous unlawful actions taken to discourage union activity. For both cases, the firm received a broad cease and desist order, which, as stated in one of the decisions, was for "widespread and egregious unlawful acts committed by the employer." The U.S. Court of Appeals (5th Circuit, Dec. 16, 1994) modified the Board's decision. This summary reflects the case after incorporating the Court of Appeals decision.
In case 15CA11268, the firm refused to allow one employee to stay in the breakroom or on parking lot premises beyond his working hours, and issued him a written reprimand for remaining on the premises; decreased the hours of two workers; issued written warnings to six workers; assigned more onerous work to and decreased the hours of one employee and harassed her; terminated two workers; and discharged a supervisor because of his refusal to commit ULPs in order to discourage workers from joining, supporting, and assisting the union or engaging in other concerted activities. In case 15CA11528, the firm violated section 8(a)(1) and (4) of NLRA. The firm threatened workers with discharge because they testified in an NLRB hearing; threatened its workers that it was futile for them to select the union; threatened its workers with plant closure if they selected the union; interrogated an employee about his union activities; and discharged and refused to rehire an employee because she gave testimony in an NLRB hearing. In case 15CA11268, the firm was ordered to rescind the unlawful warnings issued to seven workers, rescind its unlawful discharges of two workers and the unlawful transfer and reduction of hours of one other employee, offer them full reinstatement to their former positions, and make them whole for all loss of wages and benefits, with interest; make whole the discharged supervisor for his lost earnings and benefits from the date of his discharge until the date on which the employer first learned of his prior misconduct constituting a lawful basis for discharge, with interest; and make whole all workers for all losses of wages and benefits sustained by them, with interest, as a result of the unlawful reduction in work hours. In case 15CA11528, the firm was ordered to offer an employee discharged for testifying at an NLRB hearing immediate and full reinstatement to her former job and make this employee whole for any loss of earnings, plus interest. The firm's headquarters is in Dubuque, Iowa. The violations occurred at the firm's New Paris, Indiana, facility. The firm manufactures, sells, and distributes recreational vehicle equipment and related products. Eighty-nine percent of its $306,000 in fiscal year 1993 federal contract dollars ($272,000) are with the Air Force for household furniture. The firm violated section 8(a)(1) of NLRA. During an organizing campaign, the firm took numerous unlawful actions to discourage union activities, including threatening its workers with more onerous work rates, loss of benefits, discharge, and plant closure if they selected the union. The firm also withheld $50 of workers' pay as simulated union dues.
The firm promised its employees improved terms and conditions if they would abandon union support; interrogated its employees regarding their union membership, activities, and sympathies; discriminatorily enforced a no-solicitation, no-distribution rule regarding union information; threatened its employees through its supervisors with the elimination of their benefits and requiring the union to bargain from scratch if its employees selected the union as their collective-bargaining representative; gave its employees the impression that their union activities were under surveillance; withheld $50 of employees' pay as simulated union dues, fines, and assessments without their authorization, requiring employees to sign a receipt acknowledging their agreement with the employer's position that they would be required to pay at least the amount withheld for union dues, and delaying by a day paying the amount withheld to any employee who refused to sign the receipt; threatened its employees with more onerous work rates and loss of pay if they selected the union; threatened its employees with loss of their jobs or discharge if they selected the union; threatened its employees with plant closure if they selected the union; and issued to its employees written material wherein the firm threatened its employees with loss of work and jobs if they selected the union. The firm was ordered to hold a rerun election for the union at such time as it is deemed that a free choice on the issue of representation can be made. The firm's headquarters is in Irvine, California. The violations occurred at several facilities in Kentucky. Fluor Daniel, Inc., is engaged in the engineering, construction, and maintenance business throughout the United States. Seventy-two percent of its $508 million in fiscal year 1993 federal contract dollars ($367 million) are with the Department of Energy. These contract dollars went to its parent firm, the Fluor Corporation. In this case, Fluor Daniel entered into a 3-year contract with Big Rivers Electric Company to do service and maintenance work on various power generating facilities operated by Big Rivers. The firm violated section 8(a)(1) and (3) of NLRA. The firm discriminatorily refused to hire 53 applicants because of their union affiliation and discharged one employee for refusing to cross a picket line. The Board decision found there was "strong evidence of union animus" in this case because not one applicant whose application bore the words "voluntary union organizer" was hired, even when their qualifications and job experience were at least equal to those of the applicants who were hired. Specifically, the firm threatened workers with discipline and discharge if they refused to cross a picket line; discharged an employee because the employee refused to cross a picket line; and failed and refused to offer positions to 53 discriminatees because they engaged in the protected concerted activity of letting the employer know they were voluntary union organizers. The firm was ordered to offer the employee who was discharged for refusing to cross a picket line full reinstatement to his former position and offer to the 53 individuals (who were refused hire because they engaged in activities on behalf of a labor organization) employment in positions for which they applied or, if those positions no longer exist, to substantially equivalent positions; and make the above individuals whole for any loss of pay and other benefits suffered by them. The firm's headquarters is in Meriden, Connecticut. The firm is engaged in construction.
Ninety percent of its $16 million in fiscal year 1993 federal contract dollars ($14.4 million) are with the U.S. Army Corps of Engineers to construct dams and other facilities. These contract dollars went to its parent firm, Lane Industries, Inc. The firm violated section 8(a)(1) and (3) of NLRA. The firm discharged three employees for their union activities and interfered with the union's attempt to organize in a variety of ways. The Board issued a bargaining order because the "coercive and discriminatory conduct has interfered with the holding of a fair and free election." Specifically, the firm threatened workers with a reduction in wages if they selected the union as their bargaining representative; coercively interrogated workers regarding their union sympathies; admonished workers that they were "putting the wood" to the employer and "biting the hand that feeds them" in seeking union representation; discriminatorily laid off three workers; and coercively and discriminatorily interfered with the holding of a fair and free election among the unit workers. The firm was ordered to offer the three discharged workers immediate and full reinstatement to their former jobs or, in the event their former jobs no longer exist, to substantially equivalent jobs; make the above workers whole, with interest, for any loss of earnings they may have suffered by reason of their discriminatory layoffs; and, upon request, bargain in good faith with the union as the exclusive bargaining agent of its workers and, if an understanding is reached, embody that understanding in a signed agreement. The firm's headquarters is in Greeley, Colorado, which is also where the violations occurred. This case involves the firm's Greeley, Colorado, meat processing facility, which reopened in March 1982 after a 2-year closure. Forty-seven percent of its $117 million in fiscal year 1993 federal contract dollars ($55.7 million) are with the Department of Agriculture. These contract dollars went to its parent firm, ConAgra, Inc. The U.S. Court of Appeals (10th Circuit, May 19, 1992) modified the Board's decision. This summary reflects the case after incorporating the Court of Appeals decision. The Board issued a broad cease and desist order, describing the firm's unlawful conduct as ". . . pervasive, and outrageous." The Board also ordered that the election be set aside because of the firm's conduct.
Specifically, the firm threatened workers that if the union won the election, the employer would settle the outstanding ULP case against the employer, fire the present workers, and rehire the former workers; told workers that, if the union lost the election, the employer would fight vigorously the outstanding ULP case against the employer, even if it took years to do so, before the employer would fire even one present employee in order to rehire a former employee; threatened workers that the plant would be closed if the workers selected the union to represent them; threatened an employee that the workers' selection of the union as their collective-bargaining representative would cause the Greeley plant to be closed again, and suggested in that context that the workers form their own organization to bargain with the employer instead of selecting the union to represent them; told an employee that workers who voted for the union were a bunch of troublemakers and ought to be fired; threatened an employee that workers would lose their profit-sharing benefits if the workers selected the union as their collective-bargaining representative; threatened an employee with retaliation for revealing statements made by a supervisor of the employer, which had resulted in the union's filing charges against the employer; told an employee that any employee who would testify against the employer in an NLRB hearing ought to be "shot or abandoned on some island"; promised an employee free work gloves if the employee voted against the union; told an employee to solicit other company workers to sign a petition against the union, in the context of telling the same employee that he would be sure to get a promotion to a leadman's job; disparately applied its work rules to permit workers to engage in antiunion activities in the plant while not permitting workers to engage in prounion activities; failed to rehire or delayed in rehiring former workers because of their past union membership and activities after they had filed Monfort applications for employment; refused to (re)hire an employee because the union had filed a ULP charge with the NLRB against the employer with regard to the employer's prior termination of the employee; and discriminatorily applied facially neutral hiring criteria to disqualify former unionized workers who sought reemployment at the employer's reopened plant.
The firm was ordered to offer an employee immediate and full reinstatement to his former job, or if that job no longer exists, to a substantially equivalent position, and make him whole for any loss of earnings and other benefits suffered as a result of the discrimination against him; offer an employee and those former workers whom it had unlawfully refused to rehire immediate and full reemployment in the positions for which they would have been hired but for the respondent’s unlawful discrimination, or, if these positions no longer exist, to substantially equivalent positions at the respondent’s Greeley, Colorado, plant; make each of them, as well as those former workers whom it has unlawfully delayed in rehiring, whole for any loss of earnings and other benefits resulting from the discrimination against them, and place on a preferential hiring list all remaining discriminatees who would have been hired but for the lack of available jobs; on request of the union made within 1 year of the issuance of the order here, make available to the union without delay a list of names and addresses of all workers employed at the Greeley, Colorado, plant at the time of the request; immediately on request of the union, for a period of 2 years from the date on which the notice is posted or until the regional director has issued an appropriate certification following a fair and free election, whichever comes first, grant the union and its representatives reasonable access to the Greeley, Colorado, plant bulletin boards and all places where notices to workers are customarily posted; immediately on request of the union, for a period of 2 years from the date on which the notice is posted or until the regional director has issued an appropriate certification following a fair and free election, whichever comes first, permit a reasonable number of union representatives access for reasonable periods of time to nonwork areas within its Greeley, Colorado, plant so that the union may present its views on unionization to the workers, orally and in writing, in such areas during changes of shift, breaks, mealtimes, or other nonwork periods; in the event that during a period of 2 years following the date on which the notice is posted, or until the regional director has issued an appropriate certification following a fair and free election, whichever comes first, any supervisor or agent of the employer convenes any group of workers at the employer’s Greeley, Colorado, plant and addresses them on the question of union representation, give the union reasonable notice thereof and afford two union representatives a reasonable opportunity to be present at such speech and, on request, give one of them equal time and facilities to address the workers on the question of union representation; in any election which the Board may schedule at the employer’s Greeley, Colorado, plant within a period of 2 years following the date on which the notice is posted and in which the union is a participant, permit, on request by the union, at least two union representatives reasonable access to the plant and appropriate facilities to deliver a 30-minute speech to workers on working time, the date thereof not to be more than 10 working days but not less than 48 hours before any such election; and it is further ordered that the election conducted on June 24, 1983, be set aside. The firm’s headquarters is in Richmond, Virginia. The violations occurred at a facility in Lexington, Kentucky. 
The employer is engaged in the interstate transportation of freight. Sixty-four percent of its $14.6 million in fiscal year 1993 federal contract dollars ($9.3 million) are with the Department of Agriculture. These contract dollars went to its parent firm, Union Pacific Corporation. The firm violated section 8(a)(1) and (3) of NLRA. The firm discharged three workers for their union organizing activities and took numerous unlawful actions to discourage employees from union activity, including “coercively” interrogating employees.
told workers it thought the union had a “plant” in the terminal and that the employer thought it was a certain named employee, creating the impression among workers that their concerted organizing activities were under surveillance; asked workers who the “plant” was and which workers were supporting the union; told workers it wanted to know who the “plant” was so it could get rid of him; asked workers to keep their eyes and ears open and report any workers engaged in distributing leaflets to management immediately; told workers if union organizers came in the gate, to close the gate, so employer could have them arrested; manifested animus toward the union by restraining and coercing workers in the exercise of their protected rights; told workers it was okay for them to “beat the hell” out of the union organizers; told workers the employer had an open-door policy and that they did not need a union; told workers that its unionized plants did not have a contract, tending to make its workers believe unionization was futile; told workers top management said that if the workers selected the union, employer would close its doors; and discriminatorily discharged three workers for leading union organizing activities. The firm was ordered to offer to recall the three workers for immediate and full reinstatement to their former jobs and make them whole, with interest, for any loss of earnings or benefits they may have suffered as a result of their discharge.

The firm’s headquarters is in Springdale, Arkansas. The violations occurred at its facility within the same state in the city of Dardanelle. The firm is engaged in the processing of poultry products. Eighty-one percent of its $10.8 million in fiscal year 1993 federal contract dollars ($8.8 million) are with the Department of Agriculture. This firm violated section 8(a)(1) and (5) of NLRA. The firm took numerous unlawful actions to discourage employees from union activity, including assisting a decertification drive in which the firm withdrew recognition of the union and unilaterally implemented changes in wages and working conditions. At this time, workers were threatened, interrogated, and solicited for their signatures on a decertification petition.
directed, controlled, circulated, and assisted in the circulation of a decertification petition; promised workers wage increases, bonuses, and other benefits if the workers would decertify the union; threatened workers with the loss of wage increases, bonuses, and other benefits if the workers did not decertify the union; while engaged in the training of new workers, told these workers that the union could do no more for them than the employer and thus discouraged support for the union and encouraged bypassing the union and dealing directly with the company; surveilled and interrogated workers concerning their union sympathies and preferences by observing them as they were solicited by employer’s agent for their signatures on a decertification petition; failed and refused to bargain with the union as the exclusive collective-bargaining representative of its workers in the above-noted unit; withdrew recognition of the union as the exclusive collective-bargaining representative of its workers; unilaterally implemented the following changes in wages and working conditions: instituting a performance bonus of between 2 and 3-1/2 percent; implementing a wage increase; and increasing shift premiums; unilaterally implemented a new attendance policy and a new service award and attendance award program; refused to furnish the union with information which it requested; and interfered with workers discussing union business on nonwork time in nonwork areas. The firm was ordered to recognize and, on request, bargain with the union as the exclusive representative of the workers concerning terms and conditions of employment and, if an understanding is reached, embody the understanding in a signed agreement; on request of the union, rescind any or all of the changes it has unilaterally implemented on or after the date it unlawfully withdrew recognition from the union, including, but not limited to, a performance bonus of between 2 and 3-1/2 percent, a wage increase, an increased shift premium, a new attendance policy, and a new service award and attendance award program; and furnish the union information it requested in its letter of July 9, 1991, and, on request, furnish the union any other necessary and relevant information which it may request in furtherance of its role as bargaining representative of the workers.

The firm’s headquarters is in Cheshire, Connecticut. The violations occurred in the same state in the city of Windsor-Locks. The firm was under contract to perform janitorial services for United Technologies Corporation at its Hamilton-Standard Division plant. Eighty-two percent of its $512,000 in fiscal year 1993 federal contract dollars ($421,000) are with the Department of Defense to provide janitorial services. The firm violated section 8(a)(1) and (3) of NLRA. The firm discharged three employees because of their support for the union. Also, the firm took numerous unlawful actions against employees during an organizing campaign. At the time of these discharges, only one challenged ballot was blocking the union’s certification as bargaining representative.
coercively interrogated workers as to their support for the union; threatened to discharge workers to discourage them from supporting the union; threatened to freeze their wages to induce them to vote against the union; warned them that its records could be changed to facilitate their discharge if they continued to support the union; created the impression among them that their union activities were being kept under surveillance; informed them in effect that it was futile for them to support the union; warned them that they were gambling with their jobs if they voted for the union; discharged three workers because they supported the union; and refused to grant permission to an employee to leave the Hamilton-Standard plant because the employee supported the union. The firm was ordered to offer the above three workers immediate and full reinstatement to their former jobs or, if those jobs no longer exist, to substantially equivalent positions, and make them whole for any loss of earnings and other benefits suffered as a result of the discrimination against them.

The firm’s headquarters is in Manchester, Connecticut, which is also where the violations occurred. This firm operates three nursing homes which provide inpatient medical and professional care services for the elderly and infirm. All of its $14,000 in fiscal year 1993 federal contract dollars are with the Department of Veterans Affairs to provide nursing home care. These contract dollars went to its parent firm, Health Care Retirement Corp. Amer. Its current parent firm is Health Care Facilities. The firm violated section 8(a)(1), (3), and (5) of NLRA. The firm took numerous unlawful actions to discourage union activity, including refusing to hire one employee and denying another employee a wage increase because of their union activities. The firm also threatened employees with changes in the terms and conditions of employment because of their union membership.
changed its practice of granting wage increases to its licensed practical nurses without prior notice to the union and without affording the union an opportunity to bargain with employer with respect to this conduct and the effects of this conduct; removed work from its therapeutic recreational directors without prior notice to the union and without affording the union an opportunity to bargain with employer with respect to this conduct and the effects of this conduct; bypassed the union and dealt directly with its therapeutic recreational directors regarding hours and other terms and conditions of employment; interrogated workers regarding their membership in the union; threatened its workers with unspecified reprisals because of their membership in or activities on behalf of the union; threatened workers with changes in their terms and conditions of employment because of their membership in or activities on behalf of the union; created the impression among its workers that their activities on behalf of the union were under surveillance; solicited employee complaints and grievances in order to discourage its workers’ membership in, or activities on behalf of the union; promised its workers increased benefits and improvements in the terms and conditions of employment in order to discourage their membership in, or activities on behalf of the union; informed an employee that she could not be hired because of her membership in, or activities on behalf of the union; conditioned this employee’s employment upon refraining from membership in, or activities on behalf of the union; threatened this employee with surveillance if she became a member of or engaged in activities on behalf of the union; and constructively discharged this employee because of her membership in or activities on behalf of the union. The firm was ordered to, on request, bargain in good faith with the union concerning removing unit work from its therapeutic recreational directors, wages, hours and other conditions of employment of its therapeutic recreational directors, and the granting of wage increases to its licensed practical nurses; make the employee whole for any difference between the initial 5-percent wage increase promised to her based upon her annual appraisal and the bonus granted to her in lieu of such 5-percent wage increase, for the period beginning the date she received her first bonus payment until she quit her job in July 1993; offer to the employee the firm had refused to hire full and immediate reinstatement to the position of a licensed practical nurse or, if such position no longer exists, to a substantially equivalent position, without prejudice to her seniority and other privileges previously enjoyed; and make this employee whole for any loss of earnings suffered by her as a result of the discrimination against her.

The firm’s headquarters is in Oak Brook, Illinois. The violations occurred in the West Jordan, Utah, area. The firm is engaged in the pickup and disposal of waste. Forty percent of its $224.8 million in fiscal year 1993 federal contract dollars ($90.2 million) are with the Department of Energy. These contract dollars went to its parent firm, WMX Technologies, Inc. The firm violated section 8(a)(1), (2), (3), and (5) of NLRA. The firm discharged an employee and took numerous other unlawful actions to discourage union activity, including threatening employees with loss of current wages and benefits if the union were voted in.
The firm also created employer-dominated committees during an organizing drive and then dissolved these committees after the union lost the election and filed objections. The Board issued both a broad cease and desist order and a bargaining order because of “manifold violations of the Act, in combination with the union’s card majority.” The decision also refers to the “pervasive and serious nature of the employer’s misconduct.”
failed and refused to recognize and bargain with the union; established and dealt with the routing and productivity, safety, and benefits committees concerning terms and conditions of employment; promulgated and announced the likely adoption in August of new programs initiated by the benefits and safety committees; announced that several new programs initiated by the benefits and safety committees were in effect; instituted several new programs initiated by the benefits and safety committees and adjusted the drivers’ routes after conferring with the routing and productivity committee; retracted the newly instituted programs initiated by the benefits and safety committees because the union filed objections to the election; issued a written warning to an employee and concomitantly deemed his hydrant accident chargeable, because of his union sympathies and activities; discharged the above employee and concomitantly deemed his wall-poke chargeable and accorded prejudicial weight to his landfill and windshield incidents, because of his union sympathies and activities; issued a written reprimand to an employee supposedly for insubordination, because of his union sympathies and activities; decided an employee’s ice-related accident was chargeable because of his union sympathies and activities; promised workers that the employer would remedy their complaints if they rejected the union; told one employee he would lose existing wages and benefits, with restoration dependent upon negotiation, should the union be voted in and implied to this employee that the employer would close its doors, costing the workers their jobs, if the union got in; told one employee that the workers would lose existing wages and benefits “until they’re negotiated for” should the union get in; said that employer “would risk ULPs to keep the union out,” thereby indicating that the organizational effort was a futility; questioned an employee as to why he thought the workers needed a union; implicitly threatened the job security of workers not wishing to participate on the committees then being formed; promised through the three committees to remedy employee complaints to discourage their support of the union; promised to remedy the complaints of workers’ wives—and by implication their husbands’—to discourage support for the union; questioned an employee as to whether the employer had his support in the election; promised an employee unspecified benefits if he voted against the union; questioned an employee why he had raised his hand at a picnic to indicate his support of the union; told workers, in substance, that union representation would be a futility, promised that their complaints would be remedied if they rejected the union, and raised the prospect of closure and job loss if they brought the union in; promised to remedy a complaint raised by an employee to discourage his support for the union; and announced that the newly instituted programs emanating from the benefits and safety committees “couldn’t be put into effect” because the union had filed objections to the election; asked those with knowledge of the
objections to show themselves; and urged those with “pull” to try to get the objections dropped so that employer could put these benefits in.

The firm’s headquarters is in Boston, Massachusetts. The violations occurred in New Haven, Connecticut. This firm operates a nursing home providing inpatient medical and professional care services for the elderly and infirm. All of the fiscal year 1993 federal contract dollars ($1.1 million) are with the Department of Veterans Affairs to provide nursing home care. The U.S. Court of Appeals (2nd Circuit, Jan. 11, 1994) modified the Board’s decision. This summary reflects the case after incorporating the Court of Appeals decision. The firm threatened its workers with discharge if they refused to become members and execute dues-checkoff cards on behalf of Local 1115. The firm was ordered to withdraw and withhold all recognition from Local 1115 as the collective-bargaining representative of its workers at its New Haven, Connecticut, facility unless said labor organization has been duly certified by the NLRB as the exclusive representative of such workers; jointly and severally with Local 1115, reimburse its past and present workers for all dues and other moneys withheld from their pay pursuant to the collective-bargaining agreement executed on October 16, 1989, or any successor agreement thereto, plus interest; offer the worker immediate and full reinstatement to the worker’s former position or, if that job no longer exists, to a substantially equivalent position; and jointly and severally with Local 1115, make the worker whole for any loss of earnings the worker may have suffered as a result of the discharge.

Pursuant to a congressional request, GAO provided information on the extent to which federal contractors violate the National Labor Relations Act (NLRA), focusing on: (1) the characteristics associated with these NLRA violators; and (2) ways to improve federal contractors’ compliance with NLRA.
GAO found that: (1) in 1993, six firms held 90 percent of the federal contracts awarded to NLRA violators; (2) the cases brought to the National Labor Relations Board (NLRB) mainly involved workers’ rights, collective bargaining, and discrimination violations; (3) NLRB remedies mainly included the reinstatement of unlawfully fired workers, restoration of workers’ job status, payment of back wages or benefits, collective bargaining orders, and orders to cease threatening workers with job loss; (4) the NLRB remedies affected nearly 1,000 individual workers and thousands of additional workers represented by 12 bargaining units; (5) most of the NLRA violators were Departments of Defense and Energy contractors; (6) 15 of the 80 violators had to reinstate or restore more than 20 individuals each, had NLRB cease and desist orders issued against them, or had a history of NLRA violations; and (7) NLRB could enhance its enforcement of NLRA by collecting judgments against violators from their federal contract awards and increasing coordination with the General Services Administration (GSA) to identify such violators.
The U.S. economy has become increasingly oriented toward international trade, with exports and imports together representing about one-quarter of U.S. gross domestic product (GDP) in 1996. As the largest regional market for U.S. products, accounting for approximately $242 billion or 40 percent of U.S. exports in 1996, the Western Hemisphere is of growing importance to U.S. commercial interests. Canada and Mexico are by far the largest U.S. trade partners in the hemisphere, accounting for approximately two-thirds of total U.S. exports of goods to the region. Countries in the Western Hemisphere also account for about 30 percent of total U.S. foreign direct investment.

During the 1960s and 1970s, most countries in Latin America and the Caribbean experimented with various arrangements to promote subregional economic integration and free trade. These initiatives were generally frustrated by trade and investment restrictions characteristic of these countries’ protective economic development strategies. By the late 1980s, faced with stagnant economies and mounting external debt, countries in the region began to move away from these restrictive policies and initiated market-oriented reforms to stimulate economic growth. Although these reforms were primarily intended to address domestic economic problems, they also facilitated trade liberalization efforts. Moreover, the U.S.-Canada Free Trade Agreement in 1988 signaled a new commitment on the part of North American countries to regional trade liberalization. By the early 1990s, almost all countries in the hemisphere were engaged in multilateral or bilateral efforts to liberalize trade. After a decade of economic decline, Latin American economies have rebounded in the 1990s, and the region now represents the second fastest growing area in the world after Southeast Asia.

The 1994 Miami Summit of the Americas gave new impetus to trade liberalization efforts in the region. At Miami, the 34 democratically elected leaders of countries in the Western Hemisphere agreed to conclude a free trade agreement no later than 2005, with concrete progress by the turn of the century. The summit declaration committed participating governments to negotiate, among other things, the elimination of barriers to trade in goods and services as well as investment and to provide rules in such areas as intellectual property rights and government procurement. The plan of action adopted at Miami called for two meetings of trade ministers (“ministerials”) to reach agreement on the key principles upon which to base the Free Trade Area of the Americas (FTAA). These two ministerials, held in Denver, Colorado (1995), and Cartagena, Colombia (1996), established a series of working groups to gather data and make recommendations to the ministers in preparation for FTAA negotiations. A third ministerial took place in Belo Horizonte, Brazil, earlier this year.

The six major multilateral trading arrangements among countries of the Western Hemisphere are NAFTA, Mercosur, the Andean Community, the Caribbean Community, the Central American Common Market, and the Latin American Integration Association (LAIA). (See figs. 1 and 2.) The United States is a party only to NAFTA. There are also over 20 smaller multilateral and bilateral free trade accords among countries in the region. NAFTA, the most comprehensive trade arrangement in the region, was concluded in 1992 by Canada, Mexico, and the United States and became effective in January 1994.
NAFTA created the world’s largest free trade area, with a combined population of nearly 400 million and a combined GDP of almost $8 trillion. NAFTA provides for the gradual elimination of tariff barriers on most goods over a 10-year period. It covers trade in services, provides protection for investment and intellectual property rights, applies rules to government procurement, and contains a dispute settlement system. A distinct feature of NAFTA is its two side agreements on labor and the environment, designed to institutionalize efforts to (1) improve working conditions and living standards in each country and (2) address and resolve environmental issues that may arise between the parties.

Mercosur was created in March 1991 by Argentina, Brazil, Paraguay, and Uruguay. With a population of approximately 200 million and a combined GDP of about $851 billion, Mercosur is the world’s third largest integrated multinational market, after NAFTA and the European Union (EU). Mercosur currently functions as a customs union, providing not only for a free trade area but also for the establishment of a common external tariff. The external tariff instituted in 1995 is not to exceed 20 percent for most imports. Today, approximately 85 percent of imports from outside the bloc enter under the common external tariff, and about 90 percent of all intra-Mercosur trade is duty free. Mercosur includes a commitment by member countries to coordinate more disciplined macroeconomic policies. Also, Mercosur countries are committed to agree on a common foreign trade policy. Unlike NAFTA, Mercosur lacks agreements on intellectual property rights and government procurement. Further, while Mercosur calls for coordination on trade in services, the U.S. International Trade Commission reports that there is no fixed schedule for liberalization in this area.

Besides NAFTA and Mercosur, there are four older subregional multilateral trade groupings in the Western Hemisphere. Three of these groupings—the Andean Community, the Caribbean Community, and the Central American Common Market—are customs unions at varying stages of implementation. They have all recently taken steps to further liberalize trade and promote economic integration. The fourth subregional trade arrangement, LAIA, is a network of agreements granting tariff preferences for certain product categories to member countries.

In addition to the larger trade blocs previously discussed, there are more than 20 smaller multilateral and bilateral trade accords among the countries of the Western Hemisphere. Many of these were established during the 1990s. Five of these arrangements involve our NAFTA partners Canada and Mexico.
Mexico-Chile Free Trade Accord (1992). This agreement calls for a phased tariff elimination between the parties. It excludes many product categories such as agricultural commodities. Mexico and Chile are currently in the process of renegotiating their 1992 agreement in an effort to broaden its scope.
Mexico-Costa Rica Free Trade Agreement (1995). This agreement is generally modeled on NAFTA but excludes many agriculture and energy products.
Mexico-Bolivia Free Trade Agreement (1995). This is similar to the Mexican agreement with Costa Rica.
Group of Three Agreement—Mexico, Colombia, and Venezuela (1995). The Group of Three Agreement calls for the total elimination of tariffs over a 10-year period with some exceptions in the textile, petrochemical, and agricultural sectors.
In addition, the arrangement includes agreements on services, intellectual property rights, government procurement, and investment.
Canada-Chile Free Trade Agreement (1996). The Canada-Chile Free Trade Agreement provides for tariff elimination and contains side agreements on labor and the environment. However, it excludes, among other items, financial services and intellectual property rights.

At the FTAA ministerial meetings in Denver, Cartagena, and Belo Horizonte, 12 working groups were established for the purpose of collecting information in preparation for FTAA negotiations. At Belo Horizonte, trade ministers issued a declaration calling for formal FTAA negotiations to be launched by Western Hemisphere leaders at their next summit in April 1998. While the ministers agreed on several other key issues, there is still disagreement among participating countries on the approach formal negotiations should follow.

The areas of responsibility assigned to the 12 FTAA working groups reflect some of the priorities of the United States and other countries in the hemisphere (see table 1). For example, there are working groups on intellectual property rights and government procurement, issues of key interest to the United States; on subsidies, antidumping, and countervailing duties, areas of special concern to Argentina; and on smaller economies, a priority for Caribbean countries. The United States chairs the Working Group on Government Procurement. According to administration officials, there are also some issues of particular U.S. interest, such as labor and the environment, that are not fully addressed by any of the existing working groups. U.S. Trade Representative (USTR) officials noted that the United States has participated in all of the meetings and other activities of each working group.

The working groups were established to collect basic information on key issues in preparation for FTAA negotiations. U.S. and Organization of American States (OAS) officials explained that the working groups have been the mechanism for accelerating progress on the priorities of participating countries. Progress in meeting the information mandates set forth at the ministerials differs for each of the 12 working groups. For example, the Working Group on Investment is particularly advanced, having prepared a comprehensive technical compendium on investment treaties in the region. This compendium was published at the Belo Horizonte ministerial in May 1997. According to both U.S. and OAS officials, the Working Group on Investment has also made considerable progress, exchanging views on elements that could be included in an FTAA investment chapter, including investor protection, national treatment, and dispute settlement. Progress in other working groups has been more modest. For example, the Working Group on Market Access reported in February 1997 that many countries had yet to submit the schedules and statistics required to prepare a hemispheric data base on tariff structures and nontariff measures. Moreover, the Working Group on Dispute Settlement, which was only established in May 1997, has not yet met.

A Tripartite Committee, made up of the OAS, the Inter-American Development Bank (IDB), and the United Nations Economic Commission for Latin America and the Caribbean, was formed after the first ministerial in Denver to provide analytical support to the working groups as requested. Each organization in the Tripartite Committee is responsible for providing technical support to the FTAA process through the working groups.
For example, the IDB is collecting trade statistics to assist the Working Group on Market Access, while the OAS has provided support to other groups on trade policy issues, such as subsidies and competition policy. At this time, the Tripartite Committee’s role in support of the FTAA is anticipated to be transitory. The countries are considering the possibility of establishing a temporary FTAA secretariat during the negotiations. At the Belo Horizonte meeting, ministers directed the Tripartite Committee to conduct a feasibility study based on the agreed functions of a temporary secretariat. This study is to be reported to the vice ministers at their meeting scheduled to take place in October 1997.

In preparation for the ministerial meeting in Belo Horizonte, various countries and subregional blocs involved in the FTAA process submitted proposals for the overall strategy they would like to see pursued in formal FTAA negotiations. At the ministerial, consensus was reached on several key issues advanced in these proposals. A joint declaration issued at Belo Horizonte called for formal FTAA negotiations to be launched by the next summit of Western Hemisphere leaders scheduled to take place in Chile in April 1998. In the declaration, countries agreed that the FTAA would be consistent with member countries’ commitments under the World Trade Organization (WTO). Moreover, countries agreed that the FTAA would coexist with, rather than supplant, existing subregional trade arrangements, such as NAFTA or Mercosur, to the extent that rights and obligations under these agreements are not covered by or go beyond rights and obligations under the FTAA. The declaration also recognized the right of participating countries to negotiate independently or as members of subregional trade groupings, and the need to establish a temporary administrative secretariat to support future negotiations. Finally, the declaration reiterated the commitment of participating countries to conclude a trade agreement encompassing the entire hemisphere by 2005 at the latest.

At the Belo Horizonte ministerial, participating countries also agreed to set up a Preparatory Committee at the vice ministerial level that will make recommendations for FTAA negotiations. The establishment of a Preparatory Committee signals a new level in the FTAA process. It indicates participating countries expect concrete results in preparing for negotiations. The Preparatory Committee is supposed to meet at least three times between May 1997 and February 1998, when the next FTAA ministerial is scheduled to take place in San José, Costa Rica. At the San José ministerial, trade ministers are committed to reach agreement on the objectives, approaches, structure, and location of the FTAA negotiations, based on the recommendations of the Preparatory Committee.

Still, there is disagreement among participating countries on the pace and direction of formal negotiations. Most countries, including the United States, would prefer that formal FTAA negotiations on all issues commence during the next summit of regional leaders in 1998 and conclude no later than 2005.
The members of Mercosur, however, have proposed that negotiations proceed in three phases: (1) in 1998 and 1999, countries would agree on and begin to implement “business facilitation” measures, such as adopting common customs documents or harmonized plant and animal health certificates; (2) from the year 2000 to 2002, work would begin on “standards and disciplines,” including antidumping and countervailing duty rules, and market access for services; and (3) from 2003 to 2005, other disciplines and market access issues would be negotiated, including tariff reductions, a key concern of the United States. No other countries appear to support Mercosur’s phased approach to negotiations.

Adverse economic developments in Mexico in the months immediately following the 1994 Miami Summit raised U.S. concerns about pursuing further free trade initiatives in the region. While U.S. officials were debating the future course of U.S. involvement in regional trade efforts, other countries in the hemisphere began pursuing their own agenda, both deepening commitments under existing trade blocs and establishing new bilateral agreements. In principle, these efforts may be consistent with U.S. goals to promote free trade. In practical terms, lack of U.S. participation in shaping these agreements has created disadvantages for some U.S. exporters’ access to markets in the region. These disadvantages are beginning to be felt in various sectors, including agriculture, telecommunications, pharmaceuticals, and the automotive industry. According to representatives of several Western Hemisphere countries, regardless of whether the United States resumes a more active role in shaping regional trade liberalization efforts, their countries will continue their own initiatives toward free trade and economic integration, even if these efforts do not coincide with U.S. interests. Moreover, these officials noted that it is essential for the U.S. administration to obtain fast track authority in order to make meaningful progress toward achieving the FTAA.

In launching the FTAA at the Miami Summit, the United States was building on the momentum for free trade generated by the passage of NAFTA a year earlier. NAFTA was more comprehensive than any other agreement in the Western Hemisphere. It not only covered traditional tariff and nontariff issues but also placed important obligations on member countries in matters such as investment, government procurement practices, customs procedures, and trade in services. At the time, NAFTA was generally regarded as a blueprint for further trade liberalization in the region. Moreover, U.S. leadership was evident in its support of negotiations on Chile’s accession to NAFTA.

Only days after the summit, however, Mexico was hit by a serious financial crisis, with spillover effects in other Latin American economies. The commitment by the U.S. government of significant resources to stem and resolve the crisis raised concerns in the United States about further regional trade liberalization efforts. In the intervening period, fast track authority lapsed. Although U.S. participation in the FTAA preparatory process continued, the executive branch has been constrained from pursuing other tariff liberalization negotiations in the region. Formal negotiations on Chilean accession to NAFTA, for example, were suspended in 1995.
While debate continues in the United States regarding further regional trade liberalization efforts, other countries in the region have proceeded to negotiate new trade agreements and deepen their participation in existing arrangements. Chile has been at the forefront of this trend; it has negotiated a network of free trade agreements with several countries in the region, including Venezuela and Colombia. In 1996, Chile also concluded a free trade arrangement with Mercosur, becoming in effect an associate member of that trade bloc. Under this arrangement, Chile and the Mercosur countries will phase out tariffs on products traded among them, but Chile will not adopt Mercosur’s common external tariff. Chile’s pursuit of free trade is not limited to South America. The Canada-Chile Free Trade Agreement, which became effective on July 1, 1997, is modeled on NAFTA and is intended as a provisional agreement to facilitate Chilean accession to NAFTA. Nevertheless, as noted earlier, there are some differences between this bilateral agreement and NAFTA, reflecting some of the areas where Chilean and Canadian interests differ significantly from those of the United States. For example, under their bilateral agreement, Chile and Canada are committed to forgo imposing antidumping and countervailing duties within 6 years after the agreement goes into effect. NAFTA, on the other hand, does not affect member countries’ ability to unilaterally impose antidumping measures and countervailing duties. In addition to its trade negotiations with Canada, Chile has cultivated close commercial relations with Mexico, our other NAFTA partner. Currently, Chile and Mexico are renegotiating their 1992 free trade agreement to make it more compatible with NAFTA.

Mexico, in turn, has been extending its own web of bilateral trade agreements throughout the hemisphere. As noted earlier, Mexico has concluded bilateral free trade agreements with Costa Rica and Bolivia and has a trilateral arrangement with Colombia and Venezuela. Mexico is also negotiating free trade agreements with Ecuador, El Salvador, Guatemala, Honduras, Panama, and Peru. In addition, Mexico plans to negotiate a transitional agreement with Mercosur that will cover key areas, such as market access, government procurement, intellectual property rights, and investment.

Mercosur has been another focus of subregional trade initiatives since the Miami Summit. In addition to the arrangement with Chile, Mercosur has concluded a free trade agreement with Bolivia and is engaged in negotiations to widen its reach to other Andean Group countries. Mercosur and Mexico are also scheduled to begin trade negotiations later this year. Beyond the Western Hemisphere, Mercosur has concluded a framework agreement on trade with the EU and there are discussions aimed at establishing a free trade area encompassing the two trade blocs (see fig. 3). Mercosur has not only been broadening its network of agreements with other countries, it has also been deepening the level of economic integration among the four original member countries. As noted earlier, in 1995 Mercosur countries instituted the common external tariff, which is currently applied to about 85 percent of imports from outside the bloc. Trade among Mercosur member countries has almost tripled, from approximately $5 billion in 1991 to $14.5 billion in 1995—the last year for which figures were available.

Lack of U.S. participation in shaping emerging Western Hemisphere trade agreements has created disadvantages for some U.S.
exporters’ access to these markets. By lowering or eliminating tariffs among participating countries, subregional free trade agreements that exclude the United States result in comparatively higher duties for U.S. exports. For example, Chile’s network of bilateral trade agreements has given Chilean agricultural products an edge over U.S. exports in South America. Thus, while Chilean apples enter many South American markets duty free, Washington State apples face 10 to 25 percent tariffs. In recent years, Washington growers have seen their share of these markets dwindle as Chile capitalizes on its tariff preferences. Like Chile’s arrangements with other South American countries, the Canada-Chile agreement has already yielded benefits for Canadian firms not enjoyed by U.S. companies. Recently, Canada’s Northern Telecom won a nearly $200-million telecommunications equipment contract in Chile. According to the State Department, the choice of Northern Telecom over U.S. companies was at least in part because buying from a U.S. producer would have meant an additional $20 million in duties relative to purchasing from Canada.

While U.S. exports to Mercosur countries have been growing, U.S. exporters will likely face increasing difficulties in penetrating markets in Mercosur countries as commitment to common bloc trade policies deepens. For example, a USTR official noted that Mercosur is currently considering adopting product safety standards that are quite different from U.S. standards. This official explained that if these standards are adopted, U.S. auto manufacturers could be at a disadvantage in accessing the growing markets of Mercosur member countries. Mercosur’s position on the recent WTO Information Technology Agreement also provides an indication of how the bloc’s common foreign trade policy will complicate U.S. efforts to promote its economic interests in the region. The Information Technology Agreement, which was signed by 28 WTO members in Singapore in December 1996, provides important tariff concessions in an industry in which the United States enjoys a considerable competitive advantage. Brazil did not join in the Information Technology Agreement, seeking to protect its own emerging information technologies industry. Brazil’s position on the agreement has now been adopted as an element of Mercosur’s common external trade policy, while other partners like Argentina, if acting individually, might have taken a different position.

The difficulties faced by the U.S. pharmaceutical industry in the Argentine market also illustrate some of the drawbacks encountered by U.S. firms as countries in the region diverge from long-standing U.S. positions on intellectual property protection. In a recent statement before the Trade Subcommittee of the House Ways and Means Committee, the President of the Pharmaceutical Research and Manufacturers of America estimated that annual losses by member companies due to patent infringement in Argentina amount to several hundred million dollars. He noted that NAFTA has the strongest safeguards for intellectual property rights of any trade agreement, and concluded that if Argentina had been brought into NAFTA, that government would have had to seek to curtail patent infringement more decisively than it does now. It is worth noting that Argentina’s former Finance Minister favored joining NAFTA rather than integrating further within Mercosur.
However, after NAFTA negotiations with Chile were suspended, it became clear that prospects for Argentine accession to NAFTA were rather distant, and Argentina proceeded to cement its position within Mercosur.

Western Hemisphere leaders have indicated their countries will continue their own initiatives toward free trade and economic integration. For example, in statements during his recent visit to the United States, the President of Chile said that his country shares the U.S. interest in promoting free trade. Elaborating on his President’s remarks, a Chilean government spokesman on trade issues explained that, like the United States, Chile would like to see the widest and most comprehensive agreement possible on free trade for the Western Hemisphere. According to this official, whether through NAFTA or the FTAA, with or without the United States, Chile intends to continue to pursue trade liberalization because it is seen as furthering Chile’s own interests. Chile still wants to join NAFTA, but NAFTA is now less critical to Chile than it was in 1995.

Like Chile, Canadian interests in regional trade liberalization generally coincide with those of the United States. However, the recent Canada-Chile free trade agreement demonstrates that Canada is pursuing its commercial interests in the region. Indeed, the Canadian Minister of International Trade recently indicated that his government is considering negotiating a trade agreement with Mercosur. According to a Canadian government spokesman on trade policy, Canada’s free trade agreement with Chile was not only meant to expedite Chilean accession to NAFTA, but it was also intended to keep alive the momentum for free trade in anticipation of FTAA negotiations. Canada would like to see decisive U.S. participation in FTAA negotiations because the two countries share many interests with regard to trade. This official explained that it would be unfortunate if the United States lacked fast track authority by the time of the 1998 Santiago Summit, as it would be at a distinct disadvantage in shaping the FTAA.

It would appear that Mexico’s interests in regional trade liberalization parallel those of Chile and Canada. However, some observers suggest that Mexico may be reluctant to surrender the current advantage it enjoys in terms of access to North American markets. Nevertheless, according to Mexican government trade officials, all of Mexico’s agreements and negotiations with other countries in the hemisphere have sought to encourage the adoption of trade disciplines consistent with NAFTA. These officials explained that Mexico has actively supported Chilean accession to NAFTA and the concept of a free trade agreement that would encompass the entire hemisphere. Moreover, they noted that Mexico is committed to the principles of free trade and will continue to pursue free trade arrangements with other countries in the hemisphere and other regions.

In contrast to the NAFTA partners and Chile, the Mercosur countries’ vision of the FTAA differs significantly from that of the United States. As the largest member of Mercosur, Brazil has sought to shape the FTAA process to make it consistent with its distinct trade priorities. Since the FTAA would entail broadening Brazil’s ongoing market-opening efforts, Brazil favors a slower, managed approach to hemispheric trade liberalization. This would give its industries more time to adjust to foreign competition.
Thus, Brazil has proposed that FTAA negotiations on market access be deferred until 2003, while the United States would like to see this matter addressed as soon as negotiations begin in 1998. A Brazilian government spokesman noted that if U.S. negotiators lacked fast track authority in 1998, FTAA negotiations would still be able to reach agreement on business facilitation measures. These include items such as common customs documents, which would not require legislative approval. In this case, discussions on market access would be deferred, as favored by Mercosur in general and by Brazil in particular.

In preparing this report, we relied on our past and ongoing work on Western Hemisphere trade issues. Our description of existing subregional and bilateral trade arrangements is based primarily on a review of documents on these arrangements from academic and technical publications. For our discussion on the status of FTAA negotiations and recent trade developments in the region outside the FTAA process, we interviewed officials from the OAS, IDB, USTR, the U.S. International Trade Commission, and the U.S. Department of State; representatives from five other Western Hemisphere nations at the forefront of regional trade negotiations; and academicians and other experts on the process of regional economic integration. We also reviewed documents on the FTAA prepared by the OAS Trade Unit and the FTAA working groups; declarations and supporting documentation from the Miami Summit and the three FTAA ministerial meetings that have taken place thus far; and reports from USTR, the U.S. Department of Commerce, the U.S. International Trade Commission, and the Congressional Research Service. In addition, we attended several conferences and congressional hearings dealing with various aspects of the FTAA process. In order to provide some indication of the relative size of markets in the region, we prepared tables on the principal Western Hemisphere trade groupings presented in the appendix. These tables are based on data for individual countries in the region from the International Monetary Fund’s publications, International Financial Statistics and Direction of Trade Statistics. We used 1994 figures for these tables because that is the latest year for which information was available for most countries in the region. For certain countries we used 1993 data, when 1994 data were not available. We conducted our review from February to June 1997 in accordance with generally accepted government auditing standards.

USTR provided technical comments on a draft of this report, and we have incorporated them in the text where appropriate. USTR did not provide any evaluation of the overall thrust of the report. We are sending copies of this report to USTR, the Secretaries of Commerce and State, and interested congressional committees. We will make copies available to others on request. Please call me at (202) 512-8984 if you have any questions concerning this report. Major contributors to this report were Elizabeth Sirois, Assistant Director; Juan Gobel, Evaluator-in-Charge; Emil Friberg, Senior Economist; and Patricia Cazares, Evaluator.

Currently, there are six major multilateral trading blocs in the Western Hemisphere. Following is a general profile of each of these blocs, including information on membership, gross domestic product (GDP), per capita gross domestic product, and the bloc’s total exports, using data from 1994, except as noted.

Andean Community: established in 1969 (formerly the Andean Pact or Andean Group).
Caribbean Community: established in 1973 as successor to the Caribbean Free Trade Association (CARIFTA, established in 1967).
Central American Common Market: established in 1961.
Mercosur: established in 1991.
Latin American Integration Association: established in 1980 as a successor to the Latin American Free Trade Association (LAFTA, established in 1960).
North American Free Trade Agreement: established in 1994.

Budget Issues: Privatization Practices in Argentina (GAO/AIMD-96-55; Mar. 19, 1996).
Mexico’s Financial Crisis: Origins, Awareness, Assistance, and Efforts to Recover (GAO/GGD-96-56; Feb. 23, 1996).
NAFTA: Structure and Status of Implementing Organizations (GAO/GGD-95-10BR; Oct. 7, 1994).
U.S.-Chilean Trade: Pesticide Standards and Concerns Regarding Chilean Sanitary Rules (GAO/GGD-94-198; Sept. 28, 1994).
North American Free Trade Agreement: Assessment of Major Issues (GAO/GGD-93-137; Sept. 9, 1993; 2 vols.).
U.S.-Chilean Trade: Developments in the Agriculture, Fisheries, and Forestry Sectors (GAO/GGD-93-88; Apr. 1, 1993).
CFTA/NAFTA: Agricultural Safeguards (GAO/GGD-93-14R; Mar. 18, 1993).

Pursuant to a congressional request, GAO provided information on efforts to liberalize trade among the countries of the Western Hemisphere, focusing on: (1) the principal existing subregional trade arrangements in the Western Hemisphere; (2) the current status of Free Trade Area of the Americas (FTAA) discussions; and (3) certain recent developments in regional trade liberalization outside the FTAA process since “fast track” authority lapsed. GAO noted that: (1) almost all countries in the region participate in at least one subregional trade grouping; (2) there are now six major subregional multilateral trade groupings among countries in the hemisphere; (3) the two most significant trade blocs, the North American Free Trade Agreement (NAFTA) and the Common Market of the South, known as Mercosur, were both established during the 1990s; (4) NAFTA, the only one of these arrangements to which the United States is a party, created the world’s largest free trade area and is the most comprehensive trade agreement in the region; (5) Mercosur has followed a different approach than NAFTA to economic integration through the creation of a customs union; (6) in addition to the major multilateral trade groupings, there are more than 20 smaller trade agreements in the region, most of which have been concluded during the 1990s; (7) U.S.
Trade Representative (USTR), Organization of American States (OAS), and Inter-American Development Bank (IDB) officials note that the FTAA working groups have made significant progress to support the launching of formal negotiations; (8) according to these observers, progress in the FTAA process thus far exceeds what had been achieved during the first 2 to 3 years of the Uruguay Round negotiations that led to the establishment of the World Trade Organization (WTO); (9) substantial agreement has been reached on several key issues; (10) disagreement remains, however, regarding the pace and direction of negotiations; (11) the United States and most other countries favor immediate negotiations on all issues; (12) in contrast, Mercosur proposes that negotiations on certain issues such as market access, which is a priority for the United States, be delayed until 2003; (13) following the Miami Summit, the 1995 Mexican financial crisis raised concerns in the United States about pursuing further regional trade liberalization efforts; (14) in the meantime, other countries have moved forward with their own trade liberalization efforts; (15) Mercosur has strengthened its position, concluding free trade arrangements with Chile and Bolivia, and is beginning trade negotiations with Mexico and the European Union; (16) these agreements have created disadvantages for some U.S. exporters’ access to markets in the region; and (17) representatives of several countries in the region generally agree that their countries will continue to advance their own regional free trade initiatives regardless of whether the United States participates in further regional trade liberalization.
The Resource Conservation and Recovery Act (RCRA), passed in 1976 and substantially amended in 1984, establishes a national policy that hazardous waste be generated, treated, stored, and disposed of so as to minimize present and future threats to human health and the environment. RCRA, among other things, governs the management of hazardous waste from its generation to its final disposal so as to prevent future contamination. According to many stakeholders, the law has accomplished this purpose.

RCRA also contains provisions governing the identification and listing of hazardous waste. Under these provisions, the Environmental Protection Agency (EPA) has established criteria for identifying waste that should be classified as hazardous. For example, EPA has listed in its regulations specific types of waste that are to be considered hazardous. Some types are listed by their source, that is, by the specific industrial processes that produce the waste, such as electroplating, which generates sludge from wastewater treatment. Other types are defined by certain characteristics that make the waste hazardous, such as whether it ignites easily.

RCRA’s regulations govern all hazardous waste, regardless of where or how it is generated. Waste from both current and past industrial operations is regulated. The requirements apply to any waste that EPA has identified as hazardous or, under its “contained-in” policy, to any environmental medium, such as soil or groundwater, that has been mingled with an identified hazardous waste until the medium no longer contains the waste. As a result, waste associated with cleanups (often referred to as remediation waste) must be managed under RCRA if it contains a hazardous component. Thus, waste generated at a wide variety of cleanups, including those under RCRA, Superfund, and state enforcement and voluntary programs, must generally be managed under RCRA’s stringent requirements.

Both the Congress and EPA have considered proposals to amend the application of RCRA’s requirements to remediation waste. Since 1995, several legislative proposals have been introduced that would exempt certain types of remediation waste from these requirements and give the states the authority to establish their own requirements for managing this waste. Likewise, in 1995, the administration, as part of its effort to reinvent government, tasked EPA with identifying for statutory reform any RCRA provision whose implementation incurred costs that far outweighed the environmental benefits achieved. Through meetings with stakeholders, EPA identified RCRA remediation waste as a key area. In April 1996, EPA proposed a comprehensive rule that would have provided alternative ways of managing remediation waste. However, in September 1997, the agency announced plans to withdraw its proposed rule because, among other things, stakeholders disagreed on many remediation waste issues. Instead, the agency plans to issue regulations covering four specific elements affecting remediation waste.

To respond to this report’s objectives, we reviewed pertinent laws and regulations and EPA’s policies, guidance documents, and proposed regulations that discuss the application of RCRA’s requirements to the management of remediation waste during cleanups. We interviewed EPA headquarters managers responsible for both developing and implementing RCRA policy. We also interviewed officials in nine states who are responsible for administering the federal RCRA and Superfund programs and their own state enforcement or voluntary cleanup programs.
We selected five of these states because they have the largest cleanup workloads and four additional states to achieve geographic diversity. Finally, we discussed the current requirements for managing remediation waste with various industry and environmental associations. (See app. I for a more detailed statement of our scope and methodology.) While many of RCRA’s requirements can negatively affect cleanups, according to EPA, cleanup managers most often cited three requirements as creating disincentives for industry to clean up previously contaminated sites. They believe that these requirements increase the cost and time of some cleanups and lead parties to select cleanup remedies that can be either too stringent or not stringent enough, given the health and environmental risks posed by the waste. Ultimately, these requirements can discourage the cleanup of some sites, particularly of sites being managed under state voluntary programs. Most of the cleanup managers we contacted identified land disposal restrictions, minimum technological requirements, and requirements for permits as the three most significant requirements under RCRA that unnecessarily add cost and time to some cleanups. The land disposal restrictions and minimum technological requirements primarily add costs because they set stringent standards for treating and disposing of hazardous waste, forcing parties to try to reduce contamination to concentrations that they believe are lower than necessary to be protective or to use cleanup technologies that were not designed to manage remediation waste. The requirements for permits can add time—months or even years—and costs to some cleanups. For example, one EPA estimate suggests that exempting contaminated soil at a Superfund site from these requirements could reduce the treatment costs by nearly 80 percent, from an average of about $341 per ton to an average of about $73 per ton. This exemption could reduce the overall treatment and disposal costs for such a site from about $12.2 million to about $4.1 million. Ultimately, applying the three requirements to remediation waste has led parties to base their choice of some cleanup remedies not on the risks posed by the waste, but on considerations of how to meet, minimize, or avoid the requirements, according to EPA and state cleanup officials. As a result, they pointed out, parties often choose less aggressive remedies, such as leaving remediation waste in place rather than managing or treating it. The 1984 RCRA amendments created land disposal restrictions that largely prohibit parties from disposing of hazardous waste on land (e.g., in a landfill) unless they have treated the waste to minimize threats to human health and the environment. The law also requires EPA to establish treatment standards for hazardous waste that has been restricted from land disposal. Once EPA has set a treatment standard, parties must meet it for hazardous waste that they subsequently dispose of on land. Parties do not have to meet the treatment standard for hazardous waste placed on land before EPA established the standard unless they remove the waste and dispose of it again—for example, during a cleanup action. Complying with the land disposal restrictions and their associated treatment standards can be costly and complex for several reasons. First, the restrictions are costly to implement because they require that waste be treated to specific, stringent standards. Such treatment is especially costly for cleanups involving large volumes of waste.
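The arithmetic behind this EPA estimate is easy to verify. The sketch below is a minimal illustration in Python, not part of EPA's analysis; note that the site-level totals quoted above cover treatment and disposal combined, so they do not scale directly from the per-ton treatment figures alone.

```python
# Minimal check of the EPA estimate quoted above.
per_ton_with_ldr = 341.0  # average treatment cost under the land disposal restrictions ($/ton)
per_ton_exempt = 73.0     # average treatment cost if the soil were exempt ($/ton)

reduction = 1 - per_ton_exempt / per_ton_with_ldr
print(f"Per-ton treatment cost reduction: {reduction:.0%}")  # 79%, i.e., "nearly 80 percent"

# The site-level totals ($12.2 million falling to $4.1 million) include
# disposal as well as treatment, a 66 percent reduction overall.
site_with_ldr, site_exempt = 12.2e6, 4.1e6
print(f"Site-level reduction: {1 - site_exempt / site_with_ldr:.0%}")
```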
Treatment to meet these stringent standards may be appropriate when relatively high-risk materials, such as concentrated hazardous waste from old lagoons and landfills, are found during cleanups. However, much remediation waste is lightly contaminated. When relatively low-risk media are found, treatment to meet the standards may be more stringent than necessary to protect human health and the environment, according to EPA. EPA estimated that exempting relatively low-risk contaminated media from the treatment standards under the land disposal restrictions could reduce by about 80 percent the volume of contaminated media subject to these requirements, from about 8.1 million to about 1.8 million tons per year. The agency also estimated that exempting relatively low-risk contaminated media could decrease cleanup costs nationwide by 50 percent, or about $1.2 billion per year, without sacrificing the protection of human health or the environment. Second, land disposal restrictions may drive some parties to use cleanup technologies that are more stringent and therefore more costly than necessary to be protective. Under RCRA, EPA is required to set treatment standards for hazardous waste that minimize any threats to human health and the environment. EPA has generally set its treatment standards at the concentration levels that could be attained if the best demonstrated available technology were used to treat the contamination. As a result, for some hazardous waste, the only way to achieve the standard is by incineration, even though other technologies, such as soil washing or bioremediation, can result in protective cleanups at a much lower cost. For example, incineration, which can typically address all the hazardous waste at a site, can cost as much as $1,200 per ton, according to EPA’s estimates. If the waste at a site can be treated to meet RCRA’s standards through a combination of other technologies, such as bioremediation, soil washing, and immobilization, each of which is effective for certain contaminants, the final cost is likely to be no more than about $300 per ton, according to EPA—much less than the cost of incineration. Finally, the land disposal restrictions and their associated treatment standards are costly because contamination may have come from a variety of sources or industrial processes that occurred at the site over time, and parties may have to use several treatment technologies to comply with all of the applicable standards. According to EPA, this issue is particularly relevant at sites with a long history of contamination. The issue was also raised by a cleanup manager from New Jersey, one of the five states with the largest volume of remediation waste. He said that remediation waste frequently contains mixtures of many types of waste, and parties find it difficult to design treatment methods that will satisfy all of the applicable standards under the land disposal restrictions. EPA has acknowledged that its treatment standards under RCRA are not generally appropriate for much of the contaminated soil typically found at cleanups. However, even though EPA believes that in most cases, such soil would be more appropriately treated using other technologies, such as bioremediation, it does not have the research to demonstrate that these technologies can attain the stringent treatment levels required by RCRA. Some of the state cleanup managers we interviewed also discussed the problems they had encountered in treating soil to achieve the standards.
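Before turning to those state experiences, the scale of the cost gap EPA described can be illustrated with simple arithmetic. The sketch below is a rough illustration only; the site volume is a hypothetical figure, since EPA's estimates are per ton.

```python
# Rough comparison of the per-ton treatment costs EPA cited.
INCINERATION_PER_TON = 1_200.0  # $/ton, EPA estimate
COMBINATION_PER_TON = 300.0     # $/ton, bioremediation/soil washing/immobilization mix

tons = 10_000  # hypothetical site volume, for illustration only
print(f"Incineration: ${tons * INCINERATION_PER_TON:,.0f}")  # $12,000,000
print(f"Combination:  ${tons * COMBINATION_PER_TON:,.0f}")   # $3,000,000

# EPA's national estimate for exempting low-risk contaminated media from
# the treatment standards: the volume subject to the standards would fall
# from about 8.1 million to about 1.8 million tons per year.
print(f"Volume reduction: {1 - 1.8e6 / 8.1e6:.0%}")  # 78%, i.e., "about 80 percent"
```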
New York officials, for example, told us that the owners of a site with soil contaminated with metals wanted to use a cleanup technology at the site that would have achieved 98 percent of the concentration level specified by the pertinent RCRA treatment standards. However, because the technology did not fully comply with the treatment standards, the owners instead had to excavate the waste and send it to a hazardous waste facility for treatment and disposal. Alternatively, efforts to avoid triggering the treatment standards under the land disposal restrictions can drive parties to use less aggressive and perhaps less effective cleanup methods, such as leaving contaminated soil in place and placing a waterproof cover over it rather than treating it. While most cleanup programs allow such remedies on a case-by-case basis, EPA believes they are not as protective over the long term as more aggressive remedies, such as excavating the waste to treat it. RCRA also establishes design and operating specifications, known as minimum technological requirements, for facilities, such as incinerators and landfills, that either treat or dispose of hazardous waste. For example, a hazardous waste landfill or surface impoundment must have (1) two or more liners, (2) a leachate collection system, and (3) a monitoring system to ensure that contamination is not moving into the groundwater. Complying with these requirements can be expensive. For example, one facility we visited spent $750,000 in 1987 to meet the minimum technological requirements for a 2.5-acre surface impoundment. Because these technological requirements were designed for facilities that manage waste from ongoing industrial operations (called process waste), they may be more stringent than necessary for some remediation waste, according to EPA and the majority of the state cleanup managers we interviewed. For example, a temporary waste pile must meet the same requirements as a pile where hazardous waste will be treated or stored for many years. As a result, these requirements can be counterproductive for some cleanups and unnecessarily increase their costs, according to EPA, most state officials, and the industry representatives we interviewed. Disposing of remediation waste, particularly lower-risk waste, in accordance with the minimum technological requirements may add unnecessary costs. For example, parties that want to dispose of waste that has already been treated to meet land disposal requirements must still use a landfill that meets the minimum technological requirements. EPA and several state cleanup officials we interviewed were doubtful that compliance with these requirements would be worth the cost, given the low level of risk that treated remediation waste poses. According to EPA, disposing of waste in a hazardous waste landfill can cost $200 per ton, compared with $50 per ton to dispose of it in a municipal or industrial landfill. Thus, for the average Superfund site with 34,000 tons of contaminated soil, it would cost about $6.8 million to dispose of the treated soil in a landfill that meets these technological requirements, compared with about $1.7 million to dispose of it in a municipal or industrial landfill. RCRA generally prohibits the treatment, storage, or disposal of hazardous waste without a permit. Because the process of obtaining a permit involves a step-by-step approach with substantial requirements for documentation and review, obtaining a permit can increase cleanup costs and cause delays. 
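As an aside, the disposal comparison above follows directly from EPA's per-ton estimates; the minimal sketch below simply reproduces the arithmetic in the text.

```python
# Disposal costs for the average Superfund site, from EPA's per-ton estimates.
AVG_SITE_TONS = 34_000
HAZ_LANDFILL_PER_TON = 200.0   # landfill meeting the minimum technological requirements ($/ton)
MUNI_LANDFILL_PER_TON = 50.0   # municipal or industrial landfill ($/ton)

print(f"Hazardous waste landfill: ${AVG_SITE_TONS * HAZ_LANDFILL_PER_TON:,.0f}")   # $6,800,000
print(f"Municipal/industrial:     ${AVG_SITE_TONS * MUNI_LANDFILL_PER_TON:,.0f}")  # $1,700,000
```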
In addition, under RCRA, facilities that require a permit in order to clean up a portion of a site must also address cleanup requirements for the entire site. Consequently, parties may try to avoid triggering the permit requirement. The administrative cost of obtaining a RCRA permit can range from $80,000 for an on-site treatment unit, such as a tank, to $400,000 for an on-site incinerator, and up to $1 million for a landfill, according to EPA’s estimates. In addition to these costs, a party may incur other costs for tasks needed to obtain a permit, such as assessing a site’s conditions in order to design a groundwater monitoring system or conducting emissions testing and trial burns for an incinerator. The time required to obtain a permit can also be extensive, according to almost all of the state cleanup managers we interviewed. For example, Texas managers said that getting a permit can take 7 to 9 months for a simple treatment unit, such as a tank, and an additional 5 to 6 years for a more complicated unit, such as a landfill. Industry representatives we spoke with also estimated that getting a RCRA permit typically takes 5 to 6 years. In a 1990 analysis of RCRA, EPA reported that the permit process is cumbersome and causes significant delays. EPA and several state cleanup managers indicated that these costs, delays, and administrative issues are particularly significant for facilities that are not in the business of transporting, storing, or disposing of hazardous waste. Such facilities would not need a RCRA permit were it not for their cleanup activities. Even facilities that already have a RCRA permit to operate encounter costs and delays when trying to get EPA or the state to modify their permit to conduct cleanup activities. EPA’s most recent estimate (1992) of the cost to modify an existing permit is about $80,000. Washington State cleanup managers said that they have been working on a permit modification for one site for 2 years. They find that under RCRA, facilities have to request a permit modification for every technical change, whereas under other programs, such as their state enforcement program, the regulators and cleanup parties can meet and negotiate changes to cleanup plans. To avoid these problems, parties sometimes opt to send their remediation waste off-site to a commercial facility that already has a RCRA permit to treat, store, or dispose of hazardous waste; however, this option can be prohibitively expensive, according to EPA and some state cleanup managers. For example, Maine does not have any such commercial facilities; therefore, parties that want to send their waste off-site have to pay high transportation costs to ship it to another state that does. To avoid triggering RCRA’s requirements, property owners whose sites are not under a federal or state cleanup order may choose to let the waste remain in place without treatment and purchase land elsewhere for their plant expansion or other needs, according to EPA, as well as many state cleanup officials and industry representatives. EPA managers told us that leaving waste in place—especially “old waste,” such as sludge, that may still have relatively high concentrations of hazardous substances—may pose health or environmental risks. Furthermore, some state cleanup managers noted, the contaminated land is not placed back into productive use. 
Although cleaning up a site may offer economic benefits, such as relief from liability for contamination and increased property values, industry sometimes concludes that the costs of complying with RCRA can outweigh these benefits, according to EPA’s analysis. Cleanup program managers from several states echoed these concerns. For example, cleanup managers from Missouri believe that less restrictive requirements for remediation waste would lead to more voluntary cleanups. Officials from Pennsylvania concurred, saying that they believe RCRA’s requirements discourage parties from voluntarily stepping forward to clean sites, such as former steel mill sites near Pittsburgh. Likewise, cleanup managers from New York believe that economic factors are key to determining whether a voluntary cleanup will occur. If a property’s sale price or redevelopment value does not allow a party to recoup the expenses of complying with RCRA, such a cleanup will not take place, they contend. Illinois cleanup managers expressed similar concerns, saying that potential buyers are likely to lose interest in purchasing a property once they find out that it may be subject to RCRA’s requirements, especially the treatment standards under the land disposal restrictions. Since the late 1980s, EPA has incrementally modified RCRA’s application to remediation waste through an assortment of policy statements and regulatory alternatives, which have lessened but not solved the adverse effects identified. The state managers we interviewed have had varied experience in using these alternatives; some have found them burdensome and overly complicated. Furthermore, industry representatives were concerned that using the alternatives may result in cleanups that do not meet RCRA’s requirements and will thus require further action. To allay these concerns, in 1996, EPA proposed new rules to more comprehensively reform RCRA’s requirements as they apply to remediation waste. However, because technical and legal issues associated with the proposed rule remain unresolved, the reform of RCRA’s requirements that impede cleanups can best be addressed through legislation, according to EPA. The states have most frequently used six policy and regulatory alternatives that EPA has issued. Each alternative varies, however, in the degree to which it helps to solve the problems posed by RCRA’s requirements. EPA originally designed the “contained-in” policy in 1986 to clarify that the scope of the waste managed under RCRA includes any medium—for example, groundwater or soils—that contained a listed waste. In the 1990s, recognizing that at some concentration levels, contaminated media no longer pose a hazard to health or the environment, EPA has allowed its regions and states to exclude, or “contain out,” such media from RCRA’s regulation, on a case-by-case basis. EPA has not established definitive guidance on the specific concentration levels that justify a “contained-out” decision, but it has stated that the decision should be based on the risk posed to human health. Hence, according to EPA, this policy allows regulatory agencies to make their own decisions about when contaminated media no longer contain hazardous waste and therefore no longer need to be managed under RCRA. However, EPA has also reported that while the contained-out policy has increased flexibility and reduced cleanup delays, it has not been consistently applied throughout the nation. 
In addition, the policy applies only to contaminated media—soil and groundwater—and not to all remediation waste, such as sludge. Furthermore, in some cases, not all waste that has been contained out is exempt from all of RCRA’s requirements. For example, contaminated soil may still be subject to land disposal requirements if it was excavated and tested in order to obtain the contained-out decision. Finally, managers from one state told us they are reluctant to use this policy because EPA has not set national standards for making a contained-out decision. A 1986 amendment to the Superfund law exempts on-site cleanups from the requirement to obtain a RCRA permit because these cleanups receive close federal and state oversight. Some states have likewise adopted this waiver for the on-site cleanups they oversee under their own enforcement programs. Nevertheless, these cleanups must continue to meet RCRA’s other requirements, including the land disposal restrictions and minimum technological requirements. Permit waivers do not apply to RCRA or state voluntary cleanups. In 1988, EPA issued a regulation to help address problems in meeting the land disposal treatment standards for specific types of waste, such as contaminated soils. The regulation allows EPA to issue a site-specific variance from a given land disposal treatment standard under certain circumstances, such as when a given waste cannot be treated to the applicable concentration level. However, according to the Superfund program managers, the lengthy approval process, which includes obtaining public comments, discourages requests for these variances. Nonetheless, EPA has recently encouraged the regions to make greater use of the variances. In 1990, EPA established the source of contamination presumption for Superfund cleanups, and the states have extended it to cleanups in other programs. When beginning a cleanup, a party must make a good-faith effort to determine the source of the waste identified at the site. The source often determines whether the waste is a listed hazardous waste and, therefore, subject to RCRA’s requirements. The Superfund guidance provides that when no records exist to document the exact source of the waste—a common occurrence for older, abandoned Superfund sites—the lead regulatory agency can presume that the waste is not a listed hazardous waste and is therefore not subject to RCRA’s requirements. However, the parties conducting the cleanups are at risk if they have not taken adequate steps to identify the source of the waste. If additional information becomes available to prove that, because of its source, a waste is a listed hazardous waste, the responsible party could be forced by EPA to perform additional cleanup activities at the site in accordance with RCRA’s requirements. In this case, the responsible party could face liability for improperly managing and disposing of hazardous waste. Also originating within Superfund in 1990, the area of contamination policy, an interpretation of the scope of the land disposal restrictions, allows cleanup managers to consolidate some remediation waste and treat it or leave it in place and cap it without triggering the treatment standards under the land disposal restrictions. However, the waste can be consolidated only if it lies within contiguous areas of contamination. In addition, cleanup managers must comply with all of RCRA’s requirements if the waste is moved from one area of contamination to another or is removed, treated, and then placed back into the area of contamination.
In 1993, EPA issued the corrective action management unit (CAMU) rule, which significantly expands upon the area of contamination policy. According to EPA officials, under this rule, parties conducting cleanups can dig up or move waste, or can permanently treat, store, or dispose of it, within a strictly defined area on-site; if certain site-specific design and operating requirements are met, the waste is not subject to RCRA’s land disposal restrictions or minimum technological requirements. Parties must, however, obtain EPA’s approval to use a CAMU—usually by obtaining a permit. The use of CAMUs has been somewhat limited because in 1993, some stakeholders, including the Environmental Defense Fund (EDF), filed a lawsuit questioning, among other things, whether EPA has the authority to exempt hazardous waste disposed of in CAMUs from the land disposal restrictions and the minimum technological requirements. This legal question has not yet been resolved. While most of the state managers we interviewed described these alternatives, such as the CAMU rule, as useful during cleanups, some managers were not aware of or did not understand all of the alternatives, questioned whether they were legally defensible, or found them burdensome and inefficient. EPA is considering how to address these problems. Cleanup managers from all but one of the states we selected told us that they had used EPA’s alternatives for minimizing the impact of RCRA’s requirements on remediation waste cleanups. Generally, the state and other managers believed that the alternatives brought needed flexibility to RCRA’s rigid requirements. For example, the Department of Defense’s Deputy Under Secretary for Environmental Security attributed savings of between $500 million and $1 billion in cleanup costs to the use of a CAMU at the Department’s Rocky Mountain Arsenal site. However, those managers who had used the alternatives more extensively also said that they spend considerable time and resources to determine which alternatives to use and how to use them to work around the problems presented by RCRA’s requirements. They found that the alternatives were difficult to use and did not solve all of the problems at a particular site. In some instances, we found that cleanup managers were unfamiliar with some of the alternatives or were concerned about using them. For example, cleanup managers from one state told us that they were not familiar with EPA’s policy that provides for waivers to the administrative requirements for obtaining a permit. Managers from another state told us that they were reluctant to make use of the contained-out policy because EPA had not issued specific guidance on such determinations. Industry managers told us they were hesitant to propose new CAMUs because of the rule’s uncertain future. Several industry and state cleanup managers acknowledged that they are somewhat uncomfortable applying these alternatives for fear that EPA or a third party may view the cleanup as not being in full compliance with RCRA’s requirements and may initiate a legal challenge. For example, managers in one state were somewhat uncomfortable with how fully they take advantage of the flexibility provided by the source of contamination presumption. In the managers’ view, the state may not be requiring an extensive enough search to determine the source of the waste.
Several EPA headquarters managers said that they are not surprised that state cleanup managers are unaware of or are inconsistently applying the alternative policies because the policies are difficult to understand and have been implemented piecemeal over the years. The EPA managers acknowledged that they may need to take additional steps to help the regions and states better use these options. Recognizing the need for more comprehensive reform of RCRA’s requirements for managing remediation waste, EPA in 1993 established a formal advisory committee of key stakeholders that developed the framework for a new regulatory approach that EPA proposed in April 1996, the Hazardous Waste Identification Rule for Contaminated Media (HWIR-Media). This proposal laid out several options that ranged from exempting some remediation waste from RCRA’s current requirements to exempting all such waste and giving the states the authority to define how to manage it. EPA estimated that these options could save parties conducting cleanups up to $2.1 billion in cleanup costs a year over the next few years. However, stakeholders still have significant disagreements over legal and technical issues. Therefore, EPA anticipates that any approach to comprehensive regulatory reform would result in prolonged legal battles that would delay cleanups. As a result, the agency announced plans to withdraw its proposed rule and focus on four more narrow regulatory changes. EPA concluded that comprehensive reform can best be achieved by revising RCRA itself. EPA’s proposed rule laid out alternatives for waste management, ranging from the “bright line” to the “unitary” approach. The first was limited to making only contaminated media eligible for an exemption from RCRA’s stringent requirements while maintaining the requirements for more highly contaminated hazardous waste. To determine which media could be exempt, EPA would establish a concentration level, or “bright line,” for various contaminants. If the contaminants in a medium fall below the bright line, the medium would be eligible for an exemption from RCRA’s current hazardous waste management requirements, and EPA and authorized states would have the authority to set site-specific waste management requirements. EPA estimated that about 80 percent of all contaminated media would be eligible for a RCRA exemption under this approach, saving $1.2 billion a year in cleanup costs over the next few years. In contrast, the unitary approach would exempt all remediation waste, including debris and sludge, from RCRA’s hazardous waste management requirements. Remediation waste would then be managed under a site-specific remediation plan, which would be subject to public review and comment and approval by EPA or an authorized state. EPA estimated that this approach could save approximately $2.1 billion a year in cleanup costs over the next few years. According to the Association of State and Territorial Solid Waste Management Officials, most states would prefer an approach that includes all remediation waste—similar to the unitary approach—because it would allow for efficient cleanups. Representatives from the departments of Defense and Energy, industry, and several associations that we contacted also said they would generally prefer the unitary approach for the same reason.
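The bright-line test is, at bottom, a per-contaminant threshold comparison. The following minimal sketch illustrates the logic; the contaminants and concentration values are hypothetical placeholders, since EPA never finalized actual bright-line levels.

```python
# Minimal sketch of the proposed "bright line" eligibility test. The
# contaminants and thresholds below are hypothetical placeholders.
BRIGHT_LINE_MG_PER_KG = {"lead": 400.0, "benzene": 10.0}

def eligible_for_exemption(sample: dict) -> bool:
    """A medium qualifies for site-specific management only if every
    measured contaminant falls below its bright-line concentration;
    anything at or above a line (or lacking an established line) stays
    under RCRA's full hazardous waste management requirements."""
    for contaminant, concentration in sample.items():
        line = BRIGHT_LINE_MG_PER_KG.get(contaminant)
        if line is None or concentration >= line:
            return False
    return True

soil = {"lead": 120.0, "benzene": 2.5}  # hypothetical sample results, mg/kg
print(eligible_for_exemption(soil))     # True -> eligible under this sketch
```

Even this toy version shows why sampling burden became a concern: every contaminant present at a facility must be measured and compared against its line.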
Industry groups, in their comments on EPA’s proposal, raised concerns about the bright-line approach, particularly about the extent to which they would have to test and sample waste to determine whether each contaminant at a facility exceeds the line, potentially making some cleanups cost-prohibitive. Some of EPA’s program managers also said that if all remediation waste is not exempted from RCRA’s current requirements, the incentives to avoid cleanups or select less aggressive remedies will continue. Other stakeholders, including representatives of EDF, would generally prefer an approach that is conceptually similar to the bright-line approach. For example, EDF, in its comments on EPA’s proposed rule, stated that it strongly objects to any rule that does not provide national treatment standards for highly contaminated media. EDF contends that, in most cases, this material is as toxic as the process waste that is subject to RCRA’s requirements and therefore should be managed rigorously. EDF also asserts that EPA lacks any technical basis for setting different treatment standards for sludge managed during cleanups. EDF believes that there is no evidence that the sludge managed during a cleanup is physically or chemically different from process waste. Therefore, EDF is opposed to relaxing RCRA’s requirements for managing sludge. EDF was also critical of EPA’s methodology for establishing bright lines, stating that the agency did not adequately consider potential exposure to contaminated groundwater. Stakeholders also disagree on the extent to which the states should be authorized to manage remediation waste. Some stakeholders expressed concern that the states, if authorized, could set different standards for managing such waste, potentially creating problems with interstate transfer and disposal. Cleanup managers in one state were particularly concerned about whether they would have adequate resources to determine the hazard posed by waste shipped to their state from states with less stringent standards. Disagreements also arose on the process that should be used to determine whether a state has adequate laws, standards, and programs to manage exempted waste. Some stakeholders argue that the states have already demonstrated their ability to manage remediation waste through their state cleanup programs and should be allowed to certify themselves as authorized to do so. EDF, on the other hand, points out that since a large portion of remediation waste would be exempt from RCRA’s hazardous waste management requirements, the states could use their own systems for managing nonhazardous waste, such as municipal and industrial landfills, for remediation waste. EDF argues that some evaluations have raised questions about the adequacy of these state systems. EPA enforcement managers also added that community groups have expressed similar concerns. If EPA is to implement a state authorization process, all stakeholders seem to agree that the agency should not duplicate the process EPA uses to authorize states to implement RCRA because it is cumbersome and time-consuming. However, the stakeholders disagree on how to streamline the process so that EPA retains meaningful oversight and the public has adequate opportunities to participate in cleanup decisions and activities. 
EPA concluded that resolving all the technical and legal issues, including how to distinguish what waste poses a significant threat to human health and the environment and whether EPA can exempt this waste from RCRA’s land disposal restrictions, would be time-consuming and resource-intensive. The agency expected that the resulting drawn-out litigation and uncertainty would further discourage cleanups. Subsequently, the agency announced on September 11, 1997, that it plans to withdraw the HWIR-Media rule and, instead, pursue final rulemaking on four more narrow portions of the proposal by June 1998. The agency acknowledges that while these changes would help improve remediation waste management, they would not provide the needed flexibility to exempt such waste from RCRA’s rules. Therefore, EPA further concluded that comprehensive reform of the remediation waste issue can best be addressed through the legislative process. In anticipation that legislative proposals to address the issue could be reintroduced, EPA, in conjunction with the Council on Environmental Quality, hosted three meetings during the past year to assess stakeholders’ views on outstanding remediation waste issues and determine possible ways to address them. Three of RCRA’s hazardous waste management requirements, in particular—land disposal restrictions, minimum technological requirements, and requirements for permits—may be unduly stringent for a significant portion of the remediation waste that poses a lesser risk to human health and the environment. While stakeholders generally agree that comprehensive reform of remediation waste management is necessary, not everyone agrees on how to achieve this reform. EPA’s efforts to provide alternative policies to mitigate the impact of these requirements have resulted in confusion over the applicability of the policies to cleanups, and some, such as the CAMU rule, have been legally challenged. EPA has concluded that because stakeholders disagree on the extent to which waste should be exempt from RCRA’s requirements, as well as on EPA’s legal authority under current law to exempt waste from the requirements, the agency could not easily achieve comprehensive reform through the regulatory process. It believes that such reform can best be achieved by revising the underlying law governing remediation waste management. EPA’s plan to withdraw proposed comprehensive regulatory reform increases the need for a legislative solution. We recommend that until comprehensive legislative reform is achieved to address RCRA’s disincentives to cleanups, the Administrator, EPA, take steps to ensure that regulators overseeing cleanups have a more consistent understanding of how to apply EPA’s existing policy and regulatory alternatives to RCRA’s requirements for managing remediation waste. These steps could include, for example, consolidating the policy and regulatory alternatives into one guidance document, training all cleanup managers in its appropriate use, and providing follow-up legal assistance for site-specific implementation questions. We provided copies of a draft of this report to EPA for its review and comment. We met with agency officials, including the Acting Director, Permits and State Programs Division, Office of Solid Waste, the division with responsibility for developing policies and procedures for managing remediation waste under RCRA. The agency generally agreed with the report’s findings. EPA suggested some technical revisions to the report, which we incorporated.
The agency also identified two issues it believed needed further clarification. First, EPA agreed that we identified the three specific requirements under RCRA that, when applied to remediation waste, pose the most significant barriers to cleanups. However, the agency noted that reforming these individual requirements would not remove all of the barriers; RCRA’s entire hazardous waste management process, as it applies to remediation waste, poses problems and needs comprehensive reform. Second, the agency wanted to make sure that the report clearly indicated that RCRA’s requirements affect all remediation waste, including sludge, debris, and contaminated soil. EPA believes that reform must apply to all remediation waste. We made several changes in the report where appropriate to address these issues. Finally, while agreeing that our recommendation will help parties manage cleanups under RCRA’s current requirements, EPA believes that the benefits may be limited because the requirements will continue to pose barriers to cleanups until comprehensive reform is achieved. We reemphasized that reform, while necessary, may take some time to implement. Meanwhile, parties will have to accomplish cleanups under RCRA’s current requirements and should be able to take advantage of the policy and regulatory alternatives EPA has provided. However, given the concerns that state and industry cleanup managers have expressed about using these alternatives, we believe it is important that EPA take steps to ensure the alternatives are implemented correctly. The scope and methodology used for our work are discussed in appendix I. We performed our work from April through September 1997 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees; the Administrator, EPA; and other interested parties. We will also make copies available to others on request. We hope this information will assist you as you consider legislation to reform RCRA as it applies to remediation waste. If you have any further questions, please call me at (202) 512-6111. Major contributors to this report are listed in appendix II. To provide information on the requirements of the Resource Conservation and Recovery Act (RCRA) that pose barriers to managing remediation waste and the policies that the Environmental Protection Agency (EPA) has developed to mitigate those barriers, we reviewed applicable laws and numerous EPA documents, policies, and regulations. We also interviewed managers in charge of hazardous waste cleanup programs in EPA, nine states, and industry to obtain their views both on RCRA’s requirements and on the actions EPA has taken to mitigate barriers presented by the requirements. We attended all three meetings co-sponsored by EPA and the Council on Environmental Quality to assess stakeholders’ concerns with reforming RCRA’s requirements for remediation waste; these meetings were held on June 5, August 6, and September 5, 1997. Additionally, we spoke with cleanup program managers in several other federal agencies and representatives of the primary environmental association involved in remediation waste issues to learn about their experiences and perspectives. Finally, we visited a hazardous waste facility at Cytec Industries’ Willow Island plant near Parkersburg, West Virginia.
The officials and representatives we interviewed include the following: The Acting Director and environmental specialists from the Permits and State Programs Division, Office of Solid Waste. This division is responsible for developing environmental remediation policies and procedures under RCRA. Environmental specialists from the Office of Site Remediation Enforcement who oversee EPA’s enforcement of RCRA. Representatives from the Superfund program who specialize in complying with RCRA’s applicable requirements. Region III officials who manage hazardous waste activities at Cytec Industries’ Willow Island plant near Parkersburg, West Virginia. Program managers responsible for overseeing hazardous waste cleanups at the departments of Defense, Energy, and the Interior. A policy director from the Association of State and Territorial Solid Waste Management Officials. Managers of Superfund, RCRA, state enforcement, and voluntary cleanup programs in nine states. We selected five of these states—California, Illinois, New Jersey, New York, and Pennsylvania—because, according to EPA, they collectively generate, each year, about 35 percent of the nation’s contaminated environmental media managed off-site. We selected the four remaining states—Maine, Missouri, Texas, and Washington—for geographic diversity. Attorneys and consultants representing major corporate members of the National Environmental Development Association and the RCRA Corrective Action Project. These groups were organized to promote the reform of RCRA. Attorneys from the Environmental Technology Council. This group represents private waste managers. A spokesperson for the Solid Waste Association of North America. This group represents municipal landfill operators. Facility and corporate headquarters managers from Cytec Industries in charge of hazardous waste management activities at the Willow Island plant near Parkersburg, West Virginia. Attorneys from the Environmental Defense Fund. This organization is one of the primary environmental organizations taking an active position on various proposals to reform RCRA’s requirements for managing remediation waste. We performed our work from April through September 1997 in accordance with generally accepted government auditing standards. Richard P. Johnson, Senior Attorney
Pursuant to a congressional request, GAO provided information on: (1) the ways, according to the Environmental Protection Agency (EPA) and selected state program managers and industry representatives, that the Resource Conservation and Recovery Act's (RCRA) requirements, when applied to waste from cleanups (often referred to as remediation waste), affect cleanups; and (2) the actions EPA has taken to address any impediments. GAO noted that: (1) three key requirements under RCRA that govern hazardous waste management--land disposal restrictions, minimum technological requirements, and requirements for permits--can have negative effects when they are applied to waste from cleanups; (2) the requirements have been successful at preventing further contamination from ongoing industrial operations, according to EPA cleanup managers; (3) however, when the requirements are applied to remediation waste, they can pose barriers to cleanups; (4) because much remediation waste does not pose a significant threat to human health and the environment, subjecting it to these three requirements in particular can compel parties to perform cleanups that are more stringent than EPA, the states, industry, or national environmental groups believe are necessary to address the level of risk; (5) consequently, EPA and state program managers and industry representatives maintain, parties often try to avoid triggering the requirements by containing waste in place or by abandoning cleanups entirely; (6) in the late 1980s, when establishing national Superfund guidance, EPA recognized that these three requirements would make some cleanups more difficult and began developing policy and regulatory alternatives to give parties more flexibility in dealing with the requirements; (7) however, these alternatives do not address all of the impediments to cleanups, and some state cleanup managers were not always aware of or did not fully understand the alternatives, while others found them cumbersome to use and inefficient; (8) industry representatives were also concerned that because of the ways that some states are using these alternatives, EPA or a third party may challenge whether the cleanup fully meets RCRA requirements; (9) to allay these concerns, in 1996, EPA proposed a new rule to comprehensively reform remediation waste requirements; (10) the rule included a range of options to exempt some or all remediation waste from hazardous waste management requirements and to give states more waste management authority; (11) EPA had estimated that these options could save up to $2.1 billion a year in cleanup costs; (12) however, EPA recently decided that because stakeholders disagree over whether the agency can exempt remediation waste from the requirements, the agency would face a prolonged legal battle over the new rule; and (13) although areas of disagreement may still need to be addressed, EPA has concluded that the best way to achieve comprehensive reform is to change the underlying cleanup law.
In general, SCHIP funds are targeted to uninsured children in families whose incomes are too high to qualify for Medicaid but are at or below 200 percent of FPL. Recognizing the variability in state Medicaid programs, federal SCHIP law allows a state to cover children in families with incomes up to 200 percent of FPL or 50 percentage points above its existing Medicaid eligibility standard as of March 31, 1997. States have additional latitude in setting eligibility levels, however, because Medicaid and SCHIP allow some flexibility in how a state defines income for purposes of eligibility determinations. Congress appropriated approximately $40 billion over 10 years (from fiscal years 1998 through 2007) for distribution among states with approved SCHIP plans. Allocations to states are based on a formula that takes into account the number of low-income children in a state. In general, states that choose to expand Medicaid to enroll eligible children under SCHIP must follow Medicaid rules, while separate child health programs have additional flexibilities in benefits, cost-sharing, and other program elements. Under certain circumstances, states may also cover adults under SCHIP. SCHIP allotments to states are based on an allocation formula that uses (1) the number of children, which is expressed as a combination of two estimates—the number of low-income children without health insurance and the number of all low-income children, and (2) a factor representing state variation in health care costs. Under federal SCHIP law and subject to certain exceptions, states have 3 years to use each fiscal year’s allocation, after which any remaining funds are redistributed among the states that had used all of that fiscal year’s allocation. Federal law does not specify a redistribution formula but leaves it to the Secretary of Health and Human Services (HHS) to determine an appropriate procedure for redistribution of unused allocations. Absent congressional action, states are generally provided 1 year to spend any redistributed funds, after which time funds may revert to the U.S. Treasury. Each state’s SCHIP allotment is available as a federal match based on state expenditures. SCHIP offers a strong incentive for states to participate by providing an enhanced federal matching rate that is based on the federal matching rate for a state’s Medicaid program—for example, the federal government will reimburse at a 65 percent match under SCHIP for a state receiving a 50 percent match under Medicaid. There are different formulas for allocating funds to states, depending on the fiscal year. For fiscal years 1998 and 1999, the formula used estimates of the number of low-income uninsured children to allocate funds to states. For fiscal year 2000, the formula changed to include estimates of the total number of low-income children as well. SCHIP gives the states the choice of three design approaches: (1) a Medicaid expansion program, (2) a separate child health program with more flexible rules and increased financial control over expenditures, or (3) a combination program, which has both a Medicaid expansion program and a separate child health program. Initially, states had until September 30, 1998, to select a design approach, submit their SCHIP plans, and obtain HHS approval in order to qualify for their fiscal year 1998 allotment. With an approved state child health plan, a state could begin to enroll children and draw down its SCHIP funds.
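The enhanced match in the example above can be sketched as a simple formula: the federal share rises by 30 percent of what would otherwise be the state's share. The illustration below follows that pattern; statutory caps and other adjustments are omitted, so treat it as a sketch rather than a complete statement of the law.

```python
# Sketch of the enhanced SCHIP matching rate implied by the example above:
# the federal share rises by 30 percent of the state's remaining share.
# Statutory caps and other adjustments are omitted from this illustration.

def enhanced_schip_match(medicaid_match: float) -> float:
    """Enhanced federal matching rate, with rates expressed as fractions."""
    return medicaid_match + 0.30 * (1.0 - medicaid_match)

print(f"{enhanced_schip_match(0.50):.0%}")  # 65%, the example cited in the text
print(f"{enhanced_schip_match(0.70):.0%}")  # 79% for a state with a 70% Medicaid match
```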
The design approach a state chooses has important financial and programmatic consequences, as shown below. Expenditures. In separate child health programs, federal matching funds cease after a state expends its allotment, and non-benefit-related expenses (for administration, direct services, and outreach) are limited to 10 percent of claims for services delivered to beneficiaries. In contrast, Medicaid expansion programs may continue to receive federal funds for benefits and for non-benefit-related expenses at the Medicaid matching rate after states exhaust their SCHIP allotments. Enrollment. Separate child health programs may set separate eligibility rules and establish enrollment caps. In addition, a separate child health program may limit its own annual contribution, create waiting lists, or stop enrollment once the funds it budgeted for SCHIP are exhausted. A Medicaid expansion must follow Medicaid eligibility rules regarding income, residency, and disability status, and thus generally cannot limit enrollment. Benefits. Separate child health programs must meet benefit standards, such as benchmark standards that use specified private or public insurance plans as the basis for coverage. However, Medicaid—and therefore a Medicaid expansion—must provide coverage of all benefits available to the Medicaid population, including certain services for children. In particular, Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) requires states to cover treatments to correct or ameliorate conditions diagnosed during routine screenings—regardless of whether the benefit would otherwise be covered under the state’s Medicaid program. A separate child health program does not require EPSDT coverage. Beneficiary cost-sharing. Separate child health programs may impose limited cost-sharing—through premiums, copayments, or enrollment fees—for children in families with incomes above 150 percent of FPL up to 5 percent of family income annually. Because the Medicaid program did not previously allow cost-sharing for children, a Medicaid expansion program under SCHIP followed this rule. In general, states may cover adults under the SCHIP program under two key approaches. First, federal SCHIP law allows the purchase of coverage for adults in families with children eligible for SCHIP under a waiver if a state can show that it is cost-effective to do so and demonstrates that such coverage does not result in “crowd-out”—a phenomenon in which new public programs or expansions of existing public programs designed to extend coverage to the uninsured prompt some privately insured persons to drop their private coverage and take advantage of the expanded public subsidy. The cost-effectiveness test requires the states to demonstrate that covering both adults and children in a family under SCHIP is no more expensive than covering only the children. The states may also elect to cover children whose parents have access to employer-based or private health insurance coverage by using SCHIP funding to subsidize the cost. Second, under section 1115 of the Social Security Act, states may receive approval to waive certain Medicaid or SCHIP requirements or authorize Medicaid or SCHIP expenditures. The Secretary of Health and Human Services may approve waivers of statutory requirements or authorize expenditures in the case of experimental, pilot, or demonstration projects that are likely to promote program objectives.
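The cost-effectiveness test described above reduces to a single comparison; the minimal sketch below illustrates it with hypothetical annual cost figures.

```python
# Sketch of the family-coverage cost-effectiveness test described above.
# The per-year cost figures passed in are hypothetical, for illustration only.

def passes_cost_effectiveness_test(family_coverage_cost: float,
                                   children_only_cost: float) -> bool:
    """True if covering the whole family under SCHIP is no more expensive
    than covering only the SCHIP-eligible children."""
    return family_coverage_cost <= children_only_cost

print(passes_cost_effectiveness_test(3_600.0, 3_900.0))  # True  -> test met
print(passes_cost_effectiveness_test(4_200.0, 3_900.0))  # False -> test not met
```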
In August 2001, HHS indicated that it would allow states greater latitude in using section 1115 demonstration projects (or waivers) to modify their Medicaid and SCHIP programs and that it would expedite consideration of state proposals. One initiative, the Health Insurance Flexibility and Accountability Initiative (HIFA), focuses on proposals for covering more uninsured people while at the same time not raising program costs. States have received approval of section 1115 waivers that provide coverage of adults using SCHIP funding. SCHIP enrollment increased rapidly over the first years of the program and has stabilized for the past several years. In 2005, the most recent year for which data are available, 4.0 million individuals were enrolled during the month of June, while the total enrollment count—which represents a cumulative count of individuals enrolled at any time during fiscal year 2005—was 6.1 million. Of these 6.1 million enrollees, 639,000 were adults. Because SCHIP requires that applicants first be screened for Medicaid eligibility, some states have experienced increases in their Medicaid programs as well, further contributing to public health insurance coverage of low-income children during this same period. Based on a 3-year average of 2003 through 2005 Current Population Survey (CPS) data, the percentage of uninsured children varied considerably by state, with a national average of 11.7 percent. SCHIP annual enrollment grew quickly from program inception through 2002 and then stabilized at about 4 million from 2003 through 2005, on the basis of a point-in-time enrollment count. Total enrollment, which counts individuals enrolled at any time during a particular fiscal year, showed a similar pattern of growth and was over 6 million as of June 2005 (see fig. 1). Generally, point-in-time enrollment is a subset of total enrollment, as it represents the number of individuals enrolled during a particular month. In contrast, total enrollment includes an unduplicated count of any individual enrolled at any time during the fiscal year; thus the data are cumulative, with new enrollments occurring monthly. Our prior work has shown that certain obstacles can prevent low-income families from enrolling their children into public programs such as Medicaid or SCHIP. Primary obstacles included families’ lack of knowledge about program availability and, even when children were eligible to participate, complex eligibility rules and documentation requirements that complicated the application process. During the early years of SCHIP program operation, we found that many states developed and deployed outreach strategies in an effort to overcome these enrollment barriers. Many states adopted innovative outreach strategies and simplified and streamlined their enrollment processes in order to reach as many eligible children as possible. Examples follow. States launched ambitious public education campaigns that included multimedia campaigns, direct mailings, and the widespread distribution of applications. To overcome the barrier of a long, complicated SCHIP eligibility determination process, states reduced verification and documentation requirements that exceeded federal requirements, shortened the length of applications, and used joint SCHIP-Medicaid applications. States also located eligibility workers in places other than welfare offices—schools, child care centers, churches, local tribal organizations, and Social Security offices—to help families with the initial processing of applications.
States eased the process by which applicants reapplied for SCHIP at the end of their coverage period. For example, one state mailed families a summary of the information on their last application and asked families to update any changes to the information. Because states must also screen for Medicaid eligibility before enrolling children into SCHIP, some states have noted increased enrollment in Medicaid as a result of SCHIP. For example, Alabama reported a net increase of approximately 121,000 children in Medicaid since its SCHIP program began in 1998. New York reported that, for fiscal year 2005, approximately 204,000 children were enrolled in Medicaid as a result of outreach activities, compared with 618,973 children enrolled in SCHIP. In contrast, not all states found that their Medicaid enrollment was significantly affected by SCHIP. For example, Idaho reported that a negligible number of children were found eligible for Medicaid as a result of outreach related to its SCHIP program. Maryland identified an increase of 0.2 percent between June 2004 and June 2005. Based on a 3-year average of 2003 through 2005 CPS data, the percentage of uninsured children varied considerably by state and had a national average of 11.7 percent. The percentage of uninsured children ranged from 5.6 percent in Vermont to 20.4 percent in Texas (see fig. 2). According to a Congressional Research Service (CRS) analysis of 2005 CPS data, the percentage of uninsured children was higher in the southern (13.7 percent) and western (13.8 percent) regions of the United States than in the northeastern (8.5 percent) and midwestern (8.2 percent) regions. Nearly 40 percent of the nation’s uninsured children lived in three of the most populous states—California, Florida, and Texas—each of which had percentages of uninsured children above the national average. Variations across states in rates of uninsured children may be linked to a number of factors, including the availability of employer-sponsored coverage. We have previously reported that certain types of workers were less likely to have had access to employer-sponsored insurance and thus were more likely to be uninsured. In particular, those working part-time, for small firms, or in certain industries such as agriculture or construction, were among the most likely to be uninsured. Additionally, states with high uninsured rates and those with low rates often were distinct with regard to several characteristics. For example, states with higher than average uninsured rates tended to have higher unemployment and proportionally fewer employers offering coverage to their workers. Small employers—those with fewer than 10 employees—were much less likely to offer health insurance to their employees than larger employers. States’ SCHIP programs reflect the flexibility allowed in structuring approaches to providing health care coverage, including their choice among three program designs—Medicaid expansions, separate child health programs, and combination programs, which have both a Medicaid expansion and a separate child health program component. As of fiscal year 2005, 41 state SCHIP programs covered children in families whose incomes are up to 200 percent of FPL or higher, with 7 of the 41 states covering children in families whose incomes are at 300 percent of FPL or higher.
States generally imposed some type of cost-sharing in their programs, with 39 states charging some combination of premiums, copayments, or enrollment fees, compared with 11 states that did not charge cost-sharing. Nine states reported operating premium assistance programs that use SCHIP funding to subsidize the cost of premiums for private health insurance coverage. As of February 2007, we identified 14 states with approved section 1115 waivers to cover adults, including parents and caretaker relatives, pregnant women, and, in some cases, childless adults. As of July 2006, of the 50 states currently operating SCHIP programs, 11 states had Medicaid expansion programs, 18 states had separate child health programs, and 21 states had a combination of both approaches (see fig. 3). When the states initially designed their SCHIP programs, 27 states opted for expansions to their Medicaid programs. Many of these initial Medicaid expansion programs served as "placeholders" for the state—that is, minimal expansions in Medicaid eligibility were used to guarantee the fiscal year 1998 SCHIP allocation while allowing time for the state to plan a separate child health program. Other initial Medicaid expansions—whether placeholders or part of a combination program—also accelerated the expansion of coverage for children aged 14 to 18 up to 100 percent of FPL, which states are already required to cover under federal Medicaid law. A state's starting point for SCHIP eligibility is dependent upon the eligibility levels previously established in its Medicaid program. Under federal Medicaid law, all state Medicaid programs must cover children aged 5 and under if their family incomes are at or below 133 percent of FPL and children aged 6 through 18 if their family incomes are at or below 100 percent of FPL. Some states have chosen to cover children in families with higher income levels in their Medicaid programs. Each state's starting point essentially creates a "corridor"—generally, SCHIP coverage begins where Medicaid ends and then continues upward, depending on each state's eligibility policy. In fiscal year 2005, 41 states used SCHIP funding to cover children in families with incomes up to 200 percent of FPL or higher, including 7 states that covered children in families with incomes up to 300 percent of FPL or higher. In total, 27 states provided SCHIP coverage for children in families with incomes up to 200 percent of FPL, which was $38,700 for a family of four in 2005. Another 14 states covered children in families with incomes above 200 percent of FPL, with New Jersey reaching as high as 350 percent of FPL in its separate child health program. Finally, 9 states set SCHIP eligibility levels for children in families with incomes below 200 percent of FPL. For example, North Dakota covered children in its separate child health program up to 140 percent of FPL. (See fig. 4.) (See app. I for the SCHIP upper income eligibility levels by state, as a percentage of FPL.) Under federal SCHIP law, states with separate child health programs have the option of using different bases for establishing their benefit packages.
Separate child health programs can choose to base their benefit packages on (1) one of several benchmarks specified in federal SCHIP law, such as the Federal Employees Health Benefits Program (FEHBP) or state employee coverage; (2) a benchmark-equivalent set of services, as defined under federal law; (3) coverage equivalent to state-funded child health programs in Florida, New York, or Pennsylvania; or (4) a benefit package approved by the Secretary of Health and Human Services (see table 1). In some cases, separate child health programs have changed their benefit packages, adding and removing benefits over time, as follows. In 2003, Texas discontinued dental services, hospice services, skilled nursing facilities coverage, tobacco cessation programs, vision services, and chiropractic services. In 2005, the state added many of these services (chiropractic services, hospice services, skilled nursing facilities, tobacco cessation services, and vision care) back into the SCHIP benefit package and increased coverage of mental health and substance abuse services. In January 2002, Utah changed its benefit structure for dental services, reducing coverage for preventive (cleanings, examinations, and x-rays) and emergency dental services in order to cover as many children as possible with limited funding. In September 2002, the dental benefit package was further restructured to include dental coverage for accidents, as well as fluoride treatments and sealants. In 2005, most states' SCHIP programs required families to contribute to the cost of care through some kind of cost-sharing requirement. The two major types of cost-sharing—premiums and copayments—can have different behavioral effects on an individual's participation in a health plan. Generally, premiums are seen as restricting entry into a program, whereas copayments affect the use of services within the program. Research indicates that if cost-sharing is too high, or is imposed on families whose income is too low, it can impede access to care and create financial burdens for families. In 2005, states' annual SCHIP reports showed that 39 states had some type of cost-sharing—premiums, copayments, or enrollment fees—while 11 states reported no cost-sharing in their SCHIP programs. Overall, 16 states charged premiums and copayments, 14 states charged premiums only, and 9 states charged copayments only (see fig. 5). Cost-sharing occurred more frequently in separate child health programs than in Medicaid expansion programs. For example, 8 states with Medicaid expansion programs had cost-sharing requirements, compared with 34 of the states operating separate child health program components. The amount of premiums charged varied considerably among the states that charged cost-sharing. For example, premiums ranged from $5.00 per family per month for children in families with incomes from 150 to 200 percent of FPL in Michigan to $117 per family per month for children in families with incomes from 300 to 350 percent of FPL in New Jersey. Federal SCHIP law prohibits states from imposing cost-sharing on SCHIP-eligible children that totals more than 5 percent of family income annually. In addition, cost-sharing amounts for children may vary on the basis of family income. For example, we earlier reported that in 2003, Virginia's SCHIP copayments for children in families with incomes from 133 percent to below 150 percent of FPL were $2 per physician visit or prescription, compared with $5 for children in families with higher incomes.
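The 5 percent annual cap lends itself to a short worked check. This is a minimal sketch: the $117 monthly premium is New Jersey's figure from above, the income is derived from the $38,700 (200 percent of FPL for a family of four in 2005) cited earlier, scaled to 300 percent, and the function ignores the copayments and enrollment fees that a real cap calculation would also count.

```python
def cost_sharing_check(annual_income, monthly_premium, annual_copays=0.0):
    """Compare total annual cost-sharing with the federal 5% income cap."""
    total = 12 * monthly_premium + annual_copays
    cap = 0.05 * annual_income
    return total, cap, total <= cap

# New Jersey's $117/month premium applied to a family at 300% of FPL,
# roughly 3 x $19,350 = $58,050 for a family of four in 2005.
total, cap, ok = cost_sharing_check(58_050, 117)
print(f"annual cost-sharing ${total:,.0f} vs cap ${cap:,.0f}: {'OK' if ok else 'over cap'}")
# annual cost-sharing $1,404 vs cap $2,902: OK
```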
In fiscal year 2005, nine states reported operating premium assistance programs (see table 2), but implementation remains a challenge. Enrollment in these programs varied across the states. For example, Louisiana reported having under 200 enrollees and Oregon reported having nearly 6,000 enrollees. To be eligible for SCHIP, a child must not be covered under any other health coverage program or have private health insurance. However, some uninsured children may live in families with access to employer-sponsored health insurance coverage. Therefore, states may choose to establish premium assistance programs, where the state uses SCHIP funds to contribute to health insurance premium payments. To the extent that such coverage is not equivalent to the states’ Medicaid or SCHIP level of benefits, including limited cost-sharing, states are required to pay for supplemental benefits and cost-sharing to make up this difference. Under certain section 1115 waivers, however, states have not been required to provide this supplemental coverage to participants. Several states reported facing challenges implementing their premium assistance programs. Louisiana, Massachusetts, New Jersey, and Virginia cited administration of the program as labor intensive. For example, Massachusetts noted that it is a challenge to maintain current information on program participants’ employment status, choice of health plan, and employer contributions, but such information is needed to ensure accurate premium payments. Two states—Rhode Island and Wisconsin—noted the challenges of operating premium assistance programs, given changes in employer-sponsored health plans and accompanying costs. For example, Rhode Island indicated that increases in premiums are being passed to employees, which makes it more difficult to meet cost-effectiveness tests applicable to the purchase of family coverage. States opting to cover adult populations using SCHIP funding may do so under an approved section 1115 waiver. As of February 2007, we identified 14 states with approved waivers to cover at least one of three categories of adults: parents of eligible Medicaid and SCHIP children, pregnant women, and childless adults. (See table 3.) The Deficit Reduction Act of 2005 (DRA), however, has prohibited the use of SCHIP funds to cover nonpregnant childless adults. Effective October 1, 2005, the Secretary of Health and Human Services may not approve new section 1115 waivers that use SCHIP funds for covering nonpregnant childless adults. However, waivers for covering these adults that were approved prior to this date are allowed to continue until the end of the waiver. Additionally, the Secretary may continue to approve section 1115 waivers that extend SCHIP coverage to pregnant adults, as well as parents and other caretaker relatives of children eligible for Medicaid or SCHIP. SCHIP program spending was low initially, as many states did not implement their programs or report expenditures until 1999 or later, but spending was much higher in the program’s later years and now threatens to exceed available funding. Beginning in fiscal year 2002, states together spent more federal dollars than they were allotted for the year and thus relied on the 3-year availability of SCHIP allotments or on redistributed SCHIP funds to cover additional expenditures. But as spending has grown, the pool of funds available for redistribution has shrunk. Some states consistently spent more than their allotted funds, while other states consistently spent less. 
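The 3-year availability rule just mentioned drives much of this arithmetic. Below is a stylized sketch of states drawing down allotments oldest-first within the 3-year window; the dollar figures are invented, and real-world wrinkles (retention of unused balances, territory set-asides, matching-rate details) are ignored.

```python
# Stylized SCHIP financing: spend from the oldest available allotment first.
# allotments[state] maps fiscal year -> remaining federal allotment ($M).
allotments = {
    "State A": {2003: 50, 2004: 50, 2005: 50},   # consistently underspends
    "State B": {2003: 40, 2004: 40, 2005: 40},   # consistently overspends
}
spending = {"State A": 30, "State B": 130}       # fiscal year 2005 spending ($M)

def draw_down(funds, amount, year, window=3):
    """Spend oldest allotments first, using only those within the 3-year window."""
    for fy in sorted(funds):
        if fy > year or year - fy >= window:
            continue                       # not yet issued / already expired
        take = min(funds[fy], amount)
        funds[fy] -= take
        amount -= take
    return amount                          # unmet need after all drawdowns

for state in allotments:
    shortfall = draw_down(allotments[state], spending[state], year=2005)
    unspent = sum(allotments[state].values())
    print(f"{state}: shortfall ${shortfall}M, unspent ${unspent}M")
# State A: shortfall $0M, unspent $120M   -> candidate for redistribution
# State B: shortfall $10M, unspent $0M    -> relies on redistributed funds
```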
Overall, 18 states were projected to have shortfalls—that is, they were expected to exhaust available funds, including current and prior-year allotments—in at least 1 year from 2005 through 2007. To cover projected shortfalls that several states faced, Congress appropriated an additional $283 million for fiscal year 2006. As of January 2007, 14 states are projected to exhaust their allotments in fiscal year 2007. SCHIP program spending began low, but by fiscal year 2002, states’ aggregate annual spending from their federal allotments exceeded their annual allotments. Spending was low in the program’s first 2 years because many states did not implement their programs or report expenditures until fiscal year 1999 or later. Combined federal and state spending was $180 million in 1998 and $1.3 billion in 1999. However, by the end of the program’s third fiscal year (2000), all 50 states and the District of Columbia had implemented their programs and were drawing down their federal allotments. Since fiscal year 2002, SCHIP spending has grown by an average of about 10 percent per year. (See fig. 6.) From fiscal year 1998 through 2001, annual federal SCHIP expenditures were well below annual allotments, ranging from 3 percent of allotments in fiscal year 1998 to 63 percent in fiscal year 2001. In fiscal year 2002, the states together spent more federal dollars than they were allotted for the year, in part because total allotments dropped from $4.25 billion in fiscal year 2001 to $3.12 billion in fiscal year 2002, marking the beginning of the so-called “SCHIP dip.” However, even after annual SCHIP appropriations increased in fiscal year 2005, expenditures continued to exceed allotments (see fig. 7). Generally, states were able to draw on unused funds from prior years’ allotments to cover expenditures incurred in a given year that were in excess of their allotment for that year, because, as discussed earlier, the federal SCHIP law gave states 3 years to spend each annual allotment. In certain circumstances, states also retained a portion of unused allotments. States that have outspent their annual allotments over the 3-year period of availability have also relied on redistributed SCHIP funds to cover excess expenditures. But as overall spending has grown, the pool of funds available for redistribution has shrunk from a high of $2.82 billion in unused funds from fiscal year 1999 to $0.17 billion in unused funds from fiscal year 2003. Meanwhile, the number of states eligible for redistributions has grown from 12 states in fiscal year 2001 to 40 states in fiscal year 2006. (See fig. 8.) Congress has acted on several occasions to change the way SCHIP funds are redistributed. In fiscal years 2000 and 2003, Congress amended statutory provisions for the redistribution and availability of unused SCHIP allotments from fiscal years 1998 through 2001, reducing the amounts available for redistribution and allowing states that had not exhausted their allotments by the end of the 3-year period of availability to retain some of these funds for additional years. Despite these steps, $1.4 billion in unused SCHIP funds reverted to the U.S. Treasury by the end of fiscal year 2005. Congress has also appropriated additional funds to cover states’ projected SCHIP program shortfalls. The DRA included a $283 million appropriation to cover projected shortfalls for fiscal year 2006. CMS divided these funds among 12 states as well as the territories. 
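The shrinking pool can be put in simple terms. In the sketch below, unspent balances form a pool shared by shortfall states; the pro-rata split is purely illustrative (the statute's actual redistribution rules changed several times and are not modeled here), and the figures are invented apart from the pool size echoing the $147 million fiscal year 2004 balance discussed below.

```python
# Hypothetical unspent balances and shortfalls, in millions of dollars.
expiring_unspent = {"State A": 120, "State C": 27}            # pool: $147M
shortfalls = {"State B": 10, "State D": 15, "State E": 35}    # need: $60M

pool = sum(expiring_unspent.values())
need = sum(shortfalls.values())
scale = min(1.0, pool / need)          # pro-rata only if the pool falls short
for state, amount in shortfalls.items():
    print(f"{state}: needs ${amount}M, receives ${amount * scale:.0f}M")
# Here a $147M pool covers the full $60M of need. As unused balances shrank
# (from $2.82 billion of fiscal year 1999 funds to $0.17 billion of fiscal
# year 2003 funds) while claimant states grew from 12 to 40, full coverage
# of shortfalls became impossible.
```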
In the beginning of fiscal year 2007, Congress acted to redistribute unused SCHIP allotments from fiscal year 2004 to states projected to face shortfalls in fiscal year 2007. The National Institutes of Health Reform Act of 2006 makes these funds available to states in the order in which they experience shortfalls. In January 2007, CRS projected that although 14 states would face shortfalls, the $147 million in unused fiscal year 2004 allotments would be redistributed to the five states that were expected to experience shortfalls first. The NIH Reform Act also created a redistribution pool of funds by redirecting fiscal year 2005 allotments from states that at midyear (March 31, 2007) have more than twice the SCHIP funds they are projected to need for the year. Some states consistently spent more than their allotted funds, while other states consistently spent less. From fiscal years 2001 through 2006, 40 states spent their entire allotments at least once, thereby qualifying for redistributions of other states' unused allotments; 11 states spent their entire allotments in at least 5 of the 6 years that funds were redistributed. Moreover, 18 states were projected to face shortfalls—that is, they were expected to exhaust available funds, including current and prior-year allotments—in at least 1 of the final 3 years of the program. (See fig. 9.) As of January 2007, 14 states were projected to exhaust their allotments in fiscal year 2007. When we compared the 18 states that were projected to have shortfalls with the 32 states that were not, we found that the shortfall states were more likely to have a Medicaid component to their SCHIP program, to have a SCHIP eligibility corridor broader than the median, and to cover adults in SCHIP under section 1115 waivers (see table 4). It is unclear, however, to what extent these characteristics contributed to states' overall spending experiences with the program, as many other factors have also affected states' program balances, including prior coverage of children under Medicaid as well as SCHIP eligibility criteria, benefit packages, enrollment policies, outreach efforts, and payment rates to providers. In addition, we and others have noted that the formula for allocating funds to states has flaws that led to underestimates of the number of eligible children in some states and thus underfunding. Fifteen of the 18 shortfall states (83 percent) had Medicaid expansion programs or combination programs that included Medicaid expansions, which generally follow Medicaid rules, such as providing the full Medicaid benefit package and continuing to provide coverage to all eligible individuals even after the states' SCHIP allotments are exhausted. The shortfall states tended to have a broader eligibility corridor in their SCHIP programs, indicating that, on average, the shortfall states covered children in SCHIP from lower income levels, from higher income levels, or both. For example, 33 percent of the shortfall states covered children in their SCHIP programs above 200 percent of FPL, compared with 25 percent of the nonshortfall states. Finally, 6 of the 18 shortfall states (33 percent) were covering adults in SCHIP under section 1115 waivers by the end of fiscal year 2006, compared with 6 of the 32 nonshortfall states (19 percent). On average, the shortfall states that covered adults began covering them earlier than nonshortfall states and enrolled a higher proportion of adults.
At the end of fiscal year 2006, 12 states covered adults under section 1115 waivers using SCHIP funds. Five of these 12 states began covering adults before fiscal year 2003, and all 5 states faced shortfalls in at least 1 of the final 3 years of the program. In contrast, none of the 4 states that began covering adults with SCHIP funds in the period from fiscal year 2004 through 2006 faced shortfalls. On average, the shortfall states covered adults more than twice as long as nonshortfall states (5.1 years compared with 2.3 years by the end of fiscal year 2006). Shortfall states also enrolled a higher proportion of adults. Nine states, including six shortfall states, covered adults using SCHIP funds throughout fiscal year 2005. In these nine states, adults accounted for an average of 45 percent of total enrollment. However, in the shortfall states, the average proportion was more than twice as high as in nonshortfall states. Adults accounted for an average of 55 percent of enrollees in the shortfall states, compared with 24 percent in the nonshortfall states. (See table 5.) While analyses of states as a group reveal some broad characteristics of states' programs, examining the experiences of individual states offers insights into other factors that have influenced states' program balances. States themselves have offered a variety of reasons for shortfalls and surpluses. These examples, while not exhaustive, highlight additional factors that have shaped states' financial circumstances under SCHIP. Inaccuracies in the CPS-based estimates on which states' allotments were based. North Carolina, a shortfall state, offers a case in point. In 2004, the state had more low-income children enrolled in the program than CPS estimates indicated were eligible. To curb spending, North Carolina shifted children through age 5 from the state's separate child health program to a Medicaid expansion, reduced provider payments, and limited enrollment growth. Annual funding levels that did not reflect enrollment growth. Iowa, another shortfall state, noted that annual allocations provided too many funds in the early years of the program and too few in the later years. Iowa did not use all its allocations in the first 4 years, and thus the state's funds were redistributed to other states. Subsequently, however, the state has faced shortfalls as its program matured. Impact of policies designed to curb or expand program growth. Some states have attempted to manage program growth through ongoing adjustments to program parameters and outreach efforts. For example, when Florida's enrollment exceeded a predetermined target in 2003, the state implemented a waiting list and eliminated outreach funding. When enrollment began to decline, the state reinstituted open enrollment and outreach. Similarly, Texas—commensurate with its budget constraints and projected surpluses—has tightened and loosened eligibility requirements and limited and expanded benefits over time in order to manage enrollment and spending. Children without health insurance are at increased risk of forgoing routine medical and dental care, immunizations, treatment for injuries, and treatment for chronic illnesses. Yet the states and the federal government face challenges in their efforts to continue to finance health care coverage for children. As health care consumes a growing share of state general fund or operating budgets, slowdowns in economic growth can affect states' abilities—and efforts—to address the demand for public financing of health services.
Moreover, without substantive programmatic or revenue changes, the federal government faces near- and long-term fiscal challenges as the U.S. population ages because spending for retirement and health care programs will grow dramatically. Given these circumstances, we would like to suggest several issues for consideration as Congress addresses the reauthorization of SCHIP. These include the following: Maintaining flexibility without compromising the goals of SCHIP. The federal-state SCHIP partnership has provided an important opportunity for innovation on the part of states for the overall benefit of children's health. Providing three design choices for states—Medicaid expansions, separate child health programs, or a combination of both approaches—affords them the opportunity to focus on their own unique and specific priorities. For example, expansions of Medicaid offer Medicaid's comprehensive benefits and administrative structures and ensure children's coverage if states exhaust their SCHIP allotments. However, this entitlement status also increases financial risk to states. In contrast, SCHIP separate child health programs offer a "block grant" approach to covering children. As long as the states meet statutory requirements, they have the flexibility to structure coverage on an employer-based health plan model and can better control program spending than they can with a Medicaid expansion. However, flexibility within the SCHIP program, such as that available through section 1115 waivers, may also result in consequences that can run counter to SCHIP's goal—covering children. For example, we identified 14 states that have authority to cover adults with their federal SCHIP funds, with several states covering more adults than children. States' rationale is that covering low-income parents in public programs such as SCHIP and Medicaid increases the enrollment of eligible children as well, with the result that fewer children go uninsured. Federal SCHIP law provides that families may be covered only if such coverage is cost-effective; that is, covering families costs no more than covering the SCHIP-eligible children. We earlier reported that HHS had approved state proposals for section 1115 waivers to use SCHIP funds to cover parents of SCHIP- and Medicaid-eligible children without regard to cost-effectiveness. We also reported that HHS approved state proposals for section 1115 waivers to use SCHIP funds to cover childless adults, which in our view was inconsistent with federal SCHIP law and allowed SCHIP funds to be diverted from the needs of low-income children. We suggested that Congress consider amending the SCHIP statute to specify that SCHIP funds were not available to provide health insurance coverage for childless adults. Under the DRA, Congress prohibited the Secretary of Health and Human Services from approving any new section 1115 waivers to cover nonpregnant childless adults after October 1, 2005, but allowed waivers approved prior to that date to continue. It is important to consider the implications of states' use of allowable flexibility for other aspects of their programs. For example, what assurances exist that SCHIP funds are being spent in the most cost-effective manner, as required under federal law? In view of current federal fiscal constraints, to what extent should SCHIP funds be available for adult coverage?
How has states' use of available flexibility to establish expanded financial eligibility categories and covered populations affected their ability to operate their SCHIP programs within the original allotments provided to them? Considering the federal financing strategy, including the financial sustainability of public commitments. As SCHIP programs have matured, states' spending experience can help inform future federal financing decisions. CRS testified in July 2006 that 40 states were now spending more annually than they received in their original annual SCHIP allotments. While many of them did not face shortfalls in 2006 because of available prior-year balances, redistributed funds, and the supplemental DRA appropriation, 14 states are currently projected to face shortfalls in 2007. With the pool of funds available for redistribution virtually exhausted, the continued potential for funding shortfalls for many states raises some fundamental questions about SCHIP financing. If SCHIP is indeed a capped grant program, to what extent does the federal government have a responsibility to address shortfalls in individual states, especially those that have chosen to expand their programs beyond certain parameters? In contrast, if the policy goal is to ensure that states do not exhaust their federal SCHIP allotments, by providing for the continuing redistribution of funds or additional federal appropriations, does the program begin to take on the characteristics of an entitlement similar to Medicaid? What overall implications does this have for the federal budget? Assessing issues associated with equity. The 10 years of SCHIP experience that states now have could help inform any policy decisions with respect to equity as part of the SCHIP reauthorization process. Although SCHIP generally targets children in families with incomes at or below 200 percent of FPL, 9 states are relatively more restrictive with their eligibility levels, while 14 states are more expansive, ranging as high as 350 percent of FPL. Given the policy goal of reducing the rate of uninsured among the nation's children, to what extent should SCHIP funds be targeted to those states that have not yet achieved certain minimum coverage levels? Given current and future federal fiscal constraints, to what extent should the federal government provide federal financial participation above certain thresholds? What broader implications might this have for flexibility, choice, and equity across state programs? Another consideration is whether the formulas used in SCHIP—both the formula to determine the federal matching rate and the formula to allocate funds to states—could be refined to better target funding to certain states for the benefit of covering uninsured children. Because the SCHIP formula is based on the Medicaid formula for federal matching funds, it has some inherent shortcomings that are likely beyond the scope of consideration for SCHIP reauthorization. For the allocation formula that determines the amount of funds a state will receive each year, several analysts, including CRS, have noted alternatives that could be considered. These include altering the methods for estimating the number of children at the state level, adjusting the extent to which the SCHIP formula for allocating funds to states includes the number of uninsured versus low-income children, and incorporating states' actual spending experiences to date into the formula.
Considering the effects of any one or combination of these—or other—policy options would likely entail iterative analysis and thoughtful consideration of relevant trade-offs. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other members of the Subcommittee may have. For future contacts regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Carolyn L. Yocom, Assistant Director; Nancy Fasciano; Kaycee M. Glavich; Paul B. Gold; JoAnn Martinez-Shriver; and Elizabeth T. Morrison made key contributions to this statement.

Appendix I: SCHIP Upper Income Eligibility by State, Fiscal Year 2005 (expressed as a percentage of FPL)

[The state-by-state eligibility table is not reproduced here.] While Tennessee has not had a SCHIP program since October 2002, in January 2007, CMS approved Tennessee's SCHIP plan, which covers pregnant women and children in families with incomes up to 250 percent of FPL. According to state information, the program will be implemented in early 2007.

Related GAO Products

Children's Health Insurance: State Experiences in Implementing SCHIP and Considerations for Reauthorization. GAO-07-447T. Washington, D.C.: February 1, 2007.
Children's Health Insurance: Recent HHS-OIG Reviews Inform the Congress on Improper Enrollment and Reductions in Low-Income, Uninsured Children. GAO-06-457R. Washington, D.C.: March 9, 2006.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.
Medicaid and SCHIP: States' Premium and Cost Sharing Requirements for Beneficiaries. GAO-04-491. Washington, D.C.: March 31, 2004.
SCHIP: HHS Continues to Approve Waivers That Are Inconsistent with Program Goals. GAO-04-166R. Washington, D.C.: January 5, 2004.
Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003.
Medicaid and SCHIP: States Use Varying Approaches to Monitor Children's Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003.
Health Insurance: States' Protections and Programs Benefit Some Unemployed Individuals. GAO-03-191. Washington, D.C.: October 25, 2002.
Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. Washington, D.C.: July 12, 2002.
Children's Health Insurance: Inspector General Reviews Should Be Expanded to Further Inform the Congress. GAO-02-512. Washington, D.C.: March 29, 2002.
Long-Term Care: Aging Baby Boom Generation Will Increase Demand and Burden on Federal and State Budgets. GAO-02-544T. Washington, D.C.: March 21, 2002.
Medicaid and SCHIP: States' Enrollment and Payment Policies Can Affect Children's Access to Care. GAO-01-883. Washington, D.C.: September 10, 2001.
Children's Health Insurance: SCHIP Enrollment and Expenditure Information. GAO-01-993R. Washington, D.C.: July 25, 2001.
Medicaid: Stronger Efforts Needed to Ensure Children's Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.
Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and Benefits. GAO/HEHS-00-86. Washington, D.C.: April 14, 2000.
Children's Health Insurance Program: State Implementation Approaches Are Evolving. GAO/HEHS-99-65. Washington, D.C.: May 14, 1999.
Medicaid: Demographics of Nonenrolled Children Suggest State Outreach Strategies. GAO/HEHS-98-93. Washington, D.C.: March 20, 1998.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Summary

In August 1997, Congress created the State Children's Health Insurance Program (SCHIP) with the goal of significantly reducing the number of low-income uninsured children, especially those who lived in families with incomes exceeding Medicaid eligibility requirements. Unlike Medicaid, SCHIP is not an entitlement to services for beneficiaries but a capped allotment to states. Congress provided a fixed amount—approximately $40 billion from fiscal years 1998 through 2007—to states with approved SCHIP plans. Funds are allocated to states annually. Subject to certain exceptions, states have 3 years to use each year's allocation, after which unspent funds may be redistributed to states that have already spent all of that year's allocation. GAO's testimony addresses trends in SCHIP enrollment and the current composition of SCHIP programs across the states, states' spending experiences under SCHIP, and considerations GAO has identified for SCHIP reauthorization. GAO's testimony is based on its prior work, particularly testimony before the Senate Finance Committee on February 1, 2007 (see GAO-07-447T). GAO updated this work with the Centers for Medicare & Medicaid Services' (CMS) January 2007 approval of Tennessee's SCHIP program. SCHIP enrollment increased rapidly during the program's early years but has stabilized over the past several years. As of fiscal year 2005, the latest year for which data are available, SCHIP covered approximately 6 million enrollees, including about 639,000 adults, with about 4 million enrollees in June of that year. Many states adopted innovative outreach strategies and simplified and streamlined their enrollment processes in order to reach as many eligible children as possible. States' SCHIP programs reflect the flexibility federal law allows in structuring approaches to providing health care coverage. As of July 2006, states had opted for the following from among their choices of program structures allowed: a separate child health program (18 states), an expansion of a state's Medicaid program (11), or a combination of the two (21). In addition, 41 states opted to cover children in families with incomes at 200 percent of the federal poverty level (FPL) or higher, with 7 of these states covering children in families with incomes at 300 percent of FPL or higher. Thirty-nine states required families to contribute to the cost of their children's care in SCHIP programs through a cost-sharing requirement, such as a premium or copayment; 11 states charged no cost-sharing. As of January 2007, GAO identified 15 states that had waivers in place to cover adults in their programs; these included parents and caretaker relatives of eligible Medicaid and SCHIP children, pregnant women, and childless adults. SCHIP spending was initially low, but now threatens to exceed available funding. Since 1998, some states have consistently spent more than their allotments, while others spent consistently less.
States that outspent their annual allotments over the 3-year period of availability could rely on redistributed portions of other states' unspent SCHIP funds to cover their excess expenditures. By fiscal year 2002, however, states' aggregate annual spending began to exceed annual allotments. As spending has grown, the pool of funds available for redistribution has shrunk. As a result, 18 states were projected to have "shortfalls" of SCHIP funds—meaning they were expected to exhaust all available funds—in at least one of the final 3 years of the program. To cover projected shortfalls faced by several states, Congress appropriated an additional $283 million for fiscal year 2006. SCHIP reauthorization occurs in the context of debate on broader national health care reform and competing budgetary priorities, highlighting the tension between the desire to provide affordable health insurance coverage to uninsured individuals, including low-income children, and the recognition of the growing strain of health care coverage on federal and state budgets. As Congress addresses reauthorization, issues to consider include (1) maintaining flexibility within the program without compromising the primary goal of covering children, (2) considering the program's financing strategy, including the financial sustainability of public commitments, and (3) assessing issues associated with equity, including better targeting SCHIP funds to achieve certain policy goals more consistently nationwide.
The military services use portable or handheld metal detectors as one of several devices to detect and clear hazards such as landmines. As we reported last year, the detection and clearance of buried explosives like landmines is very difficult, and no ideal solution has emerged. Low-metallic content landmines—generally plastic-encased explosives with some metal parts inside—are among the most difficult mines for a metal detector to find, especially when buried. Mines placed on or protruding above the ground surface do not pose the same detection problem as buried mines because it is possible that they could be detected visually. The typical portable metal detector uses electromagnetic induction technologies to find metal objects at or below the ground surface. These detectors induce a magnetic field, which in turn causes a secondary magnetic field to form around nearby objects that have conductive properties, such as the metal in landmines. An object's detectability is a function of the induced magnetic field's strength and of the object's conductive properties, size, shape, and position. For example, copper, aluminum, and ordinary steel are good conductors and relatively easy to detect. Stainless steel is harder to detect than an identical piece of ordinary steel because it offers more resistance to the induced magnetic field and thus produces a weaker or smaller secondary magnetic field. Portable metal detectors operate on either the continuous wave or pulse method of transmitting and receiving. Continuous wave detectors induce and monitor magnetic fields continuously to sense any disruptions caused by a conductive object's secondary field; pulse detectors transmit and receive in alternating cycles in search of secondary magnetic fields. In 1962, the Army fielded the AN/PSS-11, a continuous wave portable mine detector. The last AN/PSS-11s were purchased in 1972. In the late 1970s, the Army began a program to improve the AN/PSS-11's durability and maintainability by replacing its outdated electronics. As the lead service for the Department of Defense (DOD), the Army developed such a detector, tested it successfully, and approved its production in 1984. However, separate attempts to produce the detector to Army specifications—in 1985 with one manufacturer and in 1987 with another—failed due to the manufacturers' technical or financial problems. As the AN/PSS-11 became increasingly difficult to support due to the unavailability of replacement parts, the Army was faced with a shortfall. In May 1990, the Army decided to forgo development of a new or improved detector and instead to purchase a commercially available detector as an interim solution to its immediate shortfall. After screening 12 commercially available metal detectors for sensitivity, suitability, and availability, the Army narrowed the field to two candidates—the AN-19/2 pulse detector made by Schiebel GmbH of Austria and the Metex 4.125 continuous wave detector made by Foerster Instruments, Inc. In March and July 1991, the Army awarded contracts to the respective manufacturers for test articles, with options for future buys. In December 1991, the Army selected the Schiebel detector to replace the AN/PSS-11 and designated it as the AN/PSS-12 Handheld Metallic Mine Detector. By the time the contract expired in March 1996, 18,235 AN/PSS-12s had been ordered and all but a few hundred had been delivered. The total cost of the detectors, including support items, came to $21.9 million. Of the total, 15,553 are for the Army, 571 for the Marine Corps, 326 for the Air Force, 323 for the Navy, and 1,462 for foreign military sales. As of March 1996, the Army had sent 261 of the detectors to Bosnia. The Air Force has also sent AN/PSS-12s to Bosnia.
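The induction principle described at the start of this section can be caricatured in a few lines of Python. This is a toy relative-response model, not detector physics: the conductivity values are approximate, the assumed scaling (response rising with conductivity and target cross-section, falling as roughly the sixth power of distance for a small dipole-like target) is illustrative only, and effects such as steel's magnetic permeability, target shape, and operating frequency are ignored. Even so, the model reproduces two points made above: stainless steel returns a much weaker secondary field than ordinary steel, and burial depth dominates everything else.

```python
# Toy model of an induction detector's relative response to a small
# conductive object. Response is assumed proportional to conductivity and
# cross-sectional area, falling off as 1/r^6 with burial depth.
CONDUCTIVITY_S_PER_M = {       # approximate room-temperature values
    "copper": 5.8e7,
    "aluminum": 3.5e7,
    "ordinary steel": 6.0e6,
    "stainless steel": 1.4e6,
}

def relative_response(material, area_cm2, depth_cm, ref_depth_cm=5.0):
    """Unitless response, relative to the reference depth."""
    return CONDUCTIVITY_S_PER_M[material] * area_cm2 * (ref_depth_cm / depth_cm) ** 6

baseline = relative_response("ordinary steel", 1.0, 5.0)
for metal in CONDUCTIVITY_S_PER_M:
    ratio = relative_response(metal, 1.0, 5.0) / baseline
    print(f"{metal:>15}: {ratio:5.2f}x the ordinary-steel response")

# Doubling burial depth from 5 cm to 10 cm cuts the modeled response 64-fold,
# one reason buried low-metallic mines are so much harder to find.
print(relative_response("ordinary steel", 1.0, 10.0) / baseline)   # 0.015625
```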
Until 2001, when the Army plans to field a new portable detector it is developing for low-metallic and nonmetallic mines, the AN/PSS-12 will remain the Army's primary portable mine detector. According to defense intelligence information, low-metallic content mines have been a recognized threat for the last 14 years and are a prevalent threat in Bosnia. Low-metallic mines have been represented in Army tests of portable and other detection systems since 1983 and were included in the performance specifications used for the 1991 procurement of the AN/PSS-12. Army officials informed us that a separate technology effort was underway before 1991 to address the low-metallic and nonmetallic threat. The Army plans to complete this effort by fiscal year 2001. According to the National Ground Intelligence Center, mines with minimal metal content were first fielded in the early 1950s. For years, however, no criterion or standard existed for defining a mine as having low-metallic content. In the early 1980s, the Center established the U.S. M-19 mine, which contains 2.46 grams of metal, as the threshold for low-metallic mines. By this standard, only mines containing 2.46 grams of metal or less are considered low-metallic threats. According to intelligence reports, over half of the landmines in Bosnia are buried, and about 75 percent of them are low-metallic mines. The metal content of these mines is confined to the aluminum casing around their chemical action fuzes. About eight different types of Yugoslav landmines with this type of fuze have been identified. The Center reported that some former Yugoslav mines containing no metal were known to have been manufactured. These mines' fuzes are wrapped in plastic and would not be detectable by the AN/PSS-12 or any other metal detector. However, the mines recovered so far have all contained aluminum-clad fuzes. Fuzes used in some of these mines contain between 0.4 and 1.5 grams of aluminum. Depending on the type, these mines may contain from one to three fuzes, any one of which is capable of detonating the mine. Examples include the TMA-1 and TMA-5, which contain one fuze; the TMA-2, which contains two fuzes; and the TMA-3 and TMA-4, which contain three fuzes. The most difficult to detect are the PMA-1, which contains less than 0.4 grams of aluminum in its fuze, laid horizontally in the mine, and the PMA-2, which has a vertical fuze (a more difficult position for detection) containing 0.5 grams of aluminum. For detection purposes, the metallic content of multiple fuzes is not additive; according to Army officials, the fuzes are positioned far enough apart in the mine that detection is generally limited to one fuze at a time.
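These classification rules reduce to a few lines of logic: compare against the 2.46-gram M-19 threshold, and, because multiple fuzes are not additive, use the largest single fuze as the effective signature. The sketch below applies them; the per-mine fuze masses are the approximate figures reported above, with the PMA-1 value an assumed point inside its "less than 0.4 gram" range.

```python
LOW_METALLIC_THRESHOLD_G = 2.46   # metal content of the U.S. M-19 mine

def classify(name, fuze_masses_g):
    """Classify a mine by its largest single fuze, since spacing between
    fuzes generally limits detection to one fuze at a time."""
    if not fuze_masses_g:
        return f"{name}: nonmetallic, undetectable by any metal detector"
    effective = max(fuze_masses_g)
    label = "low-metallic" if effective <= LOW_METALLIC_THRESHOLD_G else "metallic"
    return f"{name}: effective signature {effective} g aluminum, {label}"

mines = [
    ("PMA-1", [0.35]),            # assumed value, below the reported 0.4 g
    ("PMA-2", [0.5]),
    ("TMA-4", [1.5, 1.5, 1.5]),   # three fuzes, but they do not add up
    ("plastic-wrapped fuze variant", []),
]
for name, fuzes in mines:
    print(classify(name, fuzes))
# TMA-4's effective signature is a single 1.5 g fuze, not 4.5 g of aluminum.
```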
According to Army officials, the purpose of the 1991 procurement was to buy a detector with performance equal to or better than the AN/PSS-11. The detection and other performance requirements for the 1991 procurement were contained in a modified military specification associated with the earlier attempt to develop an improved version of the AN/PSS-11. This specification required that the detector have a greater than 92-percent probability of detecting metallic mines and mines with small metallic content. The specification described the following targets to be detected in three different types of soils—sand, loam, and magnetite (an iron-based soil): a small steel pin, 4.5 millimeters long, to simulate the M-14 mine (detection of this pin was desired but not required in magnetite); a hollow aluminum tube, 44.5 millimeters long and 6.4 millimeters in diameter; a steel PMN-6 striker pin, 57 millimeters long, one-third of which was 4.8 millimeters in diameter and the remainder 9.5 millimeters in diameter; and a simulated M-16 mine. According to the 2.46 gram standard, the M-14 pin and the aluminum tube represented low-metallic targets. The M-16 is a metal-clad mine. The PMN-6 striker pin falls somewhere between the M-16 and the low-metallic targets. The designation PMN-6 refers to a British-made training mine that is a replica of the Soviet PMN mine. Like the Soviet mine, the PMN-6 training mine has a nonmetallic case and contains several metal components in addition to the striker pin, which collectively amount to over 17 grams of metal. According to the National Ground Intelligence Center, the striker pin itself would not qualify as a low-metallic target because it contains several times the amount of metal as the M-19. According to Army officials, the Army began developing other technologies in the mid-1980s to detect low-metallic and nonmetallic mines. Under a program now known as the Handheld Standoff Mine Detection System, a detector is being developed that integrates ground-penetrating radar, infrared, and metal detection technologies, along with electronics that are intended to synthesize and interpret the signals from the three sensors for the operator. The detector is now in competitive prototype testing and is slated for a production decision in fiscal year 2001. A gap in detection capability against low-metallic and nonmetal mines may remain until then. Our September 1995 report on unexploded ordnance provides additional information on these technologies. To provide some additional capability for U.S. forces in Bosnia, the Army is evaluating commercially available detectors that combine technologies such as ground-penetrating radar with electromagnetic induction methods. These detectors do not possess all of the capabilities planned for the detector now in prototype testing. According to Army officials, recent tests of such systems demonstrated a 70-percent detection capability against low-metallic and nonmetallic mines. The Army does not consider this detection rate acceptable for use by troops in the field. Further testing is planned. The Army's Test and Experimentation Command, under the auspices of the Operational Test and Evaluation Command, conducted two operational tests during 1991 to assess the performance of the candidate metal detectors in a field environment. Short of war, operational testing is the most realistic way of assessing a system's effectiveness and suitability for fielding. However, the operational tests had several shortcomings that complicate the assessment of the comparative performance of the two detector candidates and the baseline AN/PSS-11 against low-metallic mines. In the first test, the Schiebel detector found 3.4 percent of the low-metallic targets, compared with 24.2 percent for the AN/PSS-11. Because the Foerster detector was not included in the first test and the low-metallic targets were excluded from the second test, the Foerster was not tested against these low-metallic targets and no comparison could be made.
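The reported rates can be checked directly against the greater-than-92-percent specification. A minimal sketch, treating each emplaced target as a pass/fail trial; the trial counts below are hypothetical, chosen only to reproduce the reported percentages, since the actual numbers of targets are not given here.

```python
REQUIREMENT = 0.92   # specified probability of detection

def assess(detected, emplaced):
    rate = detected / emplaced
    verdict = "meets" if rate > REQUIREMENT else "falls short of"
    return f"{rate:.1%} ({detected}/{emplaced}) {verdict} the >92% requirement"

print(assess(4, 118))    # ~3.4%, Schiebel vs. low-metallic targets
print(assess(29, 120))   # ~24.2%, AN/PSS-11 vs. low-metallic targets
print(assess(111, 120))  # 92.5%, the kind of rate the specification demands
```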
The Foerster detected twice as many of the lowest metal content targets present at the beginning of the second test, but the Army concluded the targets were not representative of the higher metallic content Soviet mine and ruled them invalid. Because performance against higher metallic targets was equal, price became the deciding factor in the procurement decision. The first operational test of the detector candidates was conducted during March 20-28, 1991. Prior to this test, the Army had screened 12 different commercial detectors and had eliminated all but one because of (1) technical, performance, or production shortcomings or (2) high prices. Two Foerster candidates were among the detectors eliminated on the basis of price. Thus, the Schiebel was the only detector forwarded for operational testing with the baseline AN/PSS-11 detector. This test included four target types: metal-clad training M-15 and M-16 mines, the M-14 pin, and the PMN-6 striker pin. Given that the hollow aluminum tube described in the specification was not used in the test, the M-14 pin was the only low-metallic target. The results of the test are shown in table 1. On the basis of these results, the test report recommended the following: "It is strongly recommended that the Government not purchase this mine detector as a replacement for the AN/PSS-11 at this time. Rather, another survey should be conducted to identify candidate mine detectors that meet military specifications outlined in the test and evaluation master plan. Further technical and operational testing should result in a more suitable replacement mine detector." The Test and Evaluation Command's evaluation further observed that "against mines with small metallic content (i.e., the M-14 and the PMN-6), the AN-19/2 fell considerably short of the PS requirement. . . . Indeed, its performance during test was distinctly inferior to that of the AN/PSS-11 under the same conditions, although the AN/PSS-11 itself did not meet the PS requirement either." The Test and Evaluation Command did state that the procurement decision should not depend too heavily on the detectors' inability to detect low-metallic mines because such mines were just a step away from nonmetal mines, which would render a metallic mine detector useless. Nonetheless, the Command recommended that the Army (1) not approve the Schiebel for fielding as the AN/PSS-12 and (2) reexamine the role of the mine detector in the Army and confirm that the detection of mines with small metallic content remained a valid need. Ultimately, the Army decided to eliminate the M-14 target from further testing because it concluded that the target's metal content was so low that it was essentially nonmetal. It was not replaced with another low-metallic target. Army officials informed us that the user representative at the time did not want to reject the Schiebel on the basis of its performance against less lethal mines such as the M-14—considered likely to injure, rather than kill—if it could detect more lethal mines that could kill several individuals. This was a significant decision because the M-14 pin had been cited in the performance specification and had been used in Army testing of portable mine detectors since 1983. Such testing included the attempted product improvements of the AN/PSS-11 and the original screening of the 12 commercial detector candidates. The Army realized in 1992 that the M-14 pin contained only a portion—0.29 grams—of the total metal in the M-14 mine. According to testing conducted in 1996, the actual mine is more detectable than the target used.
Had the Army known this at the time of the 1991 testing, it might have been able to substitute a more authentic low-metallic mine target. Following the filing of a bid protest, the Army decided to readmit one of Foerster's two original candidates to the competition, and that detector therefore participated in the second operational test. The second operational test was held from September 17, 1991, to October 4, 1991. It included three examples each of the Foerster, Schiebel, and AN/PSS-11. As in the previous test, this test used targets designated as PMN-6s to simulate low-metallic mines. However, the second test used PMN-6 training mine casings, which contained the steel striker pin, a spring, and a small washer. Shortly before the test began, representatives from the program manager's office and the U.S. Army Engineer School (which represented the user) contended that the PMN-6 target did not contain as much metal as a real Soviet PMN mine. They stated that metal would have to be added to the PMN-6 test targets already buried to make them realistic. However, the purpose of the target was not to replicate the Soviet mine. In fact, the test report indicated that the purpose of the PMN-6 striker pin was to simulate the M-19 mine. The Soviet mines that the PMN-6 was modeled after are not low-metallic mines. The National Ground Intelligence Center reports that no Soviet landmine contains less than 8 grams of metal, which is more than the 2.46-gram threshold. While it would have been reasonable to ensure that the target was a fair replica of either the M-19 low-metallic mine or the striker pin described in the specification, it was not reasonable to insist that the target replicate the Soviet mine. The test team maintained that adding metal to the PMN-6 target could make its detectability climb to 100 percent; thus, there would be no way to discriminate one detector's performance from another's. Ultimately, it was agreed that a 2-inch metal washer would be added to each PMN-6 target. This was done by inserting the washers beneath the surface and on top of the buried targets, without digging them up. Because the M-14 low-metallic target had already been eliminated, adding metal to the PMN-6 was a key decision because it effectively eliminated the only remaining target the test team considered to have a metal content low enough to differentiate the performance of one detector from another. The Test and Experimentation Command had planned a 1-day pilot test to work out procedures and firm up preparations for the operational test. The Command decided to conduct the pilot test with the PMN-6 targets in their original condition—without the large washer. Table 2 shows the results of the pilot test. The percentages in table 2 are the averages for the three detectors of each type used. The best performance by a Foerster was 76.67 percent; by a Schiebel, 43.33 percent; and by an AN/PSS-11, 43.33 percent. While these results were included in the test report, they were excluded from the analysis of operational test results for the procurement decision on the flawed basis that the target was unrepresentative of a Soviet mine. After the pilot test, the second operational test was conducted with the PMN-6 targets augmented with the large metal washers. The other two mine targets included in the test were metal-clad mines and thus had high metal content. These were the M-8, a training version of the M-16 mine, and the TMN-46, a training version of a Soviet antitank mine. Table 3 shows the results of the test.
These results showed that against the high-metallic mine targets remaining in the operational test, all three detectors found virtually all the mines and passed the Army’s 92-percent detection requirement. The results also confirmed the test team’s concern that adding metal to the PMN-6 target could cause detection percentages to climb to 100 percent for all the detectors. The Army’s decision to procure the Schiebel was based on detection performance against only the high metal content mines. In a December 13, 1991, memorandum, the Chairman of the Source Selection Board in charge of selecting the best detector candidate concluded that the performance difference between the detectors was not significant and that the additional cost of the Foerster was not justified by any significant increase in technical or operational benefit. Army officials informed us that because none of the detectors, including the AN/PSS-11, could meet the 92-percent requirement against low-metallic mines, they were equally unable to satisfactorily detect such mines. Therefore, the ability to detect low-metallic mines was no longer a discriminating factor in selecting a replacement for the AN/PSS-11. Nonetheless, in the only comparable operational test, the Foerster detector demonstrated a significantly better ability to detect the lowest metal mine target—the pilot test’s PMN-6 target—than the Schiebel detector. Whether the Foerster’s better performance in the pilot test was worth its higher price was not assessed because low-metallic mines had already been eliminated as a factor by the time the decision was made. Over the years, the Army has gathered performance data on portable mine detectors from a number of sources, including technical tests, operational tests, demonstrations, and from actual use in operations, such as in Somalia. Regardless of how data is gathered, the performance of portable mine detectors is affected by several factors that, if not controlled, make it difficult to compare one test or operation with another. In the numerous tests and demonstrations of portable mine detectors conducted since 1983, these factors have not been held to consistent, realistic, or technically sound standards. The factors include target type, target burial depth and position, soil type and moisture content, and the distance between the detector head and ground surface. Performance is also affected by the proficiency of the operator, including such factors as maintaining the correct height and speed of the detector head as it is swept back and forth in the search for targets, the level of training, and the operator’s ability to pick up audio and visual cues that can help indicate the presence of a mine. In addition, as suggested by test results, different detectors of the same model can vary in performance. While tests, by their nature, are conducted under controlled conditions to provide for valid data collection and analysis, technical and operational tests are conducted under different circumstances and are interpreted differently. Technical testing is intended to determine the technical capabilities of a detector under ideal conditions. While such testing can eliminate detectors that do not have the ability to meet performance requirements, it is not intended to assess performance under field conditions. Operational testing is much more realistic than technical testing, as it can introduce more factors that affect performance results, most importantly, the operator-machine interface. 
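A short sketch can illustrate two of these points at once: pilot-test results were averaged over three units of each model, and individual units of the same model vary. Only the best-unit figures (76.67, 43.33, and 43.33 percent) come from the report; the other per-unit rates are invented placeholders, chosen so the AN/PSS-11 average approximately matches the 28.9 percent pilot figure cited elsewhere in this report.

```python
from statistics import mean

# Per-unit detection rates (%) for the three units of each model in the
# pilot test. Best-unit values are from the report; the rest are
# hypothetical placeholders for illustration.
pilot_rates = {
    "Foerster":  [76.67, 70.00, 63.33],
    "Schiebel":  [43.33, 36.67, 30.00],
    "AN/PSS-11": [43.33, 23.33, 20.00],   # average ~28.9%, as reported
}

for model, rates in pilot_rates.items():
    spread = max(rates) - min(rates)
    print(f"{model:>10}: average {mean(rates):5.2f}%, best {max(rates):5.2f}%, "
          f"unit-to-unit spread {spread:5.2f} points")
```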
The two operational tests of portable mine detectors the Army conducted in 1991 are illustrative of how difficult it is to isolate detector performance from other factors when comparing test results. Their test conditions are compared in table 4. Some tests are actually demonstrations, which fall somewhere between technical and operational testing, although they do not necessarily provide the discipline or data to support statistically valid or independent data analysis. Demonstrations of portable mine detectors have been conducted in a field environment; however, the detectors have been operated by contractor personnel or Army civilian personnel. Again, as in operational testing, demonstrations must contend with a variety of factors that can affect detector performance. While demonstrations do not enable conclusions to be drawn about a detector's ability to meet military requirements, they are a vehicle for quickly gauging a detector's potential performance in the field. While the use of portable mine detectors in actual operations provides realistic information on detection performance, the number of mines detected is not usually recorded, and the number of mines missed, absent maps and records, may not be known. Results can also be site-specific as to soil type, moisture content, and temperatures. Thus, these operations do not lend themselves to quantification of a detector's performance. Moreover, one mishap can prove fatal. The AN/PSS-12 was used by U.S. forces in Somalia and by U.S. contractors in Kuwait and is currently deployed with U.S. forces in Bosnia. An Army after-action report from operations in Somalia states that the AN/PSS-12 could not detect low-metallic mines, but it offers no specifics on the detection shortfalls. Although many landmines were reportedly found by U.S. contractors in Kuwait using the Schiebel and other metal detectors, the fact that they were buried in sand and in patterns made them easier to find than might be the case in other situations. These operations do not provide information on the percentage or number and types of mines that were found by the metal detectors, nor do they indicate what mines were not detected. The performance of the AN/PSS-11 in several tests conducted since 1983 illustrates how the measured performance of a detector can vary from one test to the next. In a 1983 field test outdoors at Fort Belvoir, Virginia, the AN/PSS-11 detected 80 percent of M-14 mine targets. In 1985 testing at the Fort Belvoir indoor mine lane facility, prototypes of the product-improved version of the AN/PSS-11 detected none of the M-14 targets buried in sand and 67 percent of the M-14 targets on the surface. In a 1988 field test to establish the AN/PSS-11's detection capabilities as a standard for the Army's development of a vehicle-mounted detection system, the AN/PSS-11 detected 82.5 percent of buried M-19 mine targets. As stated previously, the AN/PSS-11 detected 24.2 percent of M-14 targets, 80.8 percent of PMN-6 striker pins, and 28.9 percent of PMN-6 targets (without the large washer) in the 1991 operational tests. The data from these various sources defy a definitive conclusion on the performance of a detector that has been in the Army's inventory for 30 years. According to the Army, U.S. troops have not experienced problems with the AN/PSS-12 in Bosnia. Army officials have cited the successful use of the detector by other countries and the detectability of low-metallic mines in Bosnia as further evidence of the AN/PSS-12's potential for performance there.
However, this information is not consistent with the Army's 1991 test results and information from other sources. Consequently, we believe the potential effectiveness of the AN/PSS-12 against low-metallic mines in Bosnia is inconclusive. The steps the Army has taken to minimize the threats posed by landmines there and the resultant infrequent reliance on the AN/PSS-12 may help to explain why the detector's poor performance against low-metallic targets in testing has not been exhibited in Bosnia. While the Army does not know the percentage of each type of mine detected by the AN/PSS-12 since deploying to Bosnia, officials said that when the detector has been used it has worked well. As of July 1996, they reported that no U.S. troop casualties had occurred as a result of the detector's having failed to detect a mine in Bosnia. Army officials noted that several other countries purchased the Schiebel detector before the United States, including Germany, Canada, Israel, Sweden, and the United Kingdom. They said that Canada and Sweden successfully used the Schiebel in the former Yugoslavia before the U.S. troops deployed there. The Schiebel was the detector of choice for contractors that conducted mine-clearing operations in Kuwait and for the United Nations in several of its demining operations. According to Army officials, the Army's division engineer in Bosnia was not interested in any performance enhancements because the AN/PSS-12 was performing well. The Army also believes that the mines found so far in Bosnia have had enough metal content to be detectable by the AN/PSS-12. While these mines are classified as low-metallic mines, they reportedly contain more metal than the M-14 target used in the March 1991 operational test. More importantly, the metal contained in the Bosnian mines is aluminum. Because aluminum is less dense than steel, a piece of aluminum that weighs the same as a piece of steel would be considerably larger. Thus, according to Army officials, the aluminum in the Bosnian mines not only weighs more than the M-14 test target but is also physically larger. Other information clouds the overall picture of the AN/PSS-12's use in operations. During the course of our review, we learned that Germany has decided to replace its Schiebel detectors with a detector made by Vallon GmbH of Germany, and the Netherlands is using a Foerster detector in Bosnia. In 1993, the United Kingdom replaced its Schiebel detectors in Cambodia. A State Department official assisting with the international humanitarian demining effort in Bosnia informed us that the AN/PSS-12 is used only in conjunction with probes (pointed rods used by hand). The Marine Corps informed us that it prefers the technology of the AN/PSS-11 and currently uses the old detector in Guantanamo Bay, Cuba. In April 1996, the Air Force issued a message to its explosive ordnance technicians deployed in Bosnia to clear landmines and other explosives from airfields, cautioning them that the AN/PSS-12 does not have the sensitivity to detect low-metallic mines they may encounter. The Air Force is processing an urgent contracting action to purchase another metal detector to replace its AN/PSS-12s in Bosnia. This action is unrelated to the Army's near-term effort to evaluate commercial detectors that combine technologies for potential application to Bosnia. We attempted to verify that the aluminum found in mines in Bosnia was in fact more detectable than the steel targets used in the 1991 testing.
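The size argument turns on the density gap between the two metals and can be checked with handbook values alone. A minimal sketch, assuming typical densities of about 7.85 g/cm3 for steel and 2.70 g/cm3 for aluminum (assumed textbook values, not figures from the report):

# Densities are textbook approximations, not report data.
STEEL_G_PER_CM3 = 7.85
ALUMINUM_G_PER_CM3 = 2.70

def volume_cm3(mass_g, density_g_per_cm3):
    # Volume occupied by a given mass of metal.
    return mass_g / density_g_per_cm3

mass_g = 1.0  # any equal mass yields the same ratio
ratio = volume_cm3(mass_g, ALUMINUM_G_PER_CM3) / volume_cm3(mass_g, STEEL_G_PER_CM3)
print(f"equal-mass aluminum occupies about {ratio:.1f} times the volume of steel")

The ratio is about 2.9, so an equal-weight piece of aluminum is roughly three times bulkier than a piece of steel. That is the arithmetic behind the officials' claim, although, as discussed next, bulk alone did not settle the detectability question.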
We contacted several countermine, testing, and explosive ordnance organizations within the services, and none reported that they had developed credible data on the comparative detectability of different metals. At our request, a manufacturer of measurement and detection equipment compared the detectability of an aluminum target approximating a 0.4-gram aluminum fuze found in Bosnian mines with a steel target approximating the M-14 pin used in the Army's tests. The comparison did not show that the aluminum target was unequivocally more detectable than the steel target. We did not attempt to assess how the detectability of a 1- to 1.5-gram piece of aluminum found in some mines in Bosnia would compare with the more substantial PMN-6 striker pin used in testing. According to information we obtained from Department of State, Defense Intelligence Agency, and Army officials, several measures have minimized the risks landmines pose to U.S. troops in Bosnia. These measures include the following: The former warring parties, who are responsible for removing landmines, have provided maps, when available, of mined areas so that these areas can be avoided. For the most part, landmines are believed to be concentrated in known zones of separation that formerly existed between the warring factions. These zones are avoided when possible. However, a State Department official said less is known about the landmine threat outside these zones. Because U.S. forces are not taking ground as they would in a combat situation, they can move along established routes or roads. This gives combat engineers the opportunity to run rollers down the routes several times to detonate mines before any attempts are made at dismounted mine detection. Most main routes are believed to be safe. Some mine survey, route clearance assurance, and site clearance work has been contracted out. According to the State Department, areas considered cleared by the former warring parties must still be verified by peacekeeping forces. This is because the warring parties (1) are responsible for clearing areas only within the first 30 days after turning the areas over, (2) do not necessarily have the best mine detection and clearance equipment or training, and (3) did not prepare many maps of mined areas. Army officials have described their approach to mined areas in Bosnia as follows. All personnel are provided with extensive mine-awareness training before going into the theater. Before U.S. forces move into an area, an intelligence assessment is made. Discussions are held with the former warring parties to determine whether the area is mined and, if so, what kinds of mines were used. At a more detailed level, some exploration may be done by engineers using probes to find sample mines. The troops can then verify whether the mines are consistent with the initial assessment. Data sheets on the threat mines are available that describe the characteristics of each mine and help make an accurate identification. If an area cannot be accessed by rollers, the combat engineers then assess whether the mines found can be detected with the AN/PSS-12. Army officials said they do this by actually checking the detector against the sample mines found in the ground. If the mines can be detected with the AN/PSS-12, then U.S. troops can go in dismounted. If the mines are not detectable, U.S. troops do not go in dismounted. As a last resort, probes could be used.
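The approach Army officials described reduces to a short decision chain. The sketch below restates it schematically; the function and argument names are ours, not the Army's:

def mined_area_entry_decision(rollers_can_access: bool, sample_mines_detectable: bool) -> str:
    # Schematic restatement of the procedure described by Army officials.
    if rollers_can_access:
        return "proof the route with rollers before any dismounted work"
    if sample_mines_detectable:
        # Verified by checking the AN/PSS-12 against sample mines
        # that engineers located in the ground with probes.
        return "dismounted detection with the AN/PSS-12"
    return "no dismounted entry; probes only as a last resort"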
Had the 1991 operational testing properly portrayed low-metallic mines, the Army might have had greater assurance that the detector it selected as the AN/PSS-12 was the best choice at the time against the full range of landmines. The limitations of this testing are perhaps more apparent now than at the time; while the testing became focused on higher-metallic-content mines, low-metallic mines are prevalent in Bosnia. Although testing may not be able to replicate all of the conditions expected in actual operations, it should provide a sound assessment of detection and other performance capabilities that can serve as a consistent baseline for comparing results from test to test. Because the 1991 testing did not provide such a foundation, an assessment of the AN/PSS-12's performance in Bosnia or any operation is perhaps more subjective than it should be. Accordingly, we recommend that the Secretary of Defense establish and enforce realistic and consistent test standards for testing countermine and mine detection systems that reflect known threat mines and the conditions under which they are likely to be encountered. Such standards should be applied not only to the acquisition of new systems but to the evaluation of near-term or experimental solutions as well. DOD concurred with our recommendation to establish realistic and consistent test standards. It also noted that the research, development, testing, and evaluation of countermine and mine detection systems were being reviewed by an unexploded ordnance clearance executive committee and steering group (see app. I). Although DOD concurred with our recommendation, it stated that the soldiers in Bosnia are not in danger due to the performance of the AN/PSS-12 in the presence of low-metallic mines and disagreed with any implication to the contrary. DOD reiterated that U.S. forces avoid mines when possible, using devices such as rollers and probes in addition to the AN/PSS-12 when mines are encountered, and that other countries selected the same detector before the Army did. These points were covered in the draft report. The information available to date supports DOD's characterization of the relative safety of U.S. forces operating in the presence of landmines in Bosnia. The analytical dilemma is in reconciling the poor performance of the AN/PSS-12 against low-metallic targets in operational testing with its reported satisfactory performance in Bosnia, where low-metallic mines are prevalent. We believe it is the prudent steps taken by the Army to avoid and minimize the landmine threat in Bosnia—more so than the capability of the AN/PSS-12 or the detectability of the low-metallic mines there relative to the test targets—that explain the difference between the detector's performance in operational testing and its experience in Bosnia. DOD also noted that an independent technical test conducted in June 1996 within DOD shows that the AN/PSS-12 can consistently detect M-14 low-metallic mines when inert mines are used instead of targets. The data from this test indicate that the inert M-14 mine is more detectable by the AN/PSS-12 than the M-14 firing pin first used and later removed as a target in the 1991 operational testing, although no detection percentages were obtained to measure consistency. The improvement is attributed to the fact that the inert mine contains more metal than the firing pin.
The June 1996 test does raise additional questions about the usefulness of the information obtained in Army testing since 1983 that used the M-14 firing pin as a target. However, it does not supplant the 1991 operational test results because it was a limited technical test and was not intended to replicate a realistic environment. In the June 1996 test, landmines were not buried but placed on the ground with the detectors held directly over them. The essence of the test was to lower the detector over the mine and record the distance at which the detection was made; no searching was involved. By comparison, in the pilot test phase of the September-October 1991 operational test, the Schiebel detector found only 32.2 percent of the PMN-6 targets, which contained significantly more metal than the inert M-14 mine. The need to put the June 1996 test results into the proper perspective underscores the value of establishing realistic and consistent test standards. To obtain information for this report, we reviewed numerous documents relating to the test and evaluation of portable mine detectors, including several military services' test reports since 1983, the contract file on the AN/PSS-12 procurement, files from previous investigations of the AN/PSS-12 procurement conducted within DOD, the after-action report on Somalia, threat publications prepared by the National Ground Intelligence Center, Army, Navy, and U.S. Marine Corps evaluations, and evaluations conducted by the Naval Explosive Ordnance Disposal Technology Division. We interviewed officials from the Office of the Secretary of Defense; the Departments of State, the Army, the Navy, and the Air Force; the U.S. Marine Corps; the Defense Intelligence Agency; the National Ground Intelligence Center; and the Joint Naval Explosive Ordnance Disposal Technology Division. We also interviewed current and former Army program officials, representatives from the Army contracting office at the Army Aviation and Troop Command, current and former Army user representatives from the Army Engineer School, representatives from the Army Test and Experimentation Command involved with the conduct of both operational tests, and a representative from the Army Waterways Experimentation Station that supplied PMN-6 mines for the second operational test. We did not visit Bosnia-Herzegovina, but information was obtained from Army officials in direct contact with units there and from other sources as indicated. We also interviewed representatives from detection equipment manufacturers and, at our request, the Canadian firm, Geonics, Ltd., conducted a laboratory test to compare the detectability of steel and aluminum targets. We conducted our review from December 1995 to July 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees and the Secretary of Defense. We will also make copies available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report were Paul L. Francis and James B. Dowd.
| Pursuant to a congressional request, GAO reviewed the Army's development of a portable land mine detector, focusing on: (1) how the Army's AN/PSS-12 mine detector performed in detecting low-metallic mines in procurement tests; (2) the nature of the land mine threat in Bosnia-Herzegovina; and (3) the mine detector's potential effectiveness in Bosnia. GAO found that: (1) the Army has not clearly demonstrated the ability of its AN/PSS-12 mine detector to detect low metallic mines; (2) the detector performed poorly during operational testing and failed to meet the Army's 92-percent detection requirement against low metallic mines; (3) although both candidate detectors performed equally well after the Army removed low metallic targets from the procurement tests, the Army selected the AN/PSS-12 because of its lower price; (4) the detector's field accuracy is questionable, since the Army did not sufficiently control other environmental and operating factors that can affect detector performance; (5) the detector's usefulness in Bosnia may be limited because about 75 percent of the buried mines have a low metallic content; (6) although the detector's reported performance in Bosnia is good, the Army has limited the detector's use there; (7) the Air Force has warned its personnel in Bosnia that the detector is not sufficiently sensitive to low metallic mines and some countries have switched to other mine detectors; and (8) the Army has reduced its reliance on the detector through alternative threat-reduction practices, such as extensive personnel training in mine awareness, avoiding or carefully selecting routes through suspected mine fields, and using heavy equipment to clear paths.
The Aviation and Transportation Security Act (ATSA) established TSA as the federal agency with primary responsibility for securing the nation's civil aviation system, which includes the screening of all passengers and property transported from and within the United States by commercial passenger aircraft. In accordance with ATSA, all passengers, their accessible property, and their checked baggage are screened pursuant to TSA-established procedures at the 463 airports presently regulated for security by TSA. These procedures generally provide, among other things, that passengers pass through security checkpoints where they, their identification documents, and their accessible property are checked by transportation security officers (TSOs), other TSA employees, or private-sector screeners under TSA's Screening Partnership Program. Airport operators, however, also have direct responsibility for implementing TSA security requirements such as those relating to perimeter security and access controls, in accordance with their approved security programs and other TSA direction. TSA relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers include behavior detection officers (BDOs), who examine passenger behaviors and appearances to identify passengers who might pose a potential security risk at TSA-regulated airports; travel document checkers, who examine tickets, passports, and other forms of identification; TSOs responsible for screening passengers and their carry-on baggage at passenger checkpoints, using x-ray equipment, magnetometers, Advanced Imaging Technology, and other devices; random employee screening; and checked-baggage screening systems. DHS's Science and Technology Directorate (S&T) and TSA have taken actions to coordinate and collaborate in their efforts to develop and deploy technologies for aviation security. For example, they entered into a 2006 memorandum of understanding for using S&T's Transportation Security Laboratory, and they established the Capstone Integrated Product Team for Explosives Prevention in 2006 to help DHS, TSA, and the U.S. Secret Service to, among other things, identify priorities for explosives prevention. Our past work has found that technology program performance cannot be accurately assessed without valid baseline requirements established at the program start. Without the development, review, and approval of key acquisition documents, such as the mission need statement, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. For example, in June 2010, we reported that over half of the 15 DHS programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, or establishing acquisition program baselines. In particular, TSA's Electronic Baggage Screening Program did not have a department-approved program baseline or program requirements, but TSA is acquiring and deploying next-generation explosive detection technology to replace legacy systems. We made a number of recommendations to help address issues related to these procurements as discussed below. DHS has generally agreed with these recommendations and, to varying degrees, has taken actions to address them.
In addition, our past work has found that TSA faces challenges in identifying and meeting program requirements in a number of its programs. For example: In July 2011, we reported that TSA revised its explosive detection system (EDS) requirements to better address current threats and plans to implement these requirements in a phased approach. However, we reported that some number of the EDSs in TSA's fleet are configured to detect explosives at the levels established in the 2005 requirements. The remaining EDSs are configured to detect explosives at 1998 levels. When TSA established the 2005 requirements, it did not have a plan with the appropriate time frames needed to deploy EDSs to meet the requirements. To help ensure that EDSs are operating most effectively, we recommended that TSA develop a plan to deploy and operate EDSs to meet the most recent requirements and to ensure that new and currently deployed EDSs are operated at the levels in established requirements. DHS concurred with our recommendation and has begun taking action to address it; for example, DHS reported that TSA has developed a plan to evaluate its current fleet of EDSs to determine the extent to which they comply with these requirements. However, our recommendation is intended to ensure that TSA operate all EDSs at airports at the most recent requirements. Until TSA develops a plan identifying how it will approach the upgrades for currently deployed EDSs—and the plan includes such items as estimated costs and the number of machines that can be upgraded—it will be difficult for TSA to provide reasonable assurance that its upgrade approach is feasible or cost-effective. Further, while TSA's efforts are positive steps, it is too early to assess their effect or whether they address our recommendation. In October 2009, we reported that TSA passenger screening checkpoint technologies were delayed because TSA had not consistently communicated clear requirements for testing the technologies. We recommended that TSA evaluate whether current passenger screening procedures should be revised to require the use of appropriate screening procedures until TSA determined that existing emerging technologies meet their functional requirements in an operational environment. TSA agreed with this recommendation. However, communications issues with the business community persist. In July 2011, we reported that vendors for checked-baggage screening technology expressed concerns about the extent to which TSA communicated with the business community about the current EDS procurement. TSA agreed with our July 2011 recommendation to establish a process to communicate information regarding TSA's EDS acquisition to EDS vendors in a timely manner and reported taking actions to address it, such as soliciting more feedback from vendors through kickoff meetings, industry days, and classified discussions of program requirements. Our prior work has also shown that not resolving problems discovered during testing can sometimes lead to costly redesign and rework at a later date. Addressing such problems before moving to the acquisition phase can help agencies better manage costs. Specifically: In June 2011, we reported that S&T's Test & Evaluation and Standards Office, responsible for overseeing test and evaluation of DHS's major acquisition programs, reviewed or approved test and evaluation documents and plans for programs undergoing testing, and conducted independent assessments for the programs that completed operational testing.
DHS senior-level officials considered the office's assessments and input in deciding whether programs were ready to proceed to the next acquisition phase. However, the office did not consistently document its review and approval of components' test agents—a government entity or independent contractor carrying out independent operational testing for a major acquisition. In addition, the office did not document its review of other component acquisition documents, such as those establishing programs' operational requirements. We recommended, among other things, that S&T develop mechanisms to document its review of component acquisition documentation. DHS concurred and reported that actions were underway to address our recommendations. In July 2011, we reported that TSA experienced challenges in collecting data on the physical and chemical properties of certain explosives, data that vendors need to develop EDS detection software. These data are also needed by TSA for testing the machines to determine whether they meet established requirements prior to their procurement and deployment to airports. TSA and S&T have experienced these challenges because of problems associated with safely handling and consistently formulating some explosives. The challenges related to data collection for certain explosives have resulted in problems carrying out the EDS procurement as planned. Specifically, attempting to collect data for certain explosives while simultaneously pursuing the EDS procurement delayed the EDS acquisition schedule. We recommended that TSA develop a plan to ensure that TSA has the explosives data needed for each of the planned phases of the 2010 EDS requirements before starting the procurement process for new EDSs or upgrades included in each applicable phase. DHS stated that TSA modified its strategy for the EDS competitive procurement in July 2010 in response to the challenges in working with the explosives for data collection by removing the data collection from the procurement process. While TSA's plan to separate the data collection from the procurement process is a positive step, we believe that, to fully address our recommendation, TSA needs a plan establishing a process for ensuring that data are available before starting the procurement process for new EDSs or upgrades for each applicable phase. In July 2011, we also reported that TSA revised EDS explosives detection requirements in January 2010 to better address current threats and plans to implement these requirements in a phased approach. TSA had previously revised the EDS requirements in 2005, though it did not begin operating EDSs to meet the 2005 requirements until 2009. Further, TSA deployed a number of EDSs that had the software necessary to meet the 2005 requirements, but because the software was not activated, these EDSs were still detecting explosives at levels established before TSA revised the requirements in 2005. TSA officials stated that prior to activating the software in these EDSs, they must conduct testing to compare the false-alarm rates for machines operating at one level of requirements to those operating at another level of requirements. According to TSA officials, the results of this testing would allow them to determine if additional staff are needed at airports to help resolve false alarms once the EDSs are configured to operate at a certain level of requirements. TSA officials told us that they plan to perform this testing as a part of the current EDS acquisition.
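One conventional way to frame the false-alarm comparison TSA officials described is a two-proportion test. A minimal sketch with hypothetical counts (TSA's actual test design and data are not described in our reports):

import math

def two_proportion_z(alarms_a, bags_a, alarms_b, bags_b):
    # z statistic for the difference between two false-alarm rates.
    p_a, p_b = alarms_a / bags_a, alarms_b / bags_b
    pooled = (alarms_a + alarms_b) / (bags_a + bags_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / bags_a + 1 / bags_b))
    return (p_a - p_b) / se

# Hypothetical: 220 false alarms per 1,000 bags at the newer detection
# level versus 150 per 1,000 at the older level.
z = two_proportion_z(220, 1000, 150, 1000)
print(f"z = {z:.2f}")  # |z| above about 1.96 suggests a real difference,
                       # and thus a possible need for more alarm-resolution staff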
In October 2009, we reported that TSA deployed explosives trace portals, a technology for detecting traces of explosives on passengers at airport checkpoints, in January 2006 even though TSA officials were aware that tests conducted during 2004 and 2005 on earlier models of the portals suggested the portals did not demonstrate reliable performance in an airport environment. TSA also lacked assurance that the portals would meet functional requirements in airports within estimated costs, and the machines were more expensive to install and maintain than expected. In June 2006, TSA halted deployment of the explosives trace portals because of performance problems and high installation costs. We recommended that, to the extent feasible, TSA ensure that tests are completed before deploying checkpoint screening technologies to airports. DHS concurred with the recommendation and has taken action to address it, such as requiring more-recent technologies to complete both laboratory and operational tests prior to deployment. For example, TSA officials stated that, unlike the explosive trace portal, operational testing for the Advanced Imaging Technology (AIT) was successfully completed late in 2009 before its deployment was fully initiated. We are currently evaluating the testing conducted on AIT as part of an ongoing review. According to the National Infrastructure Protection Plan, security strategies should be informed by, among other things, a risk assessment that includes threat, vulnerability, and consequence assessments, information such as cost-benefit analyses to prioritize investments, and performance measures to assess the extent to which a strategy reduces or mitigates the risk of terrorist attacks. Our prior work has shown that cost-benefit analyses help congressional and agency decision makers assess and prioritize resource investments and consider potentially more cost-effective alternatives, and that without this ability, agencies are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls. For example, we have reported that TSA has not consistently included these analyses in its acquisition decision making. Specifically: In October 2009, we reported that TSA had not yet completed a cost-benefit analysis to prioritize and fund its technology investments for screening passengers at airport checkpoints. One reason that TSA had difficulty developing a cost-benefit analysis was that it had not yet developed life-cycle cost estimates for its various screening technologies. We reported that this information was important because it would help decision makers determine, given the cost of various technologies, which technology provided the greatest mitigation of risk for the resources that were available. We recommended that TSA develop a cost-benefit analysis. TSA agreed with this recommendation and has completed a life-cycle cost estimate, but has not yet completed a cost-benefit analysis. In March 2010, we reported that TSA had not conducted a cost-benefit analysis to guide the initial AIT deployment strategy. Such an analysis would help inform TSA's judgment about the optimal deployment strategy for the AITs, as well as provide information on the best path forward, considering all elements of the screening system, for addressing the vulnerability identified by the attempted December 25, 2009, terrorist attack. We recommended that TSA conduct a cost-benefit analysis.
TSA completed a cost-effectiveness analysis in June 2011 and provided it to us in August 2011. We are currently evaluating this analysis as part of our ongoing AIT review. Since DHS's inception in 2003, we have designated implementing and transforming DHS as high risk because DHS had to transform 22 agencies—several with major management challenges—into one department. This high-risk area includes challenges in strengthening DHS's management functions, including acquisitions; the effect of those challenges on DHS's mission implementation; and challenges in integrating management functions within and across the department and its components. Failure to effectively address DHS's management and mission risks could have serious consequences for U.S. national and economic security. In part because of the problems we have highlighted in DHS's acquisition process, implementing and transforming DHS has remained on our high-risk list. DHS currently has several plans and efforts underway to address the high-risk designation as well as the more specific challenges related to acquisition, technology development, and program implementation that we have previously identified. In June 2011, DHS reported to us that it is taking steps to strengthen its investment and acquisition management processes across the department by implementing a decision-making process at critical phases throughout the investment life cycle. For example, DHS reported that it plans to establish a new model for managing departmentwide investments across their life cycles. Under this plan, S&T would be involved in each phase of the investment life cycle and participate in new councils and boards DHS is planning to create to help ensure that test and evaluation methods are appropriately considered as part of DHS's overall research and development investment strategies. According to DHS, S&T will help ensure that new technologies are properly scoped, developed, and tested before being implemented. DHS also reports that it is working with components to improve the quality and accuracy of cost estimates and has increased its staff during fiscal year 2011 to develop independent cost estimates, a GAO best practice, to ensure the accuracy and credibility of program costs. DHS reports that four cost estimates for level 1 programs have been validated to date but has not explicitly identified whether any of the life-cycle cost estimates were for TSA programs. The actions DHS reports taking or has underway to address the management of its acquisitions and the development of new technologies are positive steps and, if implemented effectively, could help the department address many of these challenges. However, showing demonstrable progress in executing these plans is key. In the past, DHS has not effectively implemented its acquisition policies, in part because it lacked the oversight capacity necessary to manage its growing portfolio of major acquisition programs. Since DHS has only recently initiated these actions, it is too early to fully assess their effect on the challenges that we have identified in our past work. Going forward, we believe DHS will need to demonstrate measurable, sustainable progress in effectively implementing these actions. Chairman Rogers, Ranking Member Jackson Lee, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have.
For questions about this statement, please contact Steve Lord at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are David M. Bruno, Assistant Director; Robert Lowthian; Scott Behen; Ryan Consaul; Tom Lombardi; Bill Russell; Nate Tranquilli; and Julie Silvers. Key contributors for the previous work that this testimony is based on are listed within each individual product.
Aviation Security: TSA Has Made Progress, but Additional Efforts Are Needed to Improve Security. GAO-11-938T. Washington, D.C.: September 16, 2011.
Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011.
Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011.
Aviation Security: TSA Has Taken Actions to Improve Security, but Additional Efforts Remain. GAO-11-807T. Washington, D.C.: July 13, 2011.
Aviation Security: TSA Has Enhanced Its Explosives Detection Requirements for Checked Baggage, but Additional Screening Actions Are Needed. GAO-11-740. Washington, D.C.: July 11, 2011.
Homeland Security: Improvements in Managing Research and Development Could Help Reduce Inefficiencies and Costs. GAO-11-464T. Washington, D.C.: March 15, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.
Aviation Security: Progress Made but Actions Needed to Address Challenges in Meeting the Air Cargo Screening Mandate. GAO-10-880T. Washington, D.C.: June 30, 2010.
Aviation Security: TSA Is Increasing Procurement and Deployment of Advanced Imaging Technology, but Challenges to This Effort and Other Areas of Aviation Security Remain. GAO-10-484T. Washington, D.C.: March 17, 2010.
Aviation Security: DHS and TSA Have Researched, Developed, and Begun Deploying Passenger Checkpoint Screening Technologies, but Continue to Face Challenges. GAO-10-128. Washington, D.C.: October 7, 2009.
| Within the Department of Homeland Security (DHS), the Transportation Security Administration (TSA) is responsible for developing and acquiring new technologies to address homeland security needs. TSA's acquisition programs represent billions of dollars in life-cycle costs and support a wide range of aviation security missions and investments including technologies used to screen passengers, checked baggage, and air cargo, among others. GAO's testimony addresses three key challenges identified in past work: (1) developing technology program requirements, (2) overseeing and conducting testing of new technologies, and (3) incorporating information on costs and benefits in making technology acquisition decisions. This statement also addresses recent DHS efforts to strengthen its investment and acquisition processes.
This statement is based on reports and testimonies GAO issued from October 2009 through September 2011 related to TSA's efforts to manage, test, and deploy various technology programs. GAO's past work has found that TSA has faced challenges in developing technology program requirements on a systemic and individual basis. Program performance cannot be accurately assessed without valid baseline requirements established at the program start. In June 2010, GAO reported that over half of the 15 DHS programs (including 3 TSA programs) GAO reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, or establishing acquisition program baselines. At the program level, in July 2011, GAO reported that in 2010 TSA revised its explosive detection systems (EDS) requirements to better address current threats and plans to implement these requirements in a phased approach. However, GAO reported that some number of the EDSs in TSA's fleet are configured to detect explosives at the levels established in the 2005 requirements, and TSA did not have a plan with time frames needed to deploy EDSs to meet the current requirements. GAO has also reported on DHS and TSA challenges in overseeing and testing new technologies. For example, in July 2011, GAO reported that TSA experienced challenges in collecting data on the physical and chemical properties of certain explosives needed by vendors to develop EDS detection software and needed by TSA before procuring and deploying EDSs to airports. TSA and DHS's Science and Technology Directorate have experienced these challenges because of problems associated with safely handling and consistently formulating some explosives. The challenges related to data collection for certain explosives have resulted in problems carrying out the EDS procurement as planned. In addition, in October 2009, GAO reported that TSA deployed explosives trace portals, a technology for detecting traces of explosives on passengers at airport checkpoints, in January 2006 even though TSA officials were aware that tests conducted during 2004 and 2005 on earlier models of the portals suggested the portals did not demonstrate reliable performance in an airport environment. In June 2006, TSA halted deployment of the explosives trace portals because of performance problems and high installation costs. GAO's prior work has shown that cost-benefit analyses help congressional and agency decision makers assess and prioritize resource investments and consider potentially more cost-effective alternatives, and that without this ability, agencies are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls. GAO has reported that TSA has not consistently included these analyses in its acquisition decision making. In June 2011, DHS reported that it is taking steps to strengthen its investment and acquisition management processes by implementing a decision-making process at critical phases throughout the investment life cycle. The actions DHS reports taking to address the management of its acquisitions and the development of new technologies are positive steps and, if implemented effectively, could help the department address many of these challenges. GAO is not making any new recommendations.
In prior work, GAO made recommendations to address challenges related to deploying EDSs to meet requirements, overseeing and conducting testing of new technologies, and incorporating information on costs and benefits in making technology acquisition decisions. DHS and TSA concurred and described actions underway to address the recommendations.
As of September 30, 1996, DOD reported the value of its secondary inventory—consumable items and reparable parts—at $68.5 billion. Consumable items, such as clothing and medical supplies, are managed primarily by DLA. Reparable parts are generally expensive items that can be fixed and used again, such as hydraulic pumps, navigational computers, wing sections, and landing gear. Each military service manages the reparable parts used in its own operations. These management functions include determining how many parts will be needed to support operations, purchasing new parts, and deciding when broken parts need to be repaired. As shown in figure 1, aircraft reparable parts represent an estimated 59 percent of DOD's secondary inventory. To provide reparable parts for their aircraft, the military services use extensive logistics systems based on management processes, procedures, and concepts that have evolved over time but are now largely outdated. Each service's logistics system, often referred to as a logistics pipeline, consists of a number of activities that play a role in providing aircraft parts where and when they are needed. These activities include the purchase, storage, distribution, and repair of parts, which together require billions of dollars of investment in personnel, equipment, facilities, and inventory. In our recent reports on the Army, Navy, and Air Force logistics pipelines, we highlighted many of the problems and inefficiencies associated with the services' current logistics systems. Findings from these reports are summarized in appendix I. DOD must operate its logistics activities within the framework of various legislative provisions and regulatory requirements. Various legislative provisions govern the size, composition, and allocation of depot repair workloads between the public and private sectors. For example, the allocation of the depot maintenance workload between the public and private sectors is governed by 10 U.S.C. 2466. According to the statute, not more than 50 percent of the funds made available for depot-level maintenance and repair can be used to contract for performance by nonfederal government personnel. Other statutes that affect the extent to which depot-level workloads can be converted to private sector performance include (1) 10 U.S.C. 2469, which provides that DOD-performed depot maintenance and repair workloads valued at not less than $3 million cannot be changed to contractor performance without a public-private competition, and (2) 10 U.S.C. 2464, which provides that DOD activities should maintain a government-owned and operated logistics capability sufficient to ensure technical competence and resources necessary for an effective and timely response to a national defense emergency. Another provision that may affect future DOD logistics operations is 10 U.S.C. 2474, added to the United States Code by section 361 of the Fiscal Year 1998 National Defense Authorization Act. Section 2474 requires the Secretary of Defense to designate each depot-level activity as a Center of Industrial and Technical Excellence for certain functions. The act further requires the Secretary to establish a policy to encourage the military services to reengineer their depot repair processes and adopt best business practices.
According to section 2474, a military service may conduct a pilot program, consistent with applicable requirements of law, to test any practices that the military service determines could improve the efficiency and effectiveness of depot-level operations, improve the support provided by the depots for the end user, and enhance readiness by reducing the time needed to repair equipment. Further, efforts to outsource functions other than depot-level maintenance and repair must be accomplished in accordance with the requirements of Office of Management and Budget Circular A-76, various applicable provisions of chapter 146 of title 10 of the United States Code, and recurring provisions in the annual DOD Appropriations Act. In November 1997, the Secretary of Defense announced the Defense Reform Initiative, which seeks to reengineer DOD support activities and business practices by incorporating many business practices that private sector companies have used to become leaner, more agile, and highly successful. The initiative calls for adopting modern business practices to achieve world-class standards of performance in DOD operations. The Secretary of Defense stated that reforming DOD support activities is imperative to free up funds to help pay for high priorities, such as weapons modernization. We previously reported that several commercial airlines have cut costs and improved customer service by streamlining their logistics operations. The most successful improvements include using highly accurate information systems to track and control inventory; employing various methods to speed the flow of parts through the pipeline; shifting certain inventory tasks to suppliers; and having third parties handle parts repair, storage, and distribution functions. One airline, British Airways, has substantially improved its logistics operations over a 14-year period. British Airways approached the process of change as a long-term effort requiring steady vision and a focus on continual improvement. Although the airline has reaped significant gains from these improvements, it has continued to reexamine operations and refine its logistics system. Adopting practices similar to those of British Airways and other commercial airlines could help DOD's repair pipelines become faster and more responsive to customer needs. British Airways used a supply-chain management approach to reengineer its logistics system. With this approach, the various activities encompassed by the logistics pipeline were viewed as a series of interrelated processes rather than isolated functional areas. For example, when British Airways began changing the way parts were purchased from suppliers, it considered how those changes would affect mechanics in repair workshops. British Airways officials described how a combination of supply-chain improvements could lead to a continuous cycle of improvement. For example, culture changes, improved data accuracy, and more efficient processes all lead to a reduction in inventories and complexity of operations. These reductions, in turn, improve an organization's ability to maintain accurate data. The reductions also stimulate continued change in culture and processes, both of which fuel further reductions in inventory and complexity. Despite this integrated approach, British Airways' transformation did not follow a precise plan or occur in a rigid sequence of events. Rather, according to one manager, airline officials took the position that doing nothing was the worst option.
After setting overall goals, airline officials gave managers and employees the flexibility to continually test new ideas to meet those goals. Four specific practices used by British Airways and other airlines appear to be suited to DOD operations, to the extent they can be implemented within the existing legislative and regulatory framework: the (1) prompt repair of items, (2) reorganization of the repair process, (3) establishment of partnerships with key suppliers, and (4) use of third-party logistics services. These initiatives are interrelated and, when used together, can help a company make the most of its inventory investment, decrease inventory levels, and provide a more flexible repair capability. They appear to address many of the same problems DOD faces and represent practices that could be applied to its operations. We recommended in our reports that DOD test these concepts in an integrated manner to maximize their potential benefits. Certain airlines begin repairing items as quickly as possible, which prevents the broken items from sitting idle for extended periods. Minimizing idle time helps reduce inventories because it lessens the need for extra "cushions" of inventory to cover operations while parts are out of service. In addition, repairing items promptly promotes flexible scheduling and production practices, enabling maintenance operations to respond more quickly as repair needs arise. Prompt repair involves inducting parts into maintenance shops soon after broken items arrive at repair facilities. However, prompt repair does not mean that all parts are fixed. The goal is to quickly fix only those parts that are needed. One commercial airline routes broken items directly to holding areas next to repair shops, rather than to stand-alone warehouses, so that mechanics can quickly access these broken parts. The holding areas also give mechanics better visibility of any backlog. It is difficult to specifically quantify the benefits of repairing items promptly because that practice is often used with other ones to speed up pipeline processes. One airline official said, however, that the airline has kept inventory investment down partly because it does not allow broken parts to remain idle. One approach to accelerate the repair process and promote flexibility in the repair shop is the "cellular" concept. Under this concept, an airline moved all of the resources needed to repair broken parts, such as tooling and support equipment, personnel, and inventory, into one location or repair center "cell." This approach simplifies the repair of parts by eliminating the time-consuming exercise of routing parts to workshops in different locations. It also gives mechanics the technical support needed to keep operations running smoothly. In addition, because inventory is placed near workshops, mechanics have quick access to the parts they need to complete repairs more quickly. British Airways adopted the cellular approach after determining that parts could be repaired as much as 10 times faster using this concept. Figure 2 shows a repair cell used in British Airways' maintenance center at Heathrow Airport. Another airline that adopted this approach in its engine-blade repair shop was able to reduce repair time by 50 to 60 percent and decrease work-in-process inventory by 60 percent. Several airlines and manufacturers have worked with suppliers to improve parts support and reduce overall inventory.
Two approaches—the use of local distribution centers and integrated supplier programs—specifically seek to improve the management and distribution of consumable items, such as nuts, bolts, and fuses. These approaches help ensure that the consumable items for repair and manufacturing operations are readily available, which prevents repairs from stalling and helps speed up repair time. In addition, by improving management and distribution methods, such as streamlined ordering and fast deliveries, these approaches enable firms to delay the purchase of inventory until a point that is closer to the time it is needed. Firms, therefore, can reduce their stocks of "just-in-case" inventory. Local distribution centers are supplier-operated facilities that are established near a customer's operations and provide deliveries of parts within 24 hours. One airline that used this approach has worked with key suppliers to establish more than 30 centers near its major repair operations. These centers receive orders electronically and, in some cases, handle up to eight deliveries a day. Airline officials said that the ability to get parts quickly has contributed to repair time reductions. In addition, the officials said that the centers have helped the airline cut its on-hand supply of consumable items nearly in half. Figure 3 shows a local distribution center, located at Heathrow Airport, that is operated by the Boeing Company. Integrated supplier programs involve shifting inventory management functions to suppliers. Under this arrangement, a supplier is responsible for monitoring parts usage and determining how much inventory is needed to maintain a sufficient supply. The supplier's services are tailored to the customer's requirements and can include placing a supplier representative in customer facilities to monitor supply bins at end-user locations, place orders, manage receipts, and restock bins. Other services can include 24-hour order-to-delivery times, quality inspection, parts kits, establishment of data interchange links and inventory bar coding, and vendor selection management. One manufacturer that used an integrated supplier received parts within 24 hours of placing an order 98 percent of the time, which enabled the manufacturer to reduce inventories for these items by $7.4 million—an 84-percent reduction. Figure 4 illustrates how an integrated supplier could reduce or eliminate the need for at least three inventory storage locations in a typical DOD repair facility. Third-party logistics providers can be used to reduce costs and improve performance. Third-party firms take on responsibility for managing and carrying out certain logistics functions, such as storage and distribution. As a result, companies can reduce overhead costs because they no longer need to maintain personnel, facilities, and other resources that are required to do these functions in house. Third-party firms also help companies improve various aspects of their operations because these providers can offer expertise that companies often do not have the time or the resources to develop. For example, one airline contracts with a third-party logistics provider to handle deliveries and pickups from suppliers and repair vendors, which has improved the reliability and speed of deliveries and reduced overall administrative costs. The airline receives most items within 5 days, which includes time-consuming customs delays, and is able to deliver most items to repair vendors in 3 days.
In the past, deliveries took as long as 3 weeks. In addition, third-party providers can assume other functions. One third-party firm that we visited, for example, can assume warehousing and shipping responsibilities and provide rapid transportation to speed parts to end users. The company can also pick up any broken parts from a customer and deliver them to the source of repair within 48 hours. In addition, this company maintains the data associated with warehousing and in-transit activities, offering real-time visibility of assets. If DOD were to adopt a combination of best practices, similar to those employed by commercial airlines, the time items spend in the services’ repair pipelines could be substantially reduced. For example, the cellular concept enables a repair shop to respond more quickly to different repair needs. An integrated supplier can provide the consumable parts needed to complete repairs faster and more reliably. Both of these concepts are needed to establish an agile repair capability, which in turn enables a company to repair items more promptly. A much faster, more responsive repair pipeline would allow DOD to buy, store, and distribute significantly less inventory and improve customer service. For example, an Army-sponsored RAND study noted that reducing the repair time for one helicopter component from 90 to 15 days would reduce inventory requirements for that component from $60 million to $10 million. Figures 5 and 6 use the Army’s pipeline for reparable parts to illustrate the potential impact that the integrated use of best practices would have on DOD’s logistics system. Figure 5 illustrates the current repair pipeline at Corpus Christi Army Depot, including the average number of days it took to move the parts we examined through this pipeline and the flow of consumable parts into the repair depot. The consumable parts flow includes hardware inventory stored in DLA warehouses and repair depot inventory, which in 1996 totaled $5.7 billion and $46 million, respectively. Despite this investment in inventory, the supply system was completely filling customer orders only 25 percent of the time. Also, as of August 1996, mechanics had more than $40 million in parts on backorder, 34 percent of which was still unfilled after 3 months. In addition, reparable parts flowing through this system took an average of 525 days to complete the process. Figure 6 illustrates a modified Army system, incorporating the use of an integrated supplier for consumable items, third-party logistics services, parts induction soon after they arrive at the depot, and cellular repair shops. If the military services were to adopt these practices, they could substantially reduce the number of days for a part to flow through the repair pipeline and reduce or eliminate much of the inventory in DLA and repair depot storage locations. DOD’s application of concepts such as third-party logistics and integrated suppliers, however, may require a cost comparison between government and commercial providers in accordance with Office of Management and Budget Circular A-76. This circular generally requires that a public-private competition be held before contracting out functions, activities, and services that are being performed by more than 10 DOD employees. Our work has consistently shown that this process is cost-effective because competition generates savings—usually through a reduction in personnel—whether the competition is won by the government or the private sector.
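The RAND estimate cited above is consistent with a standard pipeline relationship, often called Little’s Law; we use it here only as an illustration, since the study’s own method may have differed. The inventory needed to cover a repair pipeline is approximately the demand rate multiplied by the time parts spend in the pipeline:

\[
\text{pipeline inventory} \approx \text{demand rate} \times \text{pipeline time}.
\]

Holding demand constant, cutting repair time from 90 days to 15 days (a factor of 6) cuts the pipeline inventory requirement by the same factor, which matches the reported drop from $60 million to $10 million.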
Each of the military services has programs underway to improve logistics operations and make its processes faster and more flexible. The Army established its Velocity Management program to eliminate unnecessary steps in the logistics pipeline that delay the flow of parts through the system. The Navy is using a regionalization concept to reduce redundant capabilities in supply and maintenance and is testing a direct delivery concept for a few component parts. The Air Force established its Lean Logistics initiative to dramatically improve logistics processes. Although these initiatives have been underway for several years, the results are limited, and the overall success of these programs is uncertain. In January 1995, the Army established its Velocity Management program to develop a faster, more flexible, and more efficient logistics pipeline. The program’s goals, concepts, and top management support parallel improvement efforts found in private sector companies. The overall goal of the program is to eliminate unnecessary steps in the logistics pipeline that delay the flow of parts through the system. The Army plans to achieve this goal in much the same way as the private sector: by changing its processes rather than refining the existing system. The Army’s Vice Chief of Staff has strongly endorsed the program as a vehicle for making dramatic improvements to the current logistics system. In anticipation of these improvements, the Army has reduced its operating budgets for fiscal years 1998 through 2003 by $156.5 million. The Velocity Management program consists of Army-wide process improvement teams for the following four areas: ordering and shipping of parts, the repair cycle, inventory levels and locations (also known as stockage determination), and financial management. For each of these areas, the Army is examining its current processes and attempting to identify ways to improve them. The Army’s implementation strategy for these improvement areas includes three phases: defining the process, measuring process performance, and improving the process. As shown in table 1, the four improvement areas are in various implementation phases. The ordering and shipping improvement area is in phase 3 and is the farthest along in the implementation process. In this area, the Army has reduced the time it takes to order and deliver parts to a customer located in the United States from approximately 22 days to 11 days, a 50-percent reduction. According to Army officials, this improvement was achieved by automating the ordering process and dedicating delivery trucks to servicing a single customer. The Army plans to continue work on other functions in this area, such as the receiving process. The stockage determination and repair cycle initiatives are both in phase 2. According to Army officials, these improvement areas have not advanced as quickly as planned due to difficulties in obtaining reliable data to measure the current processes. Also, Army officials have not yet determined precisely what metrics to use for measuring future improvements. The financial management area, the last initiative to be started, is currently in phase 1. The Navy has three major improvement efforts underway that are aimed at reducing infrastructure costs and streamlining operations. The first initiative, called regional supply, consolidates decentralized supply management functions into seven regionally based activities.
Under the old system, naval bases, aviation repair depots, and shipyards each had supply organizations to manage needed parts. These activities often used different information systems and business practices, as well as their own personnel and facilities. The initiative does not consolidate inventories into fewer storage locations; rather, it is intended to provide central management of spare parts for these individual operations, improve parts visibility, and reduce the overhead expenses associated with separate management functions. The Navy hopes that the centralized management approach will lead to better sharing of parts among locations and reductions in inventories. In fiscal year 1997, the Navy reported inventory reductions of $4.9 million through its regional supply program, and it expects to reduce inventories by an additional $24 million in fiscal year 1998. The Navy expects that 90 percent of the supply management consolidations will be completed by the end of fiscal year 1998. The second initiative, called regional maintenance, similarly identifies redundant maintenance capabilities and consolidates these operations into regionally based repair facilities. For example, in one region the Navy is consolidating 32 locations used to calibrate maintenance test equipment into 4 locations. The regional maintenance program is mainly focused on reducing infrastructure costs, but its other objectives include improving maintenance processes, integrating supply support and maintenance functions, and providing compatible information systems. Through fiscal year 1996, the Navy identified a total of 102 regional maintenance initiatives: 55 were started in fiscal year 1997, and 47 are to be implemented between fiscal years 1998 and 2001. The Navy estimates that its regional maintenance efforts will save $944 million between fiscal years 1994 and 2001. We recently reported that, although the Navy has made progress in achieving its infrastructure streamlining objective under regional maintenance, the progress thus far has not been as great as anticipated, and challenges remain in accomplishing future plans. Full implementation, initially projected for fiscal year 1999, is now projected for fiscal year 2000 and could take longer. Many of the initiatives identified have not been completed, and projected savings are not being achieved. For example, one initiative to consolidate planning and engineering functions for certain repairs is not progressing as planned, delaying planned personnel reductions and affecting up to $92 million in savings projected to occur between fiscal years 1998 and 2001. The Navy has classified many of its initiatives as high risk because of barriers to implementation, including institutional resistance to change, inadequate information systems, and poor visibility over maintenance-related costs. The Navy’s third initiative, called direct vendor delivery, is a logistics support technique intended to reduce the costs of the inventory management and distribution functions. Under this initiative, a contractor (typically an original equipment manufacturer) will be responsible for repairing, storing, and distributing weapon system components. The contractor agrees to meet certain delivery time frames and supply availability rates for the components. When a component fails at an operating location, it is sent directly to the contractor rather than to a Navy repair facility. The contractor in turn ships a replacement part back to the operating location.
If a future demand for the item is anticipated, the contractor fixes the broken component so it can be used again. According to the Navy, the direct vendor delivery concept will motivate the contractor to increase the reliability of the component so it needs to be repaired less frequently, which may reduce the component’s life-cycle costs. The direct vendor delivery concept is in the early stages of development. As of January 1998, the Navy had placed only 3 subsystems, consisting of 96 components, under contract. The value of these three contracts represents about 1 percent of the Navy’s fiscal year 1998 purchase and repair budget. The Navy plans, however, to apply this concept to additional weapon system components in the future. In 1994, the Air Force initiated a reengineering effort called Lean Logistics to dramatically improve logistics processes. The Air Force describes Lean Logistics as the cornerstone of all future logistics system improvements. This effort, spearheaded by the Air Force Materiel Command, is aimed at improving service to the end user while reducing pipeline time, excess inventory, and other logistics costs. The Air Force expects to save $948 million in supply costs between fiscal years 1997 and 1999 as a result of Lean Logistics initiatives. Under Lean Logistics, the Air Force developed a program to redesign the current repair pipeline. In June 1996, the Air Force began testing certain concepts at 10 repair shops; the tests involved less than 1 percent of the Air Force’s inventory items. The concepts include repairing items quickly after they break, using premium transportation to rapidly move parts, organizing support (supply and repair) personnel into teams, and deploying new information systems to better prioritize repair actions and track parts. Each shop tested some of these concepts and identified system improvements needed to adopt these practices on a broader scale. As part of its demonstration projects, the Air Force tracked overall performance in four general areas: customer impact, responsiveness to the customer, repair depot efficiency, and operating costs. According to an October 1997 cost-benefit analysis of these projects, the tests were not a complete success. For example, 70 percent of the shops showed improvement in depot repair efficiency, but only 10 percent showed improvement in responsiveness to the customer. Also, three of the four performance areas showed mixed results for 50 percent or more of the shops. According to the Air Force analysis, the concepts may need to be reevaluated and refined before full implementation can achieve the desired improvements in customer service and operating costs. Table 2 shows the impact of the demonstration projects on the four performance areas. Notwithstanding the results of the demonstration projects, the Air Force began expanding these concepts servicewide in April 1997 and plans to complete this effort by the spring of 1998. According to the Air Force, the concepts will be refined as implementation continues. The military services’ current improvement efforts could be expanded to include a wider application of the best practices discussed in this report. In addition, the services have not established specific locations where a combination of several practices could be tested to achieve maximum benefits. These expanded efforts would be consistent with recent legislative provisions and the Defense Reform Initiative, which encourage the adoption of best business practices.
However, a wider application of best practices by DOD must be accomplished within the current legislative framework and regulatory requirements. Our previous reports recommended the testing and implementation of best practices, specifically the prompt repair of items, cellular repair, supplier partnerships, and third-party logistics, as well as an integrated test of these practices. The Navy and the Air Force have initiated programs to adopt certain forms of supplier partnerships, and the Air Force is pursuing the prompt repair of items throughout its operations. Table 3 summarizes the status of the services’ efforts in implementing best practices. As part of its Lean Logistics program, the Air Force has adopted the concept of prompt repair of items to help speed the flow of parts through the repair process. In February 1997, the Air Force also began using a prime vendor program to support the C-130 propeller repair shop at the Warner Robins Air Logistics Center. In fiscal year 1998, the Air Force plans to expand the prime vendor program at Warner Robins and begin programs at two other Air Force repair depots. The Navy plans to test the prime vendor concept at two depots during 1998. As of April 1997, the Army was using the cellular repair concept at two maintenance shops at the Corpus Christi Army Depot. The Army, however, has not initiated any additional tests of the practices recommended in our reports at the Corpus Christi depot. Finally, none of the services has developed a plan to combine these new practices at one facility. In commenting on a draft of this report, DOD highlighted additional initiatives that it believes demonstrate the use of best commercial practices. For example, the Army is pursuing an initiative to rapidly repair 20 different circuit cards at two Army depots and return the cards using premium transportation. The Army plans to expand this concept later this year to engine components. DOD also highlighted Navy efforts to reduce the administrative lead times involved in repairing maritime parts and to have a third-party provider build repair kits for hydraulic parts. In addition, DOD cited an Air Force initiative related to contractor support for certain C-17 aircraft parts. Under this arrangement, the contractor is responsible for interim contractor support, depot repair, materiel and program management, and system modifications. Section 395 of the National Defense Authorization Act for Fiscal Year 1998 requires the Director of DLA to develop and submit to Congress a schedule for implementing best practices for the acquisition and distribution of the categories of consumable-type supplies and equipment listed in the section. However, each military service manages the reparable parts used in its operations; DLA stores and distributes these parts and manages consumable items. Each service and DLA, therefore, would be responsible for developing and implementing a strategy to adopt best practices for the items it manages if section 395 were broadened to include reparable parts. Our work shows it is feasible for the list of items covered by section 395 to be expanded to include reparable parts. For example, each of the services, as well as DLA, has initiatives underway designed to improve its logistics operations by adopting best practices. Our reports identify additional best practices that present opportunities for DOD to build on these improvement efforts.
However, if section 395 were expanded, the responsibility for developing and submitting a schedule to implement these practices would go beyond the purview of the Director of DLA. Thus, expanding the list of items covered by the provisions of section 395 would also appear to warrant broadening the responsibility for responding to the legislation to include the military services. Our previous reports recommended that DOD test and adopt best practices where feasible; therefore, we are not repeating those recommendations in this report. However, testing a combination of several key best practices is an option that DOD has yet to explore as it considers the extent to which successful techniques used in the private sector could be applied to its logistics operations. This action would be consistent with the recently enacted Centers of Industrial and Technical Excellence legislation and the Defense Reform Initiative. This wider application of best practices by DOD must be accomplished within the framework of existing legislative and regulatory requirements. If Congress decides it wants to expand the provisions of section 395 to include reparable parts, it may wish to consider (1) broadening the responsibility for responding to this legislation to include the military services and (2) developing provisions, similar to those in section 395, to encourage DOD to test combinations of best practices using a supply-chain management approach. In written comments on a draft of this report, DOD agreed that further progress is possible in using best practices for reparable parts. However, DOD expressed concerns in two areas. First, DOD believed that our draft report did not include all ongoing initiatives by the military services to adopt best business practices in the management of reparable parts. Second, DOD did not agree with our Matters for Congressional Consideration that the Congress may wish to consider developing statutory guidance related to best practices for reparable parts. DOD believed that, because of its actions underway, statutory guidance is not needed. DOD’s comments appear in appendix II. We incorporated several of the examples DOD provided into our report. However, some of these initiatives, particularly the newly awarded contract for C-17 aircraft support, involve integrated supplier support and third-party logistics predominantly on the part of the contractor. Our past work and this report have been concerned with efforts to improve the existing in-house repair pipeline through the use of proven best practices adopted in the private sector, especially for aircraft parts, once the decision has been made to keep the repair function at public facilities. The C-17 contract represents a different arrangement, and we are not in a position to comment on the merits of that approach. With regard to the Matters for Congressional Consideration, our intent is to highlight two actions that we believe may be useful to Congress if it decides to expand section 395 to include reparable parts. Therefore, we modified this section to clarify our intent. We used information from our three prior reports that compared Army, Navy, and Air Force logistics practices to those of commercial airlines. For these reports, we examined operations at 20 DOD locations involved in the logistics pipeline.
At these locations, we discussed with supply and maintenance personnel the operations of DOD’s current logistics system, customer satisfaction, planned improvements to the logistics system, and the potential application of private sector practices to DOD operations. We also reviewed and analyzed detailed information on inventory levels and usage, repair times, supply effectiveness and response times, and other related logistics performance measures. Unless otherwise noted, inventory values reflect DOD’s standard valuation methodology, in which excess inventory is reported at an estimated salvage value and the value of reparable parts requiring repair is reduced by an average estimate of repair costs. We also used information from our reports to identify leading commercial practices. This information was collected through an extensive literature search and through detailed examinations and discussions of logistics practices with officials from British Airways, United Airlines, Southwest Airlines, American Airlines, Federal Express, Boeing, Northrop-Grumman Corporation, and Tri-Star Aerospace. We also participated in roundtable discussions and symposiums with recognized leaders in the logistics field to obtain information on how companies are applying integrated approaches to their logistics operations. We reviewed documents and interviewed officials on DOD’s policies, practices, and efforts to improve its logistics operations. We contacted officials at the Office of the Deputy Under Secretary of Defense for Logistics, Washington, D.C.; Army Headquarters, Washington, D.C.; Army Materiel Command, Alexandria, Virginia; Naval Supply Systems Command, Mechanicsburg, Pennsylvania; Naval Inventory Control Point, Mechanicsburg, Pennsylvania; Air Force Headquarters, Washington, D.C.; and Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio. Officials at these locations also provided us with detailed information on their efforts to adopt the specific best practices we recommended in prior reports. We conducted our review from December 1997 to January 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Directors of the Defense Logistics Agency and the Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. The Department of Defense’s (DOD) depot repair pipelines for reparable parts are slow and inefficient. Since February 1996, we have issued three reports that compared commercial logistics practices with similar Army, Navy, and Air Force operations for reparable aircraft parts. In these reports, we highlighted four factors that contributed to the services’ slow and inefficient repair pipelines: (1) broken reparable parts move slowly between field units and repair depots, (2) reparable parts are stored in warehouses for several months before and after they are repaired, (3) work processes at repair depots are inefficiently organized, and (4) consumable parts frequently are not available to mechanics when needed. As a result, the services can take several months or even years to repair parts and distribute them to end users.
The amount of time it takes to repair parts is important because DOD must invest in enough inventory to resupply units with serviceable parts during the time it takes to move and repair broken parts. In April 1997, we reported that the Army’s current repair pipeline, characterized by a $2.6-billion investment in aviation parts, is slow and inefficient. To calculate the amount of time the Army system takes to repair and distribute parts using the current depot repair process, we judgmentally selected 24 types of Army aviation parts and computed the time the parts spent in four key segments of the repair process. The key segments were (1) preparing and shipping the parts from the bases to the depot, (2) storing the parts at the depot before induction into the repair shop, (3) repairing the parts, and (4) storing the parts at the depot before they are shipped to a field unit. The parts we selected took an average of 525 days to complete the repair process. The fastest time the Army took to complete any of the four pipeline segments was less than 1 day, but the slowest times ranged from 887 to more than 1,000 days. Table I.1 details the fastest, slowest, and average times the Army needed to complete each of the four pipeline segments. Comparing the Army’s engineering estimate of the time that should be needed to complete repairs with the actual time taken is one measure of repair process efficiency. Of the 525-day average pipeline time from our sample, the Army estimates that an average of only 18 days should be needed to repair items. The remaining 507 days, or 97 percent of the total time, were spent transporting or storing parts or were lost to unplanned repair delays. Another measure of repair process efficiency is how often an organization uses its inventory, called the turnover rate. The higher the turnover rate, the more often a company is using its inventory. At British Airways, the inventory turnover rate for reparable parts was 2.3 times each year. In comparison, we calculated that the Army’s turnover rate for fiscal year 1995 repairs was 0.4 times, roughly one-sixth of British Airways’ rate. In July 1996, we reported that the Navy’s system, characterized by a $10 billion inventory of reparable parts, is slow and complex and often does not respond quickly to customer needs. For example, customers wait an average of 16 days at operating bases and 32 days on aircraft carriers to receive parts from the wholesale system. If the wholesale system does not have the item in stock, customers wait over 2-1/2 months. Many factors contribute to this situation, but among the most prominent is a slow and complex repair pipeline. Within this pipeline, broken parts can pass through as many as 16 steps, taking as long as 4 months, before they are repaired at a repair depot and are available again for use. Specific problems that prevent parts from flowing quickly through the pipeline include a lack of the consumable parts needed to complete repairs, slow distribution, and inefficient repair practices. For example, the Navy’s practice of routing parts through several workshops at repair depots increases the time needed to complete repairs. One item we examined had a repair time of 232 hours, only 20 of which were spent actually repairing the item. The remaining 212 hours, or 91 percent of the total time, were spent handling and moving the part to different locations.
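To illustrate how such turnover comparisons are computed (a sketch based on the standard definition of inventory turnover; the precise inputs behind GAO’s and British Airways’ figures are not reproduced here), the turnover rate is annual usage divided by the average value of inventory on hand:

\[
\text{turnover rate} = \frac{\text{annual value of parts repaired and issued}}{\text{average value of inventory on hand}},
\qquad
\frac{2.3}{0.4} \approx 5.8.
\]

By this measure, British Airways cycled its reparable inventory nearly six times for every one Army cycle. A similar calculation underlies the British Airways and Navy comparison that follows.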
In contrast, leading firms in the airline industry, including British Airways, hold minimal levels of inventory that can turn over four times as often as the Navy’s. Parts are more readily available and are delivered to the customer within hours. The repair process is faster, taking an average of 11 days for certain items at British Airways compared with the Navy’s 37-day process for a similar type of part. Table I.2 compares several key logistics performance measures of British Airways (1994) and the Navy (1995). In February 1996, we reported that the Air Force had invested about $36.7 billion in aircraft parts. Of this amount, the Air Force estimated $20.4 billion, or 56 percent, was needed to support daily operations and war reserves; the remaining $16.3 billion was divided among safety stock, other reserves, and excess inventory. These large inventory levels were driven in part by the slow logistics pipeline process. For example, one part we examined had an estimated repair cycle time of 117 days; it took British Airways only 12 days to repair a similar part. We reported that the complexity of the Air Force’s repair and distribution process creates as many as 12 different stopping points and several layers of inventory as parts move through the process. Parts can accumulate at each step in the process, which increases the total number of parts in the pipeline. Figure I.1 compares the Air Force’s pipeline times with British Airways’ times for a landing gear component. Major contributors to this report were C. I. (Bud) Patton, Jr., and Kenneth R. Knouse, Jr.

Related GAO products:
Defense Inventory Management: Expanding Use of Best Practices for Hardware Items Can Reduce Logistics Costs (GAO/NSIAD-98-47, Jan. 20, 1998).
Inventory Management: Greater Use of Best Practices Could Reduce DOD’s Logistics Costs (GAO/T-NSIAD-97-214, July 24, 1997).
Inventory Management: The Army Could Reduce Logistics Costs for Aviation Parts by Adopting Best Practices (GAO/NSIAD-97-82, Apr. 15, 1997).
Defense Inventory Management: Problems, Progress, and Additional Actions Needed (GAO/T-NSIAD-97-109, Mar. 20, 1997).
Inventory Management: Adopting Best Practices Could Enhance Navy Efforts to Achieve Efficiencies and Savings (GAO/NSIAD-96-156, July 12, 1996).
Best Management Practices: Reengineering the Air Force’s Logistics System Can Yield Substantial Savings (GAO/NSIAD-96-5, Feb. 21, 1996).
Inventory Management: DOD Can Build on Progress in Using Best Practices to Achieve Substantial Savings (GAO/NSIAD-95-142, Aug. 4, 1995).
Commercial Practices: DOD Could Reduce Electronics Inventories by Using Private Sector Techniques (GAO/NSIAD-94-110, June 29, 1994).
Commercial Practices: Leading-Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994).
Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155, June 7, 1993).
DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110, June 4, 1993).
DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58, Dec. 5, 1991).
Commercial Practices: Opportunities Exist to Reduce Aircraft Engine Support Costs (GAO/NSIAD-91-240, June 28, 1991).
Pursuant to a legislative requirement, GAO reported on the feasibility of adding reparable parts to the list of consumable-type supplies and equipment covered by Section 395 of the National Defense Authorization Act of 1998, focusing on: (1) private-sector practices that streamline logistics operations; (2) Department of Defense (DOD) initiatives to improve its logistics systems; and (3) best practices that can be used to improve the military services’ aircraft reparable parts pipeline. GAO noted that: (1) it is feasible for the list of items covered by section 395 to be expanded to include reparable parts; (2) in fact, all of the services and the Defense Logistics Agency (DLA) have initiatives under way designed to improve their logistics operations by adopting best practices; (3) however, if section 395 were expanded to include reparable parts, the responsibility for the development and submission of a schedule to implement best practices would also have to be expanded to include the military services, since responsibility for service-managed reparable parts is beyond the purview of the Director of DLA; (4) private-sector companies have developed new business strategies and practices that have cut costs and improved customer service by streamlining logistics operations; (5) the most successful improvement efforts included a combination of practices that are focused on improving the entire logistics pipeline--an approach known as supply-chain management; (6) the combination of practices that GAO has observed includes the use of highly accurate information systems, various methods to speed the flow of parts through the pipeline, and the shifting of certain logistics functions to suppliers and third parties; (7) DOD recognizes that it needs to make substantial improvements to its logistics systems; (8) the Army’s Velocity Management program, the Navy’s regionalization and direct delivery programs, and the Air Force’s Lean Logistics initiative are designed to improve logistics operations and make logistics processes faster and more flexible; (9) although these initiatives have achieved some limited success, significant opportunities for improvement remain; (10) GAO’s work indicates that best practices developed by private-sector companies are compatible with DOD improvement initiatives; and (11) however, GAO recognizes the use of these best practices must be accomplished within the existing legislative framework and regulatory requirements relating to defense logistics activities, such as the Office of Management and Budget Circular A-76.
Our analysis of Army and DFAS data through the end of fiscal year 2005 identified nearly 1,300 separated battle-injured soldiers and soldiers who were killed in combat who had military debts totaling $1.5 million that were reported to DFAS for debt collection action. Of the nearly 1,300 soldiers, almost 900 separated battle-injured soldiers had debts totaling about $1.2 million, and about 400 soldiers who died in combat had debts totaling over $300,000. The actual number of separated battle-injured soldiers and fallen soldiers who owed military debts may be greater due to incomplete and inaccurate reporting of some information to the WIA databases. Overpayment of pay and allowances (entitlements), pay calculation errors, and erroneous leave payments caused 73 percent of the reported debts. The remaining debts related to requirements to repay portions of enlistment bonuses and training due to early separation and/or failure to fulfill requirements; unpaid expenses for medical services, household moves, insurance premiums, and travel advances; and lost military equipment. Because the Army lacks a centralized automated system that integrates payroll, personnel, and medical data on its soldiers, the Army and DFAS formed a Wounded in Action Support Team and created WIA databases that include soldier personnel, payroll, and medical information drawn from weekly data calls to five separate Army systems. The Army and DFAS are using ad hoc work-around processes to research, verify, and correct incomplete and inaccurate data. These labor-intensive, manual procedures are necessary because of continuing, uncorrected weaknesses in Army personnel and payroll systems and the growing number of battle-injured soldiers whose pay accounts need to be researched and verified to determine whether overpayments or other problems have resulted in debt. As a policy, DFAS does not pursue collection of the debts of fallen soldiers. However, DFAS officials told us that military debt may be satisfied from the final pay and allowances of fallen soldiers and that DFAS may pursue collection of the debts of other deceased soldiers. During the past 2 fiscal years, the Army pursued hundreds of battle-injured soldiers for collection of their military debts after they left the service. Collection action begins with monthly debt notification letters and escalates to credit bureau reporting and private collection agency and TOP action when there is no response or debts are not paid. At the time we initiated our audit in June 2005, the Army was taking collection action on active debts of over 300 battle-injured soldiers. Our initial analysis of Army and DFAS data as of June 30, 2005, identified 331 battle-injured soldiers whose military service debts were undergoing collection action, including at least 74 soldiers whose military debts had been reported to credit bureaus and referred to private collection agencies and TOP. However, in response to our audit, Army and DFAS officials told us that they had suspended collection action on these soldiers’ debts and recalled their reports to credit bureaus and their referrals to the Department of the Treasury for private collection agency and TOP collection action until a determination could be made as to whether these soldiers’ debts were eligible for relief. We independently confirmed the recall of credit bureau reporting and Treasury referrals with those entities.
DFAS records as of September 30, 2005, showed that of the $1.5 million in military service debts incurred by the nearly 1,300 battle-injured and fallen soldiers identified in our analysis, debts totaling almost $959,000 were written off, waived, or cancelled, including debts of fallen soldiers; debts totaling about $124,000 were paid; and debts totaling $420,000 remained open. In addition, at the end of our audit, the Army and DFAS advised us that waivers had been approved for the active debts of 202 of the 331 separated battle-injured soldiers whose debts were being pursued for collection when we initiated our audit in June 2005. While many soldiers had only one or two debts, other soldiers had three or more. The nearly 1,300 separated battle-injured soldiers and fallen soldiers identified in our analysis had a total of 2,324 debts. Debts for these soldiers grew from 404 debts totaling $128,230 at the end of fiscal year 2002 to 2,324 debts totaling over $1.5 million at the end of fiscal year 2005. As shown in table 1, the number of debts generally increased each fiscal year as more soldiers were deployed and Army payroll problems remained unresolved. More than 40 percent of these soldier debts, totaling over half of the $1.5 million, were incurred during fiscal year 2005. Previously, we reported that most soldier payroll problems related to Army National Guard and Army Reserve soldiers. Our analysis of the military service debts of the nearly 1,300 separated Army battle-injured soldiers and fallen soldiers showed that for the first 4 years of the GWOT deployment, 661 of these soldiers (51 percent) were active component Army soldiers, 346 (about 27 percent) were Army National Guard soldiers, and 248 (about 19 percent) were Army Reserve soldiers. The field units that reported debts for the remaining 35 Army soldiers (about 3 percent) did not identify these soldiers by component. Table 2 shows the relative number and amount of debts by component. Because Congress passed legislation that permitted the Secretary of Defense to cancel up to $2,500 in individual soldier debt during Desert Shield/Desert Storm, your offices asked us to determine the dollar amount of debts of separated battle-injured and fallen soldiers by incremental thresholds. Our analysis of the amounts of debt reported for separated battle-injured soldiers and fallen soldiers who served in OIF and OEF during fiscal years 2002 through 2005 showed that about 82 percent of these soldiers had debts that totaled $1,500 or less, and the vast majority, about 90 percent, had debts that totaled $2,500 or less. In making this comparison, we recognize that it is appropriate for debt relief to be adjudicated prudently, in consideration of individual circumstances. Table 3 shows the stratification of battle-injured and fallen soldier debt in $500 increments up to $3,500 and total amounts over $3,500. Ninety soldiers had debts that totaled more than $3,500, including original soldier debts that ranged from $3,528 to $34,124. Sixty-seven of these soldiers had debts that totaled less than $10,000, 16 soldiers had debts totaling between $10,000 and $20,000, and 7 soldiers had debts that totaled more than $20,000.
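The stratification shown in table 3 amounts to assigning each soldier’s total debt to a $500 band, with an open-ended band above $3,500. The short sketch below is ours and purely illustrative; it does not reproduce GAO’s actual analysis tools, and the debt amounts shown are hypothetical:

```python
import math
from collections import Counter

def band(total_debt):
    """Assign one soldier's total debt to a $500 band, open-ended above $3,500."""
    if total_debt > 3500:
        return "over $3,500"
    upper = max(500, math.ceil(total_debt / 500) * 500)  # round up to the next $500
    return f"${upper - 499:,}-${upper:,}"

# Hypothetical per-soldier debt totals, in dollars.
totals = [125.00, 480.50, 1500.00, 2600.00, 4200.00, 34124.00]
print(Counter(band(t) for t in totals))
# Counter({'$1-$500': 2, 'over $3,500': 2, '$1,001-$1,500': 1, '$2,501-$3,000': 1})
```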
Consistent with our case studies, which are discussed in the next section, DOD data showed that most of the debts of the nearly 1,300 soldiers who were injured or killed in combat related to errors in pay calculations, overpayment of combat pay entitlements, and erroneous payments for unused leave. As illustrated in figure 1, Army and DFAS data showed that 73 percent of the debts for the nearly 1,300 separated battle-injured soldiers and fallen soldiers during fiscal years 2002 through 2005 related to errors in pay calculations, entitlement errors, and erroneous leave payments. The remaining 27 percent of these soldiers’ debts related to repayment of enlistment bonuses (11 percent), where soldiers did not complete the required term of service or improperly received more than one bonus; payments for tuition and training (6 percent), where soldiers did not complete their training or did not fulfill service requirements related to their training; and other expenses (8 percent) related to unpaid bills for medical services, housing and household moves, insurance premiums, travel advances, and loss or damage of government property. The reasons for the remaining debt (2 percent) were not recorded in DDMS. According to DFAS officials, although unit commanders and finance offices are authorized to write off debts for lost and damaged equipment when soldiers who were injured or killed by hostile fire are medically evacuated from the theater of operation, they have not always done so. In addition, because Army units and medical facilities have not always prepared or processed changes in orders when a soldier’s duty status changed, soldiers do not always have the documentation required to submit a voucher for travel reimbursement. Because the travel system is not integrated with the payroll and debt management systems, neither DFAS nor the Army could tell us the amount of soldier debt that could potentially be offset by travel reimbursements owed to soldiers. The new WIA Support Team’s standard operating procedures for soldier pay account review require identification and processing of all soldier travel claims. Debt collection actions have caused a variety of problems for separated GWOT battle-injured soldiers. When these soldiers leave the Army, they generally do not have jobs, and many of them face continuing medical treatment for battle injuries, making it difficult to hold a job. If these soldiers have military debt that has been identified, their final separation pay may be offset to cover the debt, and they may leave the service with no funds to pay immediate expenses. Due to the lack of income, 16 of the 19 soldiers we interviewed told us that they had difficulty paying for basic household expenses. In addition, 3 soldiers told us that they were erroneously identified as AWOL by their units while they were actually in the hospital or receiving outpatient care for their war injuries. The AWOL status for at least 2 of these soldiers created debt because it appeared that the soldiers received pay when they were not in duty status. At the time these soldiers were listed as AWOL by their Army units, they were actually receiving medical treatment. One soldier was receiving outpatient therapy for her knee injury under the care and direction of an Air Force physician based on an Army medical referral, and the other soldier was in a military hospital at Fort Campbell.
Debt-related experiences of the 19 separated battle-injured soldiers who contacted us included the following. Sixteen soldiers had their military debts reported to credit bureaus, 9 soldiers had debts turned over to private collection agencies, and 8 soldiers had their income tax refunds withheld under TOP. Sixteen could not pay their basic household expenses. Four soldiers were unable to obtain loans to purchase a home, meet other needs, or obtain VA educational benefits due to service-related debt on their credit reports. At least 8 soldiers were owed travel reimbursements at the same time they were being pursued for collection of their service-related debts. The Army’s failure to record separation paperwork in the pay system, along with other payment errors, resulted in over $12,000 of debt for one severely battle-injured soldier. Although the soldier’s family expected that he would receive retirement pay when his Army pay stopped upon his separation, the soldier had no income for several months while the Army attempted to recover his military debt. As a result, his family was unable to pay household bills, the utilities were shut off, and the soldier’s dependent daughter was sent out of state to live with relatives. In addition, although the soldier had been receiving treatment at an Army medical center and a VA polytrauma center over a 5-month period, the day the soldier was released to go home, his Army unit called his wife to ask why he was not reporting for duty—an indication that his Army unit had considered him to be AWOL. Table 4 illustrates examples of the effects of debt collection actions on selected separated Army battle-injured soldiers and their families based on our case studies. Five soldiers and family members told us that they had contacted their unit finance offices multiple times for assistance in resolving their pay and debt problems. However, the soldiers said that finance personnel either did not get back to them as promised or said they could not help them with their problems. DFAS and Army officials we spoke with acknowledged that finance office personnel at some locations lacked the knowledge needed to accurately input transactions to soldier pay accounts. DFAS officials told us they recently initiated actions to train finance office personnel at several locations. Debts imposed the greatest hardship on battle-injured soldiers, who have had to endure financial problems while coping with the physical limitations caused by their injuries. The following case summaries provide additional details of selected soldiers’ debt experiences. The first soldier, case study #1, battled for 1-1/2 years after he separated from the service to resolve his debts and obtain a reimbursement for travel expenses incurred during his deployment.

Soldier Engages in 1-1/2 Year Battle to Resolve Debts

An Army Reserve Staff Sergeant who lost his leg in a roadside bomb explosion near the town of Ramadi, Iraq, on July 14, 2003, found himself involved in a lengthy effort to resolve pay-related debts after he separated from the Army in August 2004. The Sergeant’s Army debt was the only unpaid debt on his credit report. The first problem occurred in August 2004, when the Army failed to terminate the soldier’s active duty pay after he separated from military service, resulting in an overpayment of $2,728. Because the soldier was still owed his final separation pay of $2,230, this amount was used to offset his debt, reducing it to $498.
The Army also incorrectly billed the soldier for several months of Servicemen’s Group Life Insurance (SGLI) premiums, which should have ceased when the soldier left the service. In attempting to correct the SGLI billings, a Fort Belvoir finance clerk who did not know how to handle the transaction mistakenly reactivated the soldier’s account in the pay system. As a result, the system generated an erroneous pay check to the soldier totaling $1,733, increasing his debt to $2,231. According to the soldier, around the same time, in January 2005, an Army headquarters official contacted him to say his debt had been resolved, leading the soldier to believe that the $1,733 payment was the result of his pay audit and possibly included his unpaid travel reimbursement. Shortly thereafter, however, the soldier began receiving debt collection letters from DFAS for the $2,231 debt, which also appeared on his credit report. The soldier appealed this debt and requested a waiver but was turned down due to a ruling that he should have known he was not entitled to another pay check once he had been out of the service for 4 months. Because lenders view unpaid federal debts as a significant problem, the soldier and his wife decided to forgo applying for a loan to purchase a house until his Army debt was resolved. According to DFAS officials, it took about 6 months to research changes in the soldier’s duty status and pay the soldier’s travel reimbursement. Because the soldier had not been issued any orders after his initial deployment, DFAS had to work with the Army to prepare and backdate military orders for each change of status from the time the soldier was medically evacuated to Landstuhl Regional Medical Center in Germany, transferred to Walter Reed Army Medical Center in Washington, D.C., and entered into the Medical Retention Program. In addition, we learned that the soldier also received erroneous monthly billings for Survivor Benefit Program (SBP) premiums—even though he and his wife had declined participation in writing, as required, when he separated. The monthly SBP billings continued because Walter Reed had not forwarded the soldier’s paperwork to the SBP program office at DFAS Cleveland. In December 2005, the soldier’s second, more detailed request for debt waiver was accepted. In addition, his travel voucher was approved, and he received his contingency travel reimbursement of $2,727—an amount that exceeded his debt by almost $500. However, the soldier’s SBP election was not properly canceled because a change was made to only one of the two codes that needed to be changed in the system. As a result, the soldier’s final debt was not corrected until February 2006—1-1/2 years after he separated. Case study #2 involved a seriously injured Army National Guard soldier who went without pay for several months when his separation paperwork was not entered in the pay and personnel systems.

Brain-Damaged Soldier Goes without Pay Due to Error

This Army National Guard Staff Sergeant was injured 3 months after being deployed to Iraq, when his Humvee was hit by another truck during an attack on December 11, 2004. The soldier suffered a crushed jaw and severe head injuries, resulting in permanent brain damage. The soldier was sent to Walter Reed Army Medical Center in Washington, D.C., where he remained in a coma for over 3 months. On March 28, 2005, he was transferred to the Richmond VA Medical Center for care in its polytrauma rehabilitation center.
On April 28, 2005, the soldier was sent home on convalescent leave before he returned to Walter Reed for further surgery. The soldier was released to go home in May 2005, pending separation from the service. On the day the soldier was released from Walter Reed and sent home, his wife got a call from his Army unit asking why her husband was not reporting to active duty—an indication that the soldier’s unit believed him to be AWOL. Although the soldier had been through medical board evaluations and was supposed to be retired effective July 23, 2005, his separation paperwork was not entered in the pay and personnel systems. The soldier was rated 80 percent disabled, and his family expected to receive disability benefit income of over $3,000 per month. When the sergeant suddenly received no income in October 2005, he learned that he owed the Army a debt of $6,400 and that the paperwork to start his disability benefits had not been processed. About this time, a finance clerk noted that the sergeant had not been paid for his unused leave. Because the finance clerk did not know how to post the leave payment transaction, the clerk put the soldier back on active duty, resulting in an additional overpayment of $6,101 and increasing his debt to $12,501. According to a family member, the soldier’s family was without income and could not pay for basic household expenses. As a result, the family’s utilities were cut off, and the soldier’s 11-year-old daughter was sent out of state to live with relatives. After receiving a call from the soldier’s family member in mid-October 2005, we alerted Army headquarters to the soldier’s pay and debt problems. The Army took immediate action to research the soldier’s pay account. On January 25, 2006, DOD approved a waiver of $12,662 in debt, and DFAS refunded $2,355 in debt collections previously withheld from the soldier’s pay. An Army Reserve soldier, case study #4, was faced with debt due to an erroneous AWOL report filed while she was receiving treatment at a private health facility under the direction of an Air Force physician.

Soldier Finds Debt Is Due to Erroneous AWOL Report During Rehabilitation

An Army Reserve Specialist was injured during a mortar attack on the outskirts of Baghdad on March 23, 2003, and was awarded a Purple Heart. The soldier underwent a total of six surgeries at a field hospital and military hospitals in Kuwait, Spain, and Germany—none of which were successful in removing shrapnel from her knee. She was then flown to a military hospital in Baltimore, Maryland, and in early April 2003, she was sent to a military hospital at Keesler Air Force Base, in Biloxi, Mississippi, for treatment. At Keesler, the soldier was given the choice of receiving rehabilitative treatment at the Keesler medical facility or at a rehabilitation center near her home in Leakesville, Mississippi. There were no Army facilities near Keesler, and the soldier was told she would have to rent an apartment nearby and pay for it herself. As a result, the soldier decided to return home and begin rehabilitation at a private facility approved by Keesler. The soldier was required to travel to the Keesler AFB Orthopedic Center (a 2-hour round trip) every 2 weeks to be examined by the referring Air Force physician. The soldier told us the Air Force doctor released her in July 2003, noting that she had completed her rehabilitation treatment. The soldier was medically discharged from her Army Reserve unit on November 18, 2003.
The soldier learned she had a military debt of $1,575 when a collection agent contacted her in January 2004, 2 months after she had separated from the Army; the debt included $975 related to a requirement to repay the unearned portion of her enlistment bonus. As a result of this contact, the soldier learned that her Army unit had lost track of her and had reported her as AWOL while she was being treated for her battle injuries. However, the soldier told us that in April 2003, when she arrived at Keesler, she had made several unsuccessful attempts to let her unit Sergeant know her duty status and whereabouts. When her calls were returned in July 2003, she was told to report to Fort Stewart, Georgia, and to remain there until her unit returned from Iraq and was demobilized. The soldier told us she did as ordered and was placed in Medical Hold status at Fort Stewart. The soldier traveled to her unit in Brookhaven, Mississippi, on two occasions in an effort to document that she had not been AWOL but at an approved medical facility during the time in question; she was unsuccessful, however, and the collection agent continued to call her. As of the end of July 2004, DFAS records showed the soldier’s debt totaled $1,575, including $975 related to the unearned portion of her enlistment bonus and $600 in overpayment of her hardship duty pay. Although DFAS had recalled this debt from the soldier’s credit report in July 2005, the debt still appeared on her credit report as of October 2005. We confirmed that DFAS recalled the debt from the soldier’s credit report a second time. However, in March 2006, the debt reappeared on the soldier’s credit report. The soldier told us that she was unable to get a loan for $500 to pay off her credit card balance because the military debt kept showing up on her credit report. At the end of our field work, the Army advised us that the reappearance of military debt on the soldier’s credit report was due to errors made by both DFAS and the credit bureau. Our past four reports have discussed numerous problems related to Army pay and travel reimbursements and made over 80 recommendations for correcting the weaknesses in human capital, processes, and systems that caused these problems. Effective action to address pay and travel reimbursement problems will also help prevent the occurrence of military debts. Because of concern regarding soldier indebtedness resulting from pay-related problems during deployments, Congress has on occasion provided authority to the Secretary of Defense to cancel such debts. For example, in the Department of Defense Appropriation Acts for fiscal years 1992 through 1996, the Secretary was given authority to cancel military debt up to $2,500 owed by soldiers or former soldiers so long as the indebtedness was incurred in connection with Operation Desert Shield/Desert Storm. Further, these appropriation acts authorized the Secretary to provide refunds to soldiers who had satisfied their debts. Facing similar concerns with military debts incurred by GWOT soldiers, Congress recently gave the Secretary authority, in the National Defense Authorization Act for Fiscal Year 2006, to cancel debts occurring on or after October 7, 2001, the date designated as the beginning of the OIF/OEF deployment. However, unlike the authority granted to provide debt relief for Operation Desert Shield/Desert Storm, the Secretary’s discretion under the fiscal year 2006 authorization act is generally more limited.
For example, the Secretary was not given authority to issue refunds, and he cannot uniformly provide debt relief to all GWOT soldiers. Rather, the Secretary may only cancel debts of soldiers who are (1) on active duty or in active status; (2) within 1 year of an honorable discharge; or (3) within 1 year of release from active status in a Reserve component. Additionally, the Secretary’s authority under the fiscal year 2006 authorization act terminates on December 31, 2007, at which point a narrower statutory cancellation authority will be revived. There are two primary mechanisms in law for forgiving soldier debt: (1) authority to waive debts that result from payroll, travel, and other payment and allowance errors and (2) authority for remission (forgiveness) of debts involving hardship or fairness. The Fiscal Year 2006 National Defense Authorization Act broadened remission authority to include debts of officers and any soldiers no longer on active duty for up to 1 year. However, the remission authority in the act does not cover soldiers who were released from active duty more than 1 year ago, and the waiver authority does not cover cancellation of debts due to error after the applicable 3-year statute of limitations. In addition, unlike waivers, soldiers who paid debts are not eligible for refunds under the remission statute. Our case studies showed that some battle-injured soldiers did not receive debt notification letters until 8 to 10 months after they separated from the Army. One soldier who separated in October 2004 told us that he received his debt notification letter in November 2005—more than 1 year after he separated from the Army. All but three of our case study soldiers separated from the Army more than 1 year ago, and these soldiers’ eligibility for debt relief under the fiscal year 2006 authorization act has already expired. Further, the debt relief eligibility period for the three case study soldiers who separated in June 2005 will expire in the next few months. Since the OIF/OEF deployment in October 2001, separated, battle-injured Army soldiers have faced considerable hardships related to collection action on their military service debts through no fault of their own, including forfeiture of separation pay and tax refunds; credit bureau reporting; and action by private collection agencies. The best solution to this problem is for DOD to prevent debts for these soldiers from happening in the first place, and our past reports have included numerous recommendations for correcting weaknesses in Army payroll systems and processes. Over the past year, DOD and the Army have taken a number of actions to identify and relieve debts of separated battle-injured GWOT soldiers, and Congress has enacted broader authority for relief of some soldier debts. There are additional actions available to Congress if it wishes to make debt relief more soldier friendly. Because of a restriction in the current law, injured Army GWOT soldiers who separated from the service at different times have been treated differently, which raises questions of equity. Some of these soldiers may obtain debt relief, while others may not. Further, there is no current authority to issue refunds to battle-injured soldiers who previously paid debts that are now eligible for relief.
Because the current debt relief authority expires on December 31, 2007, injured soldiers and their families who have GWOT-incurred military debts could face the prospect of bad credit reports, visits by collection agencies, and offsets of their tax refunds if the authority is not available throughout the OIF/OEF deployment and a reasonable period after the deployment ends. There are several matters that Congress should consider if it wishes to strengthen the Secretary’s authority to provide debt relief so that it can be applied uniformly for all GWOT-incurred debt. First, Congress could consider legislation that would (1) give the Service Secretaries authority to make debt relief available to all injured GWOT soldiers regardless of when they separate from active duty; (2) give the Service Secretaries authority to provide refunds to soldiers who have paid debts incurred while in an active status; and (3) ensure that the Secretary of Defense has authority to cancel GWOT-incurred debt throughout, and for a reasonable period following, the deployment and thus can exempt injured soldiers from debt collection action through credit bureau reporting, private collection agencies, and Treasury Offset Program (TOP) referrals. Second, we suggest that Congress consider directing the Secretary of Defense, as appropriate and in concert with any changes to debt relief provisions in law, to (1) take immediate action to make debt relief policy applicable to all GWOT soldiers who sustain battle injuries or are killed in combat-related actions and (2) identify the military debts of battle-injured soldiers that were previously paid and were not subject to remission or waiver and issue refunds. We provided a draft of our report to DOD for comment. In oral comments received from the Office of the Secretary of Defense, the department concurred with our report. We are sending copies of this letter to interested congressional committees; the Secretary of Defense; the Deputy Under Secretary of Defense for Personnel and Readiness; the Under Secretary of Defense (Comptroller); the Secretary of the Army; the Director of the Defense Finance and Accounting Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request. Please contact Gregory D. Kutz at (202) 512-7455 or [email protected] if you or your staffs have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix III. The Federal Claims Collection Act of 1966, the Debt Collection Act of 1982, the Debt Collection Improvement Act of 1996, and related federal regulations provide for collection of debts owed to the federal government, including the debts of battle-injured soldiers who had separated from the Army and fallen Army soldiers who served in the Global War on Terrorism. These laws and related federal regulations establish authority for the Department of Defense (DOD) and the Department of the Treasury to engage in federal debt collection actions. Out-of-service soldier debts occur when a soldier has separated from the service and is not receiving salary or other payments from the department that can be offset to collect debt owed to a defense agency or military service. DOD is authorized to write off debts of fallen soldiers; however, it may pursue collection of other deceased soldiers’ debts.
Out-of-service debts arise from a large number of circumstances, including overpayments of pay and allowances (entitlements), such as hostile fire pay, hazardous duty pay, and family separation pay; travel advances for which expense vouchers have not been submitted; indebtedness related to public use of DOD facilities or services, such as family medical services; and loss or damage of government property. Figure 2 illustrates the out-of-service debt collection process, including DOD actions and Department of the Treasury debt collection actions. The purpose of our audit was to determine (1) the extent to which Army soldiers serving in the Global War on Terrorism (GWOT) who were injured or killed by hostile fire and were released from active duty are having debts referred to credit bureaus and collection agencies and (2) the impact of Department of Defense (DOD) debt collection action on these soldiers and their families. You also asked us to discuss ways that Congress could make the process for collecting out-of-service debts more soldier friendly. To determine the extent of debt related to Army soldiers who served in Operation Iraqi Freedom and Operation Enduring Freedom and sustained battle injuries and left the service or were killed in action, we obtained the Army Wounded in Action and Killed in Action databases (referred to collectively in this report as WIA databases) maintained by the Defense Finance and Accounting Service (DFAS) Wounded-in-Action Support Team and compared the soldier records in these databases with debt records in the Defense Debt Management System (DDMS) for out-of-service personnel. Soldier records are identified by soldier name and social security number in both the WIA databases and DDMS. The data used in our audit covered fiscal years 2002 through 2005—the first 4 years of the Operation Iraqi Freedom and Operation Enduring Freedom deployments. We assessed the reliability of data obtained from the WIA databases and the DDMS systems by obtaining an understanding of the processes used to collect and report the data, verifying control totals of data extracted and used for file comparisons, validating the computer program used to perform the file comparison, asking systems officials to complete our data reliability questionnaire, and analyzing selected transaction data for accuracy. We also considered findings and recommendations related to payroll problems and unidentified soldier debt in our previous audits and our recent Fort Bragg investigation. DFAS and the Army have implemented procedures for reviewing and correcting soldier status and pay account information in the WIA databases, and DDMS data are subjected to periodic DOD audits. To determine the impact of debt collection actions on Army battle-injured and fallen soldiers and their families, we reached out to WIA soldiers and invited them to contact us and share their experiences. We focused on soldiers whose debts had been reported to credit bureaus and collection agencies. We were contacted by 19 separated battle-injured Army GWOT soldiers. We used the experiences of these soldiers to illustrate the hardships posed by debt collection action on battle-injured soldiers and their families. For all of the soldiers with debt problems who contacted us, we worked with the Army and DFAS to help resolve their debts. Where we were unable to independently validate our case study information, we attributed it to the soldiers and family members.
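The file comparison described above is, at its core, a keyed match of two data extracts on Social Security number, with soldier name available as a cross-check and control totals verified before and after the comparison. The following Python sketch illustrates only that matching logic; the CSV layout and field names (ssn, name, debt_amount) are hypothetical placeholders, not the actual WIA or DDMS record formats.

```python
# Minimal sketch of matching WIA soldier records to DDMS out-of-service
# debt records on Social Security number. File and field names are
# hypothetical; real extracts and layouts would differ.
import csv

def load_by_ssn(path):
    """Index a CSV extract by Social Security number."""
    with open(path, newline="") as f:
        return {row["ssn"]: row for row in csv.DictReader(f)}

def match_debts(wia_path, ddms_path):
    """Return WIA soldiers who also appear in the debt file."""
    wia = load_by_ssn(wia_path)
    ddms = load_by_ssn(ddms_path)
    return [
        {"name": soldier["name"], "debt": float(ddms[ssn]["debt_amount"])}
        for ssn, soldier in wia.items()
        if ssn in ddms
    ]

if __name__ == "__main__":
    matches = match_debts("wia_extract.csv", "ddms_extract.csv")
    # Control totals such as these would be checked against the source
    # systems to assess the reliability of the comparison.
    print(len(matches), "matched records,",
          round(sum(m["debt"] for m in matches), 2), "total debt")
```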
We analyzed the DDMS data to confirm management assertions that DFAS does not pursue collection of debts of fallen soldiers. To determine ways that Congress could help make the debt collection process more soldier friendly, we considered debt relief provisions in current law, DOD and Army policy, and the experience of soldiers who contacted us, as well as information obtained for case studies included in our prior reports. We reviewed federal laws and regulations and DOD and Army policies and procedures related to debt collection and relief of debt. We met with Army, DFAS, and DOD officials to discuss their efforts to identify and resolve soldier debt. We also met with Department of the Treasury Financial Management Service (FMS) officials about their processes for collecting Army soldier debt referred by DFAS. In addition, we obtained independent confirmation from credit bureaus and FMS that DFAS had recalled credit bureau reporting and private collection agency and Treasury Offset Program referrals for WIA soldiers with active debt cases. On April 5, 2006, we requested comments on a draft of this report. We worked closely with the Army and DFAS to ensure the accuracy of the factual information in our report. We received oral comments from the Office of the Secretary of Defense on April 21, 2006, and have summarized those comments in the Agency Comments and Our Evaluation section of this report. We conducted our work from June 2005 through March 2006 in accordance with generally accepted government auditing standards. Staff making key contributions to this report include Stephen P. Donahue, Dennis B. Fauber, Gayle L. Fischer, Danielle Free, Gloria Hernandezsaunders, Wilfred B. Holloway, John B. Ledford, Barbara C. Lewis, Renee McElveen, Richard C. Newbold, John P. Ryan, and Barry Shillito.

As part of the Committee on Government Reform’s continuing focus on pay and financial issues affecting Army soldiers deployed in the Global War on Terrorism (GWOT), the requesters were concerned that battle-injured soldiers were not only battling the broken military pay system, but faced blemishes on their credit reports and pursuit by collection agencies from referrals of their Army debts. GAO was asked to determine (1) the extent of debt of separated battle-injured soldiers and deceased Army soldiers who served in the GWOT, (2) the impact of DOD debt collection action on separated battle-injured and deceased soldiers and their families, and (3) ways that Congress could make the process for collecting these debts more soldier friendly. Pay problems rooted in the complex, cumbersome processes used to pay Army soldiers from their initial mobilization through active duty deployment to demobilization have generated military debts. As of September 30, 2005, nearly 1,300 separated Army GWOT soldiers who were injured or killed during combat in Iraq and Afghanistan had incurred over $1.5 million in military debt, including almost 900 battle-injured soldiers with debts of $1.2 million and about 400 soldiers who died in combat with debts of $300,000. As a policy, DOD does not pursue collection of debts of soldiers who were killed in combat. However, hundreds of battle-injured soldiers experienced collection action on their debts. The extent of these debts may be greater due to incomplete reporting. GAO’s case studies of 19 battle-injured soldiers showed that collection action on military debts resulted in significant hardships to these soldiers and their families.
For example, 16 of the 19 soldiers were unable to pay their basic household expenses; 4 soldiers were unable to obtain loans to purchase a car or house or meet other needs; and 8 soldiers' debts were offset against their income tax refunds. In addition, 16 of the 19 case study soldiers had their debts reported to credit bureaus and 9 soldiers were contacted by private collection agencies. Due to concerns about soldier indebtedness resulting from pay-related problems during deployments, Congress recently gave the Service Secretaries authority to cancel some GWOT soldier debts. Because of restrictions in the law, debts of injured soldiers who separated at different times can be treated differently. For example, soldiers who separated more than 1 year ago are not eligible for debt relief and soldiers who paid their debts are not eligible for refunds. Further, because this authority expires in December 2007, injured soldiers and their families could face bad credit reports, visits from collection agents, and tax refund offsets in the future.
Since 1996, Congress has taken important steps to increase Medicare program integrity funding and oversight, including the establishment of the Medicare Integrity Program. Table 1 summarizes several key congressional actions. CMS has made progress in strengthening provider and supplier enrollment provisions, but needs to do more to identify and prevent potentially fraudulent providers and suppliers from participating in Medicare. Additional improvements to prepayment and postpayment claims review would help prevent and recover improper payments. Addressing payment vulnerabilities already identified could further help prevent or reduce fraud. PPACA authorized, and CMS has implemented, new provider and supplier enrollment procedures that address past weaknesses identified by GAO and HHS’s Office of Inspector General (OIG) that allowed entities intent on committing fraud to enroll in Medicare. CMS has also implemented other measures intended to improve existing procedures. Specifically, to strengthen the existing screening activities conducted by CMS contractors, the agency added screenings of categories of provider and supplier enrollment applications by risk level, contracted with new national enrollment screening and site visit contractors, began imposing moratoria on new enrollment of certain types of providers and suppliers, and issued regulations requiring certain prescribers to enroll in Medicare. CMS and OIG issued a final rule in February 2011 to implement many of the new screening procedures required by PPACA. CMS designated three levels of risk—high, moderate, and limited—with different screening procedures for categories of Medicare providers and suppliers at each level. Providers and suppliers in the high-risk level are subject to the most rigorous screening. (See table 2.) Based in part on our work and that of OIG, CMS designated newly enrolling home health agencies and suppliers of durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) as high risk, and designated other providers and suppliers at lower risk levels. Providers and suppliers at all risk levels are screened to verify that they meet specific requirements established by Medicare, such as having current licenses or accreditation and valid Social Security numbers. High- and moderate-risk providers and suppliers are also subject to unannounced site visits. Further, depending on the risks presented, PPACA authorizes CMS to require fingerprint-based criminal history checks. In March 2014, CMS awarded a contract that is to enable the agency to access Federal Bureau of Investigation information to help conduct those checks of high-risk providers and suppliers. PPACA also authorizes the posting of surety bonds for certain providers and suppliers. CMS has indicated that the agency will continue to review the criteria for its screening levels and will publish changes if the agency decides to update the assignment of screening levels for categories of Medicare providers and suppliers. Doing so could become important because the Department of Justice (DOJ) and HHS reported multiple convictions, judgments, settlements, or exclusions against types of providers and suppliers not currently at the high-risk level, including community mental health centers and ambulance suppliers. CMS’s implementation of accreditation for DMEPOS suppliers, and of a competitive bidding program, including in geographic areas thought to have high fraud rates, may be helping to reduce the risk of DMEPOS fraud.
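To make the tiered scheme concrete, the sketch below models risk-based screening as a lookup from provider or supplier category to risk level, and from risk level to required checks. Only the two high-risk categories are taken from the text; the other category assignments and the exact procedure lists are simplified assumptions for illustration, not CMS's actual rule tables.

```python
# Hedged sketch of risk-based enrollment screening: the provider/supplier
# category determines a risk level, and the level determines which checks
# apply. Assignments other than home health agencies and DMEPOS suppliers
# are illustrative assumptions.
BASELINE_CHECKS = ["verify license/accreditation",
                   "verify Social Security number",
                   "check OIG exclusion list"]

SCREENING_BY_LEVEL = {
    "limited": BASELINE_CHECKS,
    "moderate": BASELINE_CHECKS + ["unannounced site visit"],
    "high": BASELINE_CHECKS + ["unannounced site visit",
                               "fingerprint-based criminal history check"],
}

RISK_LEVEL = {
    "newly enrolling home health agency": "high",  # named in the text
    "newly enrolling DMEPOS supplier": "high",     # named in the text
    "physician": "limited",                        # illustrative assumption
}

def required_screening(category):
    level = RISK_LEVEL.get(category, "limited")  # default is an assumption
    return level, SCREENING_BY_LEVEL[level]

level, checks = required_screening("newly enrolling home health agency")
print(level, checks)
```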
While continued vigilance of DMEPOS suppliers is warranted, other types of providers may become more problematic in the future. Specifically, in September 2012 we reported that a range of providers have been the subjects of fraud investigations. According to 2010 data from OIG and DOJ, over 10,000 providers and suppliers that serve Medicare, Medicaid, and Children’s Health Insurance Program beneficiaries were involved in fraud investigations, including not only home health agencies and DMEPOS suppliers but also physicians, hospitals, and pharmacies. In addition, the provider type constituting the largest percentage of subjects in criminal health care fraud investigations was medical facilities—including medical centers, clinics, or practices—which constituted almost a quarter of subjects in such investigations. DMEPOS suppliers made up a little over 16 percent of subjects. We are currently examining the ability of CMS’s provider and supplier enrollment system to prevent and detect the continued enrollment of ineligible or potentially fraudulent providers and suppliers in Medicare. Specifically, we are assessing the process used to enroll and verify the eligibility of Medicare providers and suppliers in Medicare’s Provider Enrollment, Chain, and Ownership System (PECOS) and the extent to which CMS’s controls are designed to prevent and detect the continued enrollment of potentially ineligible or fraudulent providers and suppliers in PECOS. We plan to issue a report this winter. CMS contracted with two new types of entities at the end of 2011 to assume centralized responsibility for two functions that had been the responsibility of multiple contractors. One of the new contractors is conducting automated screenings to check that existing and newly enrolling providers and suppliers have valid licensure, accreditation, and a National Provider Identifier (NPI), and are not on the OIG list of providers and suppliers excluded from participating in federal health care programs. The second contractor conducts site visits of providers and suppliers, except for DMEPOS suppliers, to determine whether sites are legitimate and the providers and suppliers meet certain Medicare standards. A CMS official reported that, since the implementation of the PPACA screening requirements, the agency had revoked over 17,000 suspect providers’ and suppliers’ ability to bill the Medicare program. CMS has suspended enrollment of new home health providers and ground ambulance suppliers in certain fraud “hot spots” and other geographic areas. In July 2013, CMS first exercised its authority granted by PPACA to establish temporary moratoria on enrolling new home health agencies in Chicago and Miami, and new ambulance suppliers in Houston. In January 2014, CMS extended its first moratoria and added enrollment moratoria for new home health agency providers in Fort Lauderdale, Detroit, Dallas, and Houston, and new ambulance suppliers in Philadelphia. These moratoria are scheduled to be in effect until July 2014, unless CMS extends or lifts them. CMS officials cited areas of potential fraud risk, such as a disproportionate number of providers and suppliers relative to beneficiaries and extremely high utilization, as rationales for suspending new enrollments of home health providers or ground ambulance suppliers in these areas. CMS recently issued a final rule requiring prescribers of drugs covered within Medicare’s prescription drug program, Part D, to enroll in Medicare by June 2015.
As a result of this rule, CMS is to screen these prescribers to verify that they meet specific requirements, such as having current licenses or accreditation and valid Social Security numbers. OIG has identified concerns with CMS oversight of fraud, waste, and abuse in Part D, including the contractors tasked with this work. A June 2013 OIG report found that the Part D program inappropriately paid for drugs ordered by individuals who clearly did not have the authority to prescribe, such as massage therapists, athletic trainers, home contractors, and interpreters. OIG recommended, among other things, that there should be verification of prescribers’ authority to prescribe drugs, and that CMS should ensure that Medicare does not pay for prescriptions from individuals without such authority. CMS agreed with OIG’s recommendations and, in discussing the final rule, stated that this new enrollment requirement is to help ensure that Part D drugs are prescribed only by qualified physicians and eligible professionals. To continue to help address potential vulnerabilities in the Part D program, we are currently examining practices for promoting prescription drug program integrity and the extent to which CMS’s oversight of Medicare Part D reflects those practices. We plan to issue a report this fall. Although CMS has taken many needed actions, we and OIG have found that CMS has not fully implemented other enrollment screening actions authorized by PPACA. These actions could help further reduce the enrollment of providers and suppliers intent on defrauding the Medicare program, which is important because identifying and prosecuting providers and suppliers engaged in potentially fraudulent activity is time consuming, resource intensive, and costly. These actions include issuing a rule to implement surety bonds for certain providers and suppliers, issuing a rule on provider and supplier disclosure requirements, and establishing the core elements for provider and supplier compliance programs. PPACA authorized CMS to require a surety bond for certain types of at-risk providers and suppliers. Surety bonds may serve as a source for recoupment of erroneous payments. DMEPOS suppliers are currently required to post a surety bond at the time of enrollment. (See 42 U.S.C. § 1395m(a)(16)(B). A DMEPOS surety bond is a bond issued by an entity guaranteeing that a DMEPOS supplier will fulfill its obligation to Medicare; if the obligation is not met, the surety bond is paid to Medicare. See also Medicare Program; Surety Bond Requirement for Suppliers of Durable Medical Equipment, Prosthetics, Orthotics, and Supplies (DMEPOS), 74 Fed. Reg. 166 (Jan. 2, 2009).) CMS reported in April 2014 that it had not yet scheduled for publication a proposed rule to implement the PPACA surety bond requirement for other types of at-risk providers and suppliers—such as home health agencies and independent diagnostic testing facilities. In light of the moratoria that CMS has placed on enrollment of home health agencies in fraud “hot spots,” implementation of this rule could help the agency address potential concerns for these at-risk providers across the Medicare program. CMS also has not yet issued a rule to implement PPACA’s additional provider and supplier disclosure requirements, under which providers and suppliers would disclose information such as any current or previous suspension of payments from a federal health care program. Agency officials indicated that developing the additional disclosure requirements has been complicated by provider and supplier concerns about what types of information will be collected, what CMS will do with it, and how the privacy and security of this information will be maintained.
CMS has not established the core elements of compliance programs for providers and suppliers, as required by PPACA. We previously reported that agency officials indicated that they had sought public comments on the core elements, which they were considering, and were also studying criteria found in OIG model plans for possible inclusion. In April 2014, CMS reported that it had not yet scheduled a proposed rule for publication. Medicare uses prepayment review to deny claims that should not be paid and postpayment review to recover improperly paid claims. As claims go through Medicare’s electronic claims payment systems, they are subjected to prepayment controls called “edits,” most of which are fully automated; if a claim does not meet the criteria of the edit, it is automatically denied. Other prepayment edits are manual; they flag a claim for individual review by trained staff who determine whether it should be paid. Due to the volume of claims, CMS has reported that less than 1 percent of Medicare claims are subject to manual medical record review by trained personnel. Increased use of prepayment edits could help prevent improper Medicare payments. Our prior work found that, while use of prepayment edits saved Medicare at least $1.76 billion in fiscal year 2010, the savings could have been greater had prepayment edits been used more widely. Based on an analysis of a limited number of national policies and local coverage determinations (LCD), we identified $14.7 million in payments in fiscal year 2010 that appeared to be inconsistent with four national policies and therefore improper. We also found more than $100 million in payments that were inconsistent with three selected LCDs that could have been identified using automated edits. Thus, we concluded that more widespread implementation of effective automated edits developed by individual Medicare administrative contractors (MAC) in other MAC jurisdictions could also result in savings to Medicare. CMS has taken steps to improve the development of other types of prepayment edits that are implemented nationwide, as we recommended. For example, the agency has centralized the development and implementation of automated edits based on a type of national policy called national coverage determinations. CMS has also modified its processes for identifying provider billing of services that are medically unlikely, to prevent circumvention of automated edits designed to identify an unusually large quantity of services provided to the same patient. We also evaluated the implementation of CMS’s Fraud Prevention System (FPS), which uses predictive analytic technologies, as required by the Small Business Jobs Act of 2010, to analyze Medicare fee-for-service (FFS) claims on a prepayment basis. FPS identifies investigative leads for CMS’s Zone Program Integrity Contractors (ZPIC), the contractors responsible for detecting and investigating potential fraud. Implemented in July 2011, FPS is intended to help facilitate the agency’s shift from focusing on recovering potentially fraudulent payments after they have been made, to detecting aberrant billing patterns as quickly as possible, with the goal of preventing these payments from being made. However, in October 2012, we found that, while FPS generated leads for investigators, it was not integrated with Medicare’s payment-processing system to allow the prevention of payments until suspect claims can be determined to be valid.
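As an illustration of the automated-edit logic described above, the sketch below tests a claim against a policy's criteria and either pays or denies it. The policy values, codes, and field names are hypothetical; they are not an actual national or local coverage determination.

```python
# Minimal sketch of an automated prepayment "edit." A claim failing the
# policy criteria is automatically denied; in practice, other edits flag
# claims for manual medical review instead. All codes and limits here are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    procedure_code: str
    units: int
    diagnosis_codes: tuple

COVERED_DIAGNOSES = {"D10.1", "D10.2"}  # hypothetical covered diagnoses
MAX_UNITS = 4                           # hypothetical per-claim unit ceiling

def automated_edit(claim: Claim) -> str:
    if claim.units > MAX_UNITS:
        return "DENY: units exceed policy maximum"
    if not COVERED_DIAGNOSES.intersection(claim.diagnosis_codes):
        return "DENY: no covered diagnosis on claim"
    return "PAY"

print(automated_edit(Claim("X1234", units=6, diagnosis_codes=("D10.1",))))
# -> DENY: units exceed policy maximum
```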
As of April 2014, CMS reported that while the FPS functionality to deny claims before payment had been integrated with the Medicare payment processing system in October 2013, the system did not have the ability to suspend payment until suspect claims could be investigated. In addition, while CMS directed the ZPICs to prioritize alerts generated by the system, in our work examining the sources of new ZPIC investigations in 2012, we found that FPS accounted for about 5 percent of ZPIC investigations in that year. A CMS official reported in March 2014 that ZPICs are now using FPS as a primary source of leads for fraud investigations, though the official did not provide details on how much of ZPICs’ work is initiated through the system. (See GAO, Medicare Fraud Prevention: CMS Has Implemented a Predictive Analytics System, but Needs to Define Measures to Determine Its Effectiveness, GAO-13-104, Washington, D.C.: Oct. 15, 2012.) Our prior work found that postpayment reviews are critical to identifying and recouping overpayments. The use of national recovery audit contractors (RAC) in the Medicare program is helping to identify underpayments and overpayments on a postpayment basis. CMS began the program in March 2009 for Medicare FFS. CMS reported that, as of the end of 2013, RACs collected $816 million for fiscal year 2014. PPACA required the expansion of Medicare RACs to Parts C and D. CMS has implemented a RAC for Part D, and CMS said it plans to award a contract for a Part C RAC by the end of 2014. Moreover, in February 2014, CMS announced a “pause” in the RAC program as the agency makes changes to the program and starts a new procurement process for the next round of recovery audit contracts for Medicare FFS claims. CMS stated it anticipates awarding all five of these new Medicare FFS recovery audit contracts by the end of summer 2014. Other contractors help CMS investigate potentially fraudulent FFS payments, but CMS could improve its oversight of their work. CMS contracts with ZPICs in specific geographic zones covering the nation. In October 2013, we found that the ZPICs reported that their actions, such as stopping payments on suspect claims, resulted in more than $250 million in savings to Medicare in calendar year 2012. However, CMS lacks information on the timeliness of ZPICs’ actions—such as the time it takes between identifying a suspect provider and taking actions to stop that provider from receiving potentially fraudulent Medicare payments—and would benefit from knowing whether ZPICs could save more money by acting more quickly. Thus, we recommended that CMS collect and evaluate information on the timeliness of ZPICs’ investigative and administrative actions. CMS did not provide comments on our recommendation. We are currently examining the activities of the CMS contractors, including ZPICs, that conduct postpayment claims reviews, and anticipate issuing a report later this summer. Our work is reviewing, among other things, whether CMS has a strategy for coordinating these contractors’ postpayment claims review activities. CMS has taken steps to improve use of two CMS information technology systems that could help analysts identify fraud after claims have been paid, but further action is needed.
In 2011, we found that the Integrated Data Repository (IDR)—a central data store of Medicare and other data needed to help CMS program integrity staff and contractors detect improper payments of claims—did not include all the data that were planned to be incorporated by fiscal year 2010, because of technical obstacles and delays in funding. As of March 2014, the agency had not addressed our recommendation to develop reliable schedules to incorporate all types of IDR data, which could lead to additional delays in making available all of the data that are needed to support enhanced program integrity efforts and achieve the expected financial benefits. However, One Program Integrity (One PI)—a web-based portal intended to provide CMS staff and contractors with a single source of access to data contained in IDR, as well as tools for analyzing those data—is operational, and CMS has established plans and schedules for training all intended One PI users, as we also recommended in 2011. Even so, as of March 2014, CMS had not established deadlines for program integrity contractors to begin using One PI, as we recommended in 2011. Without these deadlines, program integrity contractors will not be required to use the system, and as a result, CMS may fall short in its efforts to ensure the widespread use and to measure the benefits of One PI for program integrity purposes. Having mechanisms in place to resolve vulnerabilities that could lead to improper payments, some of which are potentially fraudulent, is critical to effective program management, but our work has shown weaknesses in CMS’s processes to address such vulnerabilities. Both we and OIG have made recommendations to CMS to improve the tracking of vulnerabilities. In our March 2010 report on the RAC demonstration program, we found that CMS had not established an adequate process during the demonstration or in planning for the national program to ensure prompt resolution of vulnerabilities that could lead to improper payments in Medicare; further, the majority of the most significant vulnerabilities identified during the demonstration were not addressed. In December 2011, OIG found that CMS had not resolved or taken significant action to resolve 48 of 62 vulnerabilities reported in 2009 by CMS contractors specifically charged with addressing fraud. We and OIG recommended that CMS have written procedures and time frames to ensure that vulnerabilities were resolved. CMS has indicated that it is now tracking vulnerabilities identified from several types of contractors through a single vulnerability tracking process, and the agency has developed some written guidance on the process. In 2012, we examined that process and found that, while CMS informs MACs about vulnerabilities that could be addressed through prepayment edits, the agency does not systematically compile and disseminate information about effective local edits to address such vulnerabilities. Specifically, we recommended that CMS require MACs to share information about the underlying policies and savings related to their most effective edits, and CMS generally agreed to do so. In addition, in 2011 CMS began requiring MACs to report on how they had addressed certain vulnerabilities to improper payment, some of which could be addressed through edits.
We also made recommendations to CMS to address the millions of Medicare cards that display beneficiaries’ Social Security numbers, which increases beneficiaries’ vulnerability to identity theft. In August 2012, we recommended that CMS (1) select an approach for removing Social Security numbers from Medicare cards that best protects beneficiaries from identity theft and minimizes burdens for providers, beneficiaries, and CMS; and (2) develop an accurate, well-documented cost estimate for such an option. In September 2013, we further recommended that CMS (1) initiate an information technology project for identifying, developing, and implementing changes for the removal of Social Security numbers; and (2) incorporate such a project into other information technology initiatives. HHS concurred with our recommendations and agreed that removing the numbers from Medicare cards is an appropriate step toward reducing the risk of identity theft. However, the department also stated that CMS could not proceed with changes without agreement from other agencies, such as the Social Security Administration, and that funding was also a consideration. Thus, CMS has not yet taken action to address these recommendations. We are currently examining other options for updating and securing Medicare cards, including the potential use of electronic-card technologies, and expect to issue a report early next year. In conclusion, although CMS has taken some important steps to identify and prevent fraud through increased provider and supplier screening and other actions, the agency must continue to improve its efforts to reduce fraud, waste, and abuse in the Medicare program. Identifying the nature, extent, and underlying causes of improper payments, and developing adequate corrective action processes to address vulnerabilities, are essential prerequisites to reducing them. As CMS continues its implementation of PPACA and Small Business Jobs Act provisions, additional evaluation and oversight will help determine whether implementation of these provisions has been effective in reducing improper payments. We are investing resources in a body of work that assesses CMS’s efforts to refine and improve its fraud detection and prevention abilities. Notably, we are currently assessing the potential use of electronic-card technologies, which can help reduce Medicare fraud. We are also examining the extent to which CMS’s information system can help prevent and detect the continued enrollment of ineligible or potentially fraudulent providers and suppliers in Medicare. Additionally, we have a study under way examining CMS’s oversight of fraud, waste, and abuse in Medicare Part D to determine whether the agency has adopted certain practices for ensuring the integrity of that program. We are also examining CMS’s oversight of some of the contractors that conduct reviews of claims after payment. These studies are focused on additional actions for CMS that could help the agency more systematically reduce potential fraud in the Medicare program. Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this concludes my prepared remarks. I would be pleased to respond to any questions you may have at this time. For further information about this statement, please contact Kathleen M. King at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Karen Doran, Assistant Director; Eden Savino; and Jennifer Whitworth were key contributors to this statement.

Medicare: Further Action Could Improve Improper Payment Prevention and Recoupment Efforts. GAO-14-619T. Washington, D.C.: May 20, 2014.
Medicare Fraud: Progress Made, but More Action Needed to Address Medicare Fraud, Waste, and Abuse. GAO-14-560T. Washington, D.C.: April 30, 2014.
Medicare: Second Year Update for CMS’s Durable Medical Equipment Competitive Bidding Program Round 1 Rebid. GAO-14-156. Washington, D.C.: March 7, 2014.
Medicare Program Integrity: Contractors Reported Generating Savings, but CMS Could Improve Its Oversight. GAO-14-111. Washington, D.C.: October 25, 2013.
Health Care Fraud and Abuse Control Program: Indicators Provide Information on Program Accomplishments, but Assessing Program Effectiveness Is Difficult. GAO-13-746. Washington, D.C.: September 30, 2013.
Medicare Information Technology: Centers for Medicare and Medicaid Services Needs to Pursue a Solution for Removing Social Security Numbers from Cards. GAO-13-761. Washington, D.C.: September 10, 2013.
Medicare Program Integrity: Few Payments in 2011 Exceeded Limits under One Kind of Prepayment Control, but Reassessing Limits Could Be Helpful. GAO-13-430. Washington, D.C.: May 9, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012.
Medicare Fraud Prevention: CMS Has Implemented a Predictive Analytics System, but Needs to Define Measures to Determine Its Effectiveness. GAO-13-104. Washington, D.C.: October 15, 2012.
Health Care Fraud: Types of Providers Involved in Medicare, Medicaid, and the Children’s Health Insurance Program Cases. GAO-12-820. Washington, D.C.: September 7, 2012.
Medicare: CMS Needs an Approach and a Reliable Cost Estimate for Removing Social Security Numbers from Medicare Cards. GAO-12-831. Washington, D.C.: August 1, 2012.
Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012.
Medicare: Review of the First Year of CMS’s Durable Medical Equipment Competitive Bidding Program’s Round 1 Rebid. GAO-12-693. Washington, D.C.: May 9, 2012.
Medicare Program Integrity: CMS Continues Efforts to Strengthen the Screening of Providers and Suppliers. GAO-12-351. Washington, D.C.: April 10, 2012.
Medicare Part D: Instances of Questionable Access to Prescription Drugs. GAO-11-699. Washington, D.C.: September 6, 2011.
Medicare Integrity Program: CMS Used Increased Funding for New Activities but Could Improve Measurement of Program Effectiveness. GAO-11-592. Washington, D.C.: July 29, 2011.
Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011.
Medicare Fraud, Waste, and Abuse: Challenges and Strategies for Preventing Improper Payments. GAO-10-844T. Washington, D.C.: June 15, 2010.
Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010.
Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008.
Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007.
Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005.
Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO has designated Medicare as a high-risk program, in part because the program’s size and complexity make it vulnerable to fraud, waste, and abuse. In 2013, Medicare financed health care services for approximately 51 million individuals at a cost of about $604 billion. The deceptive nature of fraud makes its extent in the Medicare program difficult to measure in a reliable way, but it is clear that fraud contributes to Medicare’s fiscal problems. More broadly, in fiscal year 2013, CMS estimated that improper payments—some of which may be fraudulent—were almost $50 billion. This statement focuses on the progress made and important steps to be taken by CMS and its program integrity contractors to reduce fraud in Medicare. This statement is based on relevant GAO products and recommendations issued from 2004 through 2014 using a variety of methodologies. Additionally, in June 2014, GAO updated information based on new regulations regarding enrollment of certain providers in Medicare by examining public documents. The Centers for Medicare & Medicaid Services (CMS)—the agency within the Department of Health and Human Services (HHS) that oversees Medicare—has made progress in implementing several key strategies GAO identified or recommended in prior work as helpful in protecting Medicare from fraud; however, implementing other important actions that GAO recommended could help CMS and its program integrity contractors combat fraud. These strategies are: Provider and Supplier Enrollment: The Patient Protection and Affordable Care Act (PPACA) authorized, and CMS has implemented, actions to strengthen provider and supplier enrollment that address past weaknesses identified by GAO and HHS’s Office of Inspector General. For example, CMS has hired contractors to determine whether providers and suppliers have valid licenses and are at legitimate locations. CMS could further strengthen enrollment screening by issuing a rule to require additional provider and supplier disclosures of information, such as any suspension of payments from a federal health care program, and establishing core elements for provider and supplier compliance programs, as authorized by PPACA. Prepayment and Postpayment Claims Review: Medicare uses prepayment review to deny claims that should not be paid and postpayment review to recover improperly paid claims. GAO has found that increased use of prepayment edits could help prevent improper Medicare payments. For example, prior GAO work identified millions of dollars of payments that appeared to be inconsistent with selected coverage and payment policies and therefore improper. Postpayment reviews are also critical to identifying and recouping overpayments.
GAO recommended better oversight of both (1) the information systems analysts use to identify claims for postpayment review, in a 2011 report, and (2) the contractors responsible for these reviews, in a 2013 report. CMS has taken action or has actions under way to address these recommendations. Addressing Identified Vulnerabilities: Having mechanisms in place to resolve vulnerabilities that could lead to improper payments is critical to effective program management and could help address fraud. However, prior GAO work has shown weaknesses in CMS's processes to address such vulnerabilities. For example, GAO has made multiple recommendations to CMS to remove Social Security numbers from beneficiaries' Medicare cards to help prevent identity theft. HHS agreed with these recommendations, but reported that CMS could not proceed with the changes for a variety of reasons, including funding limitations, and therefore has not taken action. GAO work under way addressing these key strategies includes examining (1) how well CMS's information system can prevent and detect the continued enrollment of ineligible or potentially fraudulent providers and suppliers in Medicare, (2) the potential use of electronic-card technologies to help reduce Medicare fraud, (3) CMS's oversight of program integrity efforts for prescription drugs, and (4) CMS's oversight of some of the contractors that conduct reviews of claims after payment. These studies could help CMS more systematically reduce potential fraud in the Medicare program.
Technological advances continue to transform the U.S. workforce, and workers must improve their skills to meet employers’ changing needs. Many employers report difficulties in finding qualified workers, and many unemployed workers lack the skills they need to find jobs. Training programs can help workers gain the skills needed for today’s jobs, and employment placement programs can help employers find qualified employees. In 2002, the federal government funded 44 employment and training programs that provided services, such as job search assistance, employment counseling, basic adult literacy, and vocational training, to over 30 million people at a cost of approximately $12 billion. Although these programs were administered by nine federal agencies, many of the programs provided services to the public through one-stop centers in communities throughout the country. When the Congress passed the Workforce Investment Act (WIA) in 1998, it mandated that at least 17 federally funded programs provide employment and training services through a one-stop center system (see table 1). WIA also established workforce investment boards. Each state workforce investment board is responsible for developing statewide workforce policies and overseeing its local workforce investment boards. The local workforce investment boards, in turn, are responsible for developing local workforce policies and overseeing one-stop center operations (see fig. 1). Some of the federal employment and training programs are not required to provide services through the one-stop centers. These include the Temporary Assistance for Needy Families program (TANF) and the H-1B Technical Skills Training Grant Program. The TANF program is administered by the Department of Health and Human Services and assists needy adults with children in finding and retaining employment. The H-1B Technical Skills Training Grants are administered by the Department of Labor, and the funds are distributed to select local workforce investment boards to increase the supply of skilled workers in occupations identified as needing more workers. In addition to federally funded programs, states use their own revenues to expand employment placement and training opportunities. For example, states create unemployment insurance (UI) tax offsets by decreasing the UI tax amount paid by employers and at the same time imposing a separate tax on employers for the same amount as the UI tax deduction. In addition, states use other employer taxes and revenues from each state’s UI interest fund or from UI penalty fees imposed on employers. Employers may be charged UI penalty fees for late payments, for failing to file a UI return for an employee, or for failing to report an employee’s wages. While all of these revenues are generated through employer taxes, states also commit general revenue funds to expand employment placement and training opportunities. A study for the National Governors’ Association Center for Best Practices found that state-funded worker training programs are operating in 48 states. States have increased the availability of employment placement and training opportunities in various ways. Some states have used their revenues to expand federally funded programs. In fact, a recent national study by the National Association of State Workforce Agencies found that 19 states used these revenues to supplement WIA job training services.
Other states have used their revenues, including employer tax funds, to create their own employment placement and training programs; however, little is known about these programs. Some employers invest their own resources in training their workers. The exact amount of money that employers spend every year to train their workers is difficult to estimate; a study of trends in employer-provided training suggests that employers’ financial commitment to training has recently increased. Some individuals, as well, invest their own funds for training as a way to either upgrade their job-related skills or to become employable. Impact evaluations for public programs, like employment and training programs, produce findings that allow conclusions to be drawn about the effectiveness of the programs. These evaluations may be implemented using a few different design strategies. Two designs that are used to isolate a program’s effects, such as those on participants, are experimental designs and quasi-experimental designs. Experimental designs. These are characterized by the use of random selection and control groups. All individuals have an equal chance of being assigned to either the intervention group or the control group. The intervention group contains individuals who will receive the intervention, or program’s services, while the control group does not receive the intervention or services. This research design produces findings that allow conclusions to be drawn about the effectiveness, or impact, of the intervention. However, conducting experimental designs may be problematic because of the need to treat intervention and control groups differently. For example, to determine the impact of a training program on workers’ wages, a program would need to randomly provide services to some and randomly deny services to others, and track subsequent earnings for both groups of people. This approach requires services to be denied to some workers who qualify for training. Due to these difficulties, as well as the amount of time and money it takes to conduct experimental designs, quasi-experimental research designs are often preferable for their practicality. Quasi-experimental designs. These designs are characterized by comparison groups that are not randomly selected. For training programs, a quasi-experimental design would compare a group of people who have elected to take the training courses with nonparticipants who may have characteristics, such as wage or education levels, that are comparable to the group receiving services. Comparing the two groups allows researchers to account for other factors, such as the local economy, that may have influenced outcomes. The Department of Labor’s Employment and Training Administration (ETA) Office of Policy Development, Evaluation and Research has valuable resources related to designing and implementing evaluations. Labor has established evaluation coordination liaisons in each state to help with evaluations of federal programs. These liaisons can help states access logistical support and technical assistance for program evaluations. Such resources include ETA’s recent review of alternative research methodologies, which contains guidance on conducting experimental and quasi-experimental evaluations of workforce programs to determine the social and economic values of the programs. Twenty-three states reported using revenues in 2002 from a variety of employer taxes to fund their own employment placement and training programs.
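To make the evaluation logic above concrete, the following sketch simulates an experimental design: individuals are randomly assigned to intervention and control groups, and the program's impact is estimated as the difference in mean outcomes between the two groups. The data-generating process, including the assumed $1.50-per-hour training effect, is invented purely for illustration.

```python
# Illustrative simulation of an experimental impact evaluation. With random
# assignment, the impact estimate is the difference in mean outcomes
# (e.g., post-program hourly wages) between intervention and control groups.
import random
import statistics

random.seed(1)
population = [{"baseline_wage": random.uniform(8, 20)} for _ in range(1000)]

# Random assignment: every individual has an equal chance of either group.
random.shuffle(population)
intervention, control = population[:500], population[500:]

def outcome(person, trained):
    # Hypothetical data-generating process: training adds ~$1.50/hour.
    return person["baseline_wage"] + (1.5 if trained else 0.0) + random.gauss(0, 2)

treated_outcomes = [outcome(p, True) for p in intervention]
control_outcomes = [outcome(p, False) for p in control]

impact = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated program impact: ${impact:.2f}/hour")
# A quasi-experimental design would instead construct the comparison group
# by matching nonparticipants on observables such as wage or education level.
```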
States most often provided job-specific training for workers. States reported spending a total of $278 million to provide these training and employment placement services. Some states established their programs as a way to address a variety of specific workforce and economic issues, such as chronic shortages of skilled workers. Twenty-three states reported using a variety of employer taxes in 2002 to fund employment placement and training services to address specific workforce issues (see fig. 2). These states reported spending a total of $278 million on their workforce programs. Expenditures in 2002 varied dramatically from state to state, ranging from $100,000 in Kansas to over $84 million in California (see fig. 3). In 18 of the states, employer tax revenues completely funded these employment and training programs, while in 3 of the states employer tax revenues made up at least 50 percent of the funding for these programs. Only 1 state reported that employer tax dollars constituted less than 50 percent of its program’s funds. (For more information on individual state employment placement and training program budgets in 2002, see app. II.) States used various types of employer taxes to fund employment placement and training services (see table 2). Eleven states reported using a UI tax offset. Eight states funded their programs through a separate state employer tax. For example, Delaware employers were taxed $12.75 for the first $8,500 of each employee’s annual salary. Similarly, Massachusetts employers were taxed up to $8.10 per employee annually. Five states used UI penalty and interest funds. One state, California, reported combining funds from more than one employer tax source and funded its program through revenues generated by a UI tax offset and a separate state employer tax of up to $7 per employee. (For more information on the total funds collected by states through these employer taxes in 2002, see app. II.) California was the first state to use employer taxes for employment placement and worker training in 1982, and other states have followed suit (see fig. 4). In addition to California, 6 other states started using employer taxes to fund employment placement and training services by the end of the 1980s. New Hampshire most recently started to use these tax revenues to fund its program in 2001. Texas is the only state in our survey of programs operating in 2002 that has since terminated its worker training program. Some states established their programs as a way to address a variety of specific workforce and economic issues, such as chronic shortages of skilled workers. For example, Louisiana used $1.3 million to create an emergency medical services training program at a local community college after one of the state’s largest providers of paramedics and emergency medical care staff reported needing to hire most of its staff from out of state due to a lack of qualified workers. Similarly, to increase the supply of elder care providers, California funded training for certified nurses’ assistants so that they could become vocational nurses. In addition, other states noted that their employment placement and training programs address service and eligibility gaps in federally funded workforce programs. For example, Rhode Island officials said that because federal funds could not be used to provide training to employed workers prior to the passage of WIA, their employer tax-funded program provided employers with training funds specifically to improve employed worker skills.
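The per-employee taxes cited above translate into straightforward arithmetic, sketched below. The Delaware figure of $12.75 on the first $8,500 of annual salary is equivalent to a 0.15 percent rate on that wage base, and the Massachusetts figure is treated as a flat annual amount of up to $8.10 per employee; prorating below the wage base is an assumption made for illustration, not a reading of either state's statute.

```python
# Worked sketch of the per-employee employer taxes cited in the text.
DE_WAGE_BASE = 8_500
DE_RATE = 12.75 / DE_WAGE_BASE   # 0.15% on the first $8,500 of wages
MA_PER_EMPLOYEE = 8.10           # flat annual maximum per employee

def delaware_tax(annual_wages):
    # Assumes proration for employees earning less than the wage base.
    return round(min(annual_wages, DE_WAGE_BASE) * DE_RATE, 2)

def massachusetts_tax(headcount):
    return round(headcount * MA_PER_EMPLOYEE, 2)

payroll = [30_000, 8_500, 5_000]                # three employees' annual wages
print(sum(delaware_tax(w) for w in payroll))    # 33.0 (12.75 + 12.75 + 7.50)
print(massachusetts_tax(len(payroll)))          # 24.3
```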
New Jersey and Washington officials also noted that their states used employer tax funds to provide employment placement and training services that are not offered through federally funded workforce programs. Other states, such as Louisiana, used employer taxes to fund training services for individuals who do not meet the income eligibility requirements used in WIA programs.

Most states focused on certain industries, particularly manufacturing, because of their overall benefit to the state's economy. California's worker training program specifically targeted manufacturing industries because these industries tend to offer high-paying, stable employment. Other industries that were also frequently targeted for training included information; health care or social assistance; professional, scientific, or technical; and construction. Our earlier study examining how states and local areas are training employed workers found similar results: manufacturing, along with health care and social assistance, were two of the most commonly targeted economic sectors for training workers. Our survey of employer tax-funded state programs also showed that the industries least often targeted included wholesale and retail trade, finance and insurance, and accommodation and food service (see fig. 5).

States also targeted their services to certain employers as part of their workforce and economic development strategies. Over 11,000 employers were provided training services, and most states provided services for employers with 100 or fewer employees (see fig. 6). Rhode Island, for example, offered employers with 100 or fewer employees training grants of up to $10,000. Rhode Island officials said that they targeted smaller employers because these employers often do not have the resources to provide their workers with training and because smaller employers make up the majority of the companies in the state.

States provided services in a variety of ways. States reported providing worker training either directly or through grants awarded to employers or training providers. For example, Louisiana generally awarded grants in amounts that covered an employer's entire training costs. Employers could use these funds to provide training themselves, hire private training contractors, or contract with public training providers. Funded training could occur either during normal working hours or off the clock. Louisiana officials noted that they encouraged employers to use public training providers, most often the state's technical colleges. On the other hand, California required employers to contribute to training-related costs. Employers were expected to match up to 100 percent of the training grant to pay for related expenses, such as worker wages during training or training materials. Officials from California reported that most training grants were awarded contingent upon workers being trained on the job, as opposed to off the clock. States funding employment placement services, such as interview technique and resume writing workshops, provided services directly or through other service providers.

States most often reported that worker training was the primary emphasis of their employer tax-funded programs, and they spent more on worker training services than on employment placement services (see fig. 7). Fourteen states reported that worker training was the primary emphasis of their programs, and 10 of these states funded worker training exclusively.
States spent approximately $202 million on worker training services; this represents 72 percent of the total funds spent on employment placement and training services (see fig. 8). States used these funds to provide a variety of training services. For example, in Louisiana funds were used to provide training related to automobile services and repairs, welding, painting, and sandblasting. Funds were also used in Louisiana to purchase training equipment, such as a Bridge Resource Management Simulator, which was used for river navigation training. States reported providing training services to about 200,000 people and were more likely to focus on providing training services to employed workers as opposed to dislocated workers or those receiving UI benefits (see fig. 9). (For a detailed review of states' primary service focus, expenditures by service area, and the number of individuals served in 2002, see app. III.)

States were most likely to provide job-specific training, such as on new production methods and computer software; 17 states reported funding these types of services with employer tax revenues (see fig. 10). Officials from Louisiana said that they focused on job-specific training because this type of training contributes to increased worker productivity and company growth. State officials also noted that fostering company growth creates new jobs that can lower state unemployment rates. States were less likely to use employer taxes to provide nonjob-specific training, including conflict resolution, team building, or how to dress appropriately for the workplace. Twelve of the 23 states reported providing this type of training. These findings echo our previous study on worker training: states were more likely to focus state and federal funds on occupational training than on nonjob-specific training. Basic skills training, such as math, GED preparation, and English as a second language, was least often provided, with only 10 states reporting they used employer tax revenues to fund this type of training.

Fewer state employer tax-funded programs emphasized employment placement services, such as career counseling, skill assessments, and self-access employment services like Internet job listings and career planning videos. Eight states reported that employment placement was their primary focus, and 6 of these states funded employment placement services exclusively. States reported spending approximately $77 million to provide employment placement services to approximately 1.17 million individuals. Despite the fact that fewer states reported emphasizing employment placement services, the total number of individuals receiving employment placement services is approximately six times as great as the total number of individuals receiving training services. The difference in the number of people served may be attributed to the time and resource intensity of training services compared with employment placement services. For example, Louisiana awarded training grants that were up to 2 years in length. In comparison with training services, many of the employment placement services that states reported providing are far less time- and resource-intensive.
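As a quick consistency check on the spending and service figures above, the following back-of-envelope sketch uses only numbers reported in this section; the small gap between $202 million plus $77 million ($279 million) and the $278 million total reflects the states' own rounding.

```python
# Dollars in millions, as reported by the 23 states.
total_spending = 278
training_spending = 202
placement_spending = 77

print(training_spending / total_spending)   # ~0.727, i.e., about 72 percent
print(placement_spending / total_spending)  # ~0.277, the remaining share

# Individuals served: roughly 200,000 received training, while about
# 1.17 million received employment placement services.
print(1_170_000 / 200_000)                  # ~5.85, roughly six times as many
```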
Twenty-one of the 23 states with employment placement and training programs funded through employer taxes reported some coordination with federal workforce programs in 2002. The most common coordination activity reported by states was the joint promotion of state and federally funded workforce programs through outreach or referrals (see fig. 11). These promotion activities occurred in various ways. For example, in California, a local workforce investment board and its one-stop center hired staff to make cold calls to companies to advertise the benefits of participating in the state-funded training program. In Louisiana, on the other hand, state officials provided information packets to employers about how to upgrade their employees' skills or fill job openings using state and federally funded workforce programs.

In addition, many states reported that they coordinated with federal workforce programs by sharing technical assistance and administrative resources. Technical assistance involves the exchange of program information to improve program practices. For example, in California, staff from both state and federally funded workforce programs worked together on a task force and provided each other with technical assistance to improve services to small businesses. Sharing administrative resources, on the other hand, can involve activities such as using a common management information system, or sharing office space or staff. In Rhode Island, for example, staff at the local workforce investment boards were responsible for administering some of the training grants funded by the state program. Fewer states reported co-funding employment and training services or jointly developing policies with federal workforce programs.

The number of partnerships between employer tax-funded programs and the federal workforce system varied from state to state. Some state programs coordinated with only one federal partner. For example, New Hampshire's program chose to coordinate exclusively with its state workforce investment board. Other state programs coordinated with many federal partners. For example, Delaware's program coordinated with a one-stop center, TANF, the H1-B technical skill grants program, and other federal workforce programs. (For additional information on each state's partnerships with federal programs, see app. IV.)

Although state employer tax-funded programs vary in their relationships with federal workforce programs, some patterns are evident regarding the most common federal partners. The majority of the states (19) reported coordinating with at least one one-stop center during 2002. However, several one-stop centers can operate in a state, and we do not know if states coordinated with more than one of these centers. Thus, it is difficult to gauge the degree of coordination between state-funded programs and one-stop centers within each state. Nevertheless, we do know that many states also reported coordinating with state workforce investment boards, of which there is only one per state (see fig. 12). The number of federal partners that state employer-funded programs have does not seem to be closely associated with the number of years that the state programs have operated. Although Delaware's program is older than New Hampshire's and coordinated with more federal workforce programs, this is not a consistent pattern across the country. For example, Kansas reported fewer federal partners than Louisiana, despite the fact that Kansas's employer tax-funded program has been in existence for about a decade longer.
As a result of their various partnerships with workforce investment boards and one-stop centers, almost all states reported an increase in awareness of their employer tax-funded programs. In addition, some state officials noted that coordination had improved service quality and availability. For example, officials from Michigan's and New Jersey's state programs, as well as an official from an Oregon workforce investment board, noted that co-locating staff from the state-funded programs at the one-stop centers improved the services delivered to individuals. By co-locating these programs, state officials said that they can help these individuals learn about a broader range of employment and training services and job opportunities. The Oregon official also pointed out that such co-location can reduce transportation and child care barriers for clients. Coordination can also assist states in improving services to employers. For example, a state official from Idaho reported that having staff members who are knowledgeable about both the state-funded program and WIA programs enables them to better meet the needs of employers looking to expand their businesses or move to the state. Although many state officials noted that coordination had improved services, they were less likely to report increases in funding for employment and training services as a result of these collaborative relationships (see fig. 13).

Twenty-two of the 23 states with employer-funded employment placement and training programs reported assessing the performance of their programs in 2002, though program impacts could not be determined. States reported using a range of approaches to assess their employment placement and training programs, including variations in who conducted the assessments, the data collection methods used, and the frequency of the assessments. Of the 18 states that could provide assessments of their individual employment placement and training programs, 4 assessed their programs exclusively using process-oriented indicators, while the other 14 used outcome-oriented indicators in their assessments. However, none of the states used sufficiently rigorous research designs to allow them to make conclusive statements about the impact of their programs.

Twenty-two of the 23 states with employer-funded employment placement and training programs reported assessing the performance of their programs in 2002. States reported using a variety of data collection methods for their assessments, and most states used a combination of data sources. For example, Tennessee's assessment was based on data collected from site visits to training locations and surveys administered to employers, while self-reported feedback and a fiscal audit were the data sources used for Texas's assessment. The most commonly used data sources were surveys, self-reported feedback, and on-site visits. Only 2 states relied solely on quantitative data, such as program expenditures and employment statistics. For example, Alabama used its UI wage database to track how program participants fared in finding jobs. Most states used a combination of internal and external evaluators for their assessments (see fig. 14). For example, California used both in-house program staff and external evaluators from several state universities to evaluate its program. On the other hand, 9 states used in-house evaluators exclusively, while only 1 state, Indiana, used external evaluators exclusively.
Furthermore, states conducted their assessments at varying intervals. About two-thirds of the states (14) conducted assessments on a regular cycle: annually, quarterly, or monthly. Eight states conducted assessments once training contracts were completed. For example, Tennessee sent surveys to employers once the contracts it awarded were completed.

None of the state assessments used sufficiently rigorous research designs to allow them to make conclusive statements about the impact of their programs. We asked states to provide us with copies of recent assessments of their programs. Although 5 states could not provide us with assessments of their individual employment and training programs, 18 of the 23 states shared recent assessments with us. On the basis of the 28 assessments received from 18 states, we examined indicators used by the states and found that 4 assessed their programs exclusively using process-oriented indicators. For example, Hawaii and New Hampshire collected data on the number of businesses served. Likewise, Alabama and Texas both collected data on how many people participated in their programs. Process-oriented indicators help assess a number of factors, including who uses the program, how funds are spent, and how well a program is being implemented.

Fourteen states included outcome-oriented indicators along with process-oriented indicators in their assessments, with 11 states measuring worker wages (see table 3). States also used a variety of other outcome-oriented indicators, including job placement and retention rates of trainees. Outcome-oriented indicators provide important data for states on changes in worker wages, employment stability, and advancement rates. Although 14 states used outcome-oriented indicators, none used sufficiently rigorous research designs to allow them to make conclusive statements about the impact of their programs. Twelve of the 14 states that used outcome-oriented indicators did not use comparison groups in their evaluation design. Without comparing a program's participants to similar nonparticipants, it is not possible to account for other factors, such as an upturn in the local economy, that may have influenced participant outcomes. While 2 states used comparison groups, their methodological designs did not allow for the identification of conclusive impacts because their comparison groups were not comparable enough to their participant groups.

To help close the gap between employer needs and employee skills, both federal- and state-funded workforce programs are providing skills training to employees and helping employers find qualified employees. Twenty-three states used employer taxes in 2002 to fund their own employment placement and training programs. These state programs have the potential to enhance the federal workforce system by filling service and eligibility gaps. However, the impact of these programs is unknown because states have not adequately studied them. Because these programs contribute to our nation's ability to provide comprehensive workforce development services to meet employers' needs for skilled workers, it would be helpful to have information on the impact of these efforts. The Department of Labor's Employment and Training Administration (ETA) Office of Policy Development, Evaluation and Research has valuable resources related to designing and implementing evaluations that might help address this lack of information.
Labor has established evaluation coordination liaisons in each state and, although this position was designed to help with evaluations for federal programs, the liaison may be able to direct state program administrators to resources such as ETA's recent review of alternative research methodologies. Furthermore, this liaison could help state administrators access other program evaluation expertise, such as logistical support and technical assistance.

We provided a draft of this report to the Department of Labor for its review, and Labor provided technical comments. Labor expressed an interest in state employment placement and training programs funded by employer taxes. In addition, Labor acknowledged the importance of collaboration between these state-funded programs and federally funded programs by noting that it may seek opportunities to better assist states in coordinating their programs with federal Workforce Investment Act programs.

We will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties. Copies will be made available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staffs have any questions about this report. Other major contributors to this report are listed in appendix VII.

We were asked to determine (1) how many states use employer taxes to fund their own employment placement and training programs, and what types of services they provide; (2) the extent to which these state employment placement and training programs are coordinating with federal workforce programs; and (3) how states are assessing the performance of their employment placement and training programs. To address these questions, we conducted three surveys, reviewed program evaluations, and visited 3 states. First, we surveyed all 50 states, the District of Columbia, and Puerto Rico to identify those that were using employer tax revenues to provide their own employment placement or training programs in 2002. We then conducted a follow-up survey with the 23 states that reported using employer taxes to fund their own programs during state fiscal year 2002. Specifically, we surveyed the state programs that reported receiving the largest portion of employer tax revenues collected in their state to provide employment placement and training services. To gain a perspective on service coordination with federally funded workforce programs, we surveyed staff from workforce investment boards in 6 states that began to fund their employment placement and training programs through employer taxes in the 1980s. We also requested recent assessments from the 23 states we surveyed and reviewed the assessments from the 18 states that could provide them to us. Finally, we conducted site visits to 3 states: California, Louisiana, and Rhode Island.

To determine how many states used employer taxes to fund their own employment placement and training programs, we surveyed workforce officials from the 50 states, the District of Columbia, and Puerto Rico. This structured survey was administered via e-mail and the telephone and had a 100 percent response rate. Twenty-three states reported that they used employer tax revenues to fund their own employment placement and training programs in state fiscal year 2002.
To determine the types of employment placement and training services states offered, we conducted a second survey of the 23 states that reported using employer taxes to fund these services in our first survey. This survey was designed to obtain information related to program mission, services provided, populations served (individuals and industries), budget size, and expenditures. To determine if states assessed their programs, we also asked questions related to the frequency of program performance assessments and the types of methods used to measure program performance. In addition, we requested copies of recent program assessment reports.

To determine the extent to which state programs coordinated with federal workforce programs, we also asked states to report how their employment placement and training programs worked with federal organizations and programs, including workforce investment boards, one-stop centers, TANF, Welfare-to-Work, H1-B grants, employment placement and training programs administered by the U.S. Department of Education, and other federally funded programs. In addition, we asked states how these coordination efforts affected program awareness, quality of service, available funding, and the amount of employment placement and training services available.

To gain the perspective of officials from federally funded programs on coordination with these state programs, we administered a structured telephone survey to representatives from workforce investment boards operating in 6 states that began their employer tax-funded employment placement or training programs during the 1980s (see table 4). We chose these states with older programs because we believed that they would have more established partnerships with federal programs and would be able to provide in-depth information on coordination. We surveyed representatives from 5 of the state workforce investment boards. We also surveyed a total of 10 purposively and randomly selected local workforce investment boards. At least one local workforce investment board was surveyed from each state that began operating its employer tax-funded program during the 1980s.

We included steps in both the survey data collection and data analysis stages to account for and minimize the variability that occurs when respondents interpret questions differently or have different information available to them. For example, survey specialists along with subject matter specialists designed each questionnaire, and we pre-tested each questionnaire with the appropriate target audience to ensure that questions were clear. We pre-tested our workforce investment board survey with representatives from state workforce investment boards and a local workforce investment board. We also reviewed survey questionnaire responses for consistency and in several cases contacted respondents to resolve inconsistencies. However, we did not otherwise verify the information provided in the responses. To increase our response rate for each survey, we followed up with program officials through e-mail and telephone contact. We analyzed these survey data by calculating descriptive statistics.

We reviewed recent assessments from the 18 states that could provide them to us. Two of those states shared more than one recent assessment with us, all of which we used in our analysis. The assessments we collected ranged from annual reports to budget briefings to strategic plans to external evaluations.
We analyzed these reports by performing a content analysis in which we coded the assessment indicators as outputs (process-oriented data) or outcomes (outcome-oriented data). Furthermore, when provided, we analyzed the research designs states used to assess their programs against standard evaluation research design characteristics as described by Rossi and Freeman (1993) and McBurney (1994).

We selected 3 states for site visits according to several criteria, including the year employer taxes were first used to fund their employment placement and training program. We chose states that were early, mid-, and late implementers. Site selection was also based on diverse program funding levels and geographic diversity (see table 5). In each state, we interviewed officials responsible for administering each state's employer tax-funded employment placement or training program to gain further insight into the types of services provided and populations served by these programs. To learn more about the extent to which these state-funded employment placement and training programs coordinate with federally funded workforce programs, we also interviewed officials from each state's workforce investment board. We also interviewed officials from two one-stop career centers operating in each state we visited. We purposively selected these one-stop career centers because they coordinated with employer-funded state programs.

Notes to the appendix tables: Some states' program budgets for state fiscal year 2002 were greater than the amount collected through each state's employer tax. Reasons for this disparity varied and included rollovers of unspent funds from previous years. Some states, specifically Delaware, Indiana, Michigan, Oregon, and South Dakota, also used other funding sources in addition to employer tax revenues to pay for these programs. In Indiana, Michigan, and South Dakota at least 50 percent of the funding for these programs came from employer taxes; however, in Oregon employer taxes constituted less than half of the funds used for the program, and Delaware did not specify the portion of its program budget funded by employer taxes. Our survey permitted states to report "DK" or "Don't Know," and some states did not provide applicable data. Delaware, Indiana, Michigan, and South Dakota reported that their program budgets included funds from other sources, making it difficult to isolate expenditures from their state employer tax revenues. While Oregon also reported that its program budget included funds from other sources, Oregon provided us with additional data; the Oregon expenditures included in these figures are those that were solely funded through employer tax revenues. Texas was unable to provide us with the number of individuals that received training services in 2002. Idaho reported its program emphasis as "other." Indiana and Michigan reported expenditures that exceeded their program budgets. In addition, 22 states responded to the survey questions regarding coordination with one-stop centers and the H1-B program, while 20 states responded to the question regarding coordination with local workforce investment boards. N/A signifies not applicable and is listed for Delaware, New Hampshire, South Dakota, and Wyoming; these states have a single workforce investment board, which functions as both the state and local board. The Welfare-to-Work program is a mandated partner of the one-stop centers.
While all states that reported coordinating with the Welfare-to-Work program also reported coordinating with a one-stop center, not all states that reported coordinating with a one-stop center also reported coordinating with the Welfare-to-Work program. States had the option to list multiple programs under both the "Department of Education Employment and Training" category and the "Other Federal Employment and Training" category. For the "Department of Education" category, states noted programs such as Adult Education and Literacy, and Vocational Education. For the "Other Federal Employment and Training" category, programs ranged from Veterans' Employment and Training Service to Job Corps. Both categories included some programs that are mandated one-stop partners. Several states reported that their assessments were conducted by in-house program staff and contract recipient staff, and one state used in-house program staff, external evaluators, and contract recipient staff. Michigan's performance assessments were conducted against agreed-upon goals and objectives for each of the program's local areas. New Jersey is the only state that reported it did not regularly assess its program in 2002.

Irene J. Barnett and Holly C. Ciampi made significant contributions to this report, in all aspects of the work throughout the assignment. In addition, Debra Waterstone and Shirley Hwang contributed to the administration of our survey of state programs, and Kevin Murphy assisted in the initial planning of the assignment. Avrum Ashery, Michele Fejfar, Alison Martin, Corinna Nicolaou, Audrey Ruge, Daniel Schwimer, and Shana Wallace provided key technical assistance.

Highlights: As technological and other advances transform the U.S. economy, many of the nation's six million employers may have trouble finding employees with the skills to do their jobs well. Some experts indicate that such a skill gap already affects many employers. To help close this skill gap, both federal- and state-funded programs are providing training and helping employers find qualified employees. In 2002, the federal government spent about $12 billion on workforce programs, and there are various studies on these programs. States also raised revenues in 2002, from taxes levied on employers, to fund their own workforce programs. However, little is known about these state programs. GAO was asked to provide information on how many states use these employer taxes to fund their own employment placement and training programs, what services are provided, the extent to which these state programs coordinate with federal programs, and how states assess the performance of these programs. Twenty-three states reported using employer tax revenues in 2002 to fund their own employment placement and training programs, and states most often provided job-specific training for workers. States used various types of employer taxes and reported spending a total of $278 million to address state-specific workforce issues. States invested in a variety of industries, but manufacturing was the most frequently targeted. Most states with employment placement and training programs funded through employer taxes reported some coordination with federal workforce programs in 2002.
States were most likely to coordinate with federal workforce programs by jointly promoting programs through outreach and referrals. According to most state officials, coordination with federal workforce programs raised awareness of their state-funded programs. Some state officials also reported that coordination improved the quality and availability of services. Twenty-two of the 23 states reported assessing the performance of their programs in 2002. However, none used sufficiently rigorous research designs to allow them to make conclusive statements about the impact of their programs, such as their effect on worker wages or company earnings. Because these programs contribute to our nation's ability to provide comprehensive workforce development services to meet employers' needs for skilled workers, it would be helpful to have information on the impact of these efforts. The Department of Labor has valuable resources that might help states evaluate the impact of their programs.
The federal adoption tax credit was first authorized in the Small Business and Job Protection Act of 1996, which provided for a nonrefundable credit for adoption expenses, not to exceed $5,000, or $6,000 for children with special needs. Special needs children are defined as those children who, a state has determined, cannot or should not be returned to a parent and who, using specified criteria, the state can reasonably assume will not be adopted without state assistance. Parents of adoptive children with special needs are also eligible for direct assistance under Title IV-E of the Social Security Act. Although the federal Department of Health and Human Services (HHS) oversees state administration of the payments for direct adoption assistance, the state agencies designate which children are considered to have special needs. State adoption agency managers provide guidance to adoptive families on how to manage the adoption process and frequently receive inquiries about documentation and other administrative requirements. Documentation certifying adoptions varies from state to state. In its oversight role for state adoption programs, HHS provides information to states through guidance and technical assistance. It also provides information to states and families on adoption-related issues through websites.

When the credit was first enacted in 1996, families that had qualifying expenses greater than the maximum limit for the credit could carry over that amount and claim those expenses for up to 5 years. Also, the law phased out the credit for taxpayers above an upper income limit (which was $182,520 in adjusted gross income for tax year 2010). Families adopting non-special needs children can claim only the amount of documented qualified expenses up to the maximum limit. However, since 2002, families adopting special needs children have been able to claim the maximum tax credit without having to document adoption expenses. For tax years 2010 and 2011, the Patient Protection and Affordable Care Act (PPACA) of 2010 made the adoption credit refundable and set the maximum credit at $13,170 for 2010, with the maximum amount for 2011 indexed for inflation. The credit is scheduled to revert to a nonrefundable credit with a $10,000 maximum for tax year 2012. For 2013 and beyond, the credit will be available only for special needs adoptions and may only be claimed for qualified expenses incurred up to a maximum of $6,000. See appendix I for detailed information on adoption tax credit legislation. (A worked sketch of these rules appears at the end of this background discussion.)

Since the original provision was adopted in 1996, taxpayers have claimed about $4.28 billion in adoption tax credits. For tax year 2010, taxpayers filed almost 100,000 returns claiming the credit, with over $1.2 billion claimed as of August 2011. Figure 1 shows the total number of claimants and amount of claims for each year since 1998.

We have previously reported that refundable tax credits have presented a challenge to the Internal Revenue Service (IRS). Because taxpayers can claim refundable credits in excess of their tax liability, those attempting to commit fraud may file false claims in efforts to get improper payments from the Treasury. For example, IRS has had to deal with fraudulent claims and improper payments involving the Earned Income Tax Credit and First-Time Homebuyer Credit (FTHBC), both of which are refundable. In such cases, Congress and IRS have taken steps to reduce the amount of fraud and improper payments while trying to minimize the number of returns that need to be audited.
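Returning to the credit mechanics summarized above, the following is a minimal sketch of the tax year 2010 rules as stated in this section. It deliberately omits the AGI phase-out above $182,520 (whose schedule is not reproduced here) and the carryover rules, and the family figures in the usage examples are hypothetical.

```python
def adoption_credit_2010(qualified_expenses, special_needs, tax_liability):
    """Sketch of the 2010 adoption credit rules described above.

    Returns (credit, tax_offset, refund). Simplified: the phase-out for
    higher-income taxpayers and the carryover rules are omitted.
    """
    MAX_CREDIT = 13_170  # tax year 2010 maximum

    if special_needs:
        # Special needs adoptions may claim the full maximum without
        # documenting expenses.
        credit = MAX_CREDIT
    else:
        credit = min(qualified_expenses, MAX_CREDIT)

    # For 2010-2011 the credit is refundable: any amount beyond the
    # filer's tax liability is paid out as a refund.
    refund = max(credit - tax_liability, 0)
    tax_offset = credit - refund
    return credit, tax_offset, refund

# Hypothetical family: $9,000 of documented expenses, $4,000 liability.
print(adoption_credit_2010(9_000, special_needs=False, tax_liability=4_000))
# -> (9000, 4000, 5000): $4,000 offsets tax; $5,000 arrives as a refund.

# Hypothetical special needs adoption with the same liability.
print(adoption_credit_2010(0, special_needs=True, tax_liability=4_000))
# -> (13170, 4000, 9170)
```

Under the nonrefundable rules scheduled to return for 2012, the refund component in this sketch would instead be zero, which is why refundability matters most to families whose tax liability is smaller than the credit.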
Audits are reviews of taxpayers' records to determine if they paid the correct amount of taxes. As we have also previously reported, audits are costly to IRS and can create delays in delivering refunds to taxpayers because, in some cases, IRS holds the portion of the refund being audited until the audit is complete. As an alternative to the standard audit process, Congress can approve math error authority (MEA) to allow IRS to automatically deny claims, without doing an audit, in cases where the taxpayer did not provide required documentation. In 2009, Congress approved MEA for the FTHBC, which helped significantly reduce improper payments. Having MEA allows IRS to automatically deny credit claims in instances where IRS can tell with virtual certainty that the taxpayer did not provide all of the required information and allows IRS to devote costly audit resources to other priorities. In cases where taxpayers disagree with the credit disallowance, they may request an abatement.

After the changes to the adoption tax credit took effect in 2010, IRS adopted a compliance strategy to minimize improper payments and maximize accurate returns. This strategy included the following major elements:

- Communicating and reaching out to taxpayers, tax professionals, Congress, the states, and adoption organizations, with the objective of conveying information about changes in the law and documentation requirements.
- Requiring taxpayers claiming the credit to submit documentation that the adoption of the child for whom the credit was being claimed was already completed or in progress (an adoption order or decree for a completed adoption, or a home study, placement agreement, hospital or court document, or lawyer's affidavit for an adoption in progress), along with IRS Form 8839.
- Requiring taxpayers claiming special needs status for their child to submit documentation from their state or local adoption authority certifying that status.
- Requiring taxpayers claiming the credit to file on paper rather than electronically so that required documentation could be included with the return.
- Screening returns for proper documentation and possible audit, as shown in figure 2.

IRS's strategy included monitoring the success of its efforts in processing tax year 2010 adoption credit claims during the 2011 filing season. IRS officials met in early October 2011 to discuss lessons learned concerning the execution of its adoption credit strategy during the 2011 filing season and to consider changes in the strategy for next year's filing season.

To inform taxpayers, paid preparers, state agencies, adoption advocacy groups, and other stakeholders about the new law and documentation requirements, IRS planned to use various means of communication, such as its website, media releases, phone forums, webinars, Twitter accounts, and YouTube recordings. IRS aimed its communications at interested parties, such as paid tax preparers and adoption advocates and agencies, through, for example, specially directed e-mails, articles in professional publications, and appearances at meetings and conferences. However, IRS missed some opportunities to communicate on matters that later became areas of concern. For example, while IRS held a webinar on the adoption tax credit for tax professionals, IRS officials reported that because of a lack of resources they canceled the single scheduled webinar for adoption agencies and organizations, which may have clarified IRS's documentation requirements for claiming the adoption credit.
In addition, IRS did not make an effort to communicate with state adoption program managers or to convey information about documentation requirements for claims involving special needs children, which could have helped state adoption managers better inform adoptive parents who asked them what documentation to provide to IRS. Because of this, according to officials from adoption advocacy groups and state adoption agencies, key information about the credit did not reach some taxpayers and stakeholders, especially concerning the requirements for certification of children with special needs. As a result, according to state adoption officials and adoption organization representatives who received calls from taxpayers, IRS sent notices to many adoptive families that their returns would be subject to audit and their refunds delayed. When adoption organizations contacted IRS, the agency acknowledged a problem with its communications and the clarity of its guidance and took some corrective steps, including placing additional information about the adoption credit and special needs documentation on its website. IRS plans to take additional steps, including revising the adoption credit claim form (Form 8839) and related instructions for the 2012 filing season. However, IRS has not indicated that it plans to target future communications specifically to state and local adoption officials.

In addition, IRS did not adequately inform its tax examiners regarding certain aspects of the adoption tax credit. In particular, IRS did not specify in its examiner training materials what documentation it required and would accept to verify that adopted children had special needs status. While, in March 2011, IRS provided examiners with some examples of state adoption assistance agreements, which certify special needs status, it did not include any such examples in its training materials, even after the materials were revised in June. According to the state adoption officials with whom we spoke, the inadequate preparation of examiners led to difficulties getting IRS to accept adoption assistance agreements as proof of special needs status. For example, in response to audits and in order to get IRS to accept documentation, adoption assistance representatives from Wisconsin had to prepare a letter certifying special needs status and provide it to the families that were waiting on refunds.

In June 2011, IRS revised its training materials and the Internal Revenue Manual to indicate that a state agreement to provide adoption assistance under Title IV-E of the Social Security Act was sufficient proof of special needs status, but it did not include examples of adoption assistance agreements in the revised materials. IRS left the question of whether certification was sufficient in the absence of such an agreement up to the examiner's judgment. According to adoption advocacy organization officials, problems persist even after the steps IRS took, with some examiners still not recognizing assistance agreements from some states as proof of special needs eligibility. Because adoption assistance agreements vary from state to state and, in some states, are executed at the county level, adoption advocacy representatives acknowledged that IRS examiners faced challenges in identifying what documentation would be acceptable as proof of special needs status in each state. IRS took some steps to clarify what constituted sufficient documentation throughout the filing season.
However, more could be done to clarify for taxpayers and IRS examiners what would be acceptable documentation, such as providing copies of acceptable adoption assistance agreements for each state in the revised training materials. Providing copies of state adoption assistance agreements would likely be relatively low cost, particularly since representatives from an adoption organization told us that they had provided IRS with agreements from about 40 states. Because of its role in overseeing state adoption agencies, HHS may also be able to aid IRS in reaching out to state adoption agencies. Further, if IRS were to provide examiners with examples of adoption assistance agreements for each state, it could also post such information on its website to help taxpayers and paid tax preparers understand what constitutes acceptable documentation. The incremental cost of providing such information would likely be negligible.

For the 2011 filing season, IRS screeners automatically directed to correspondence audit (audits conducted by mail) all returns on which taxpayers claimed the adoption tax credit and documentation was either missing or of uncertain validity. A senior IRS official acknowledged that this process resulted in a large number of adoption credit-related correspondence audits and diverted IRS resources from other more productive audits. As of August 6, 2011, IRS had sent 68 percent of the almost 100,000 returns it had processed on which taxpayers claimed adoption credits to correspondence audit. Of those returns sent to audit, 83 percent were sent because of missing documentation or documentation IRS could not determine to be valid. IRS reported that it ended up disallowing all or a portion of the credit for only about 6,000 (17 percent) of the approximately 35,000 returns on which audits have been completed, and it assessed $17.7 million in additional tax. This means that for 83 percent of adoption tax credit returns audited thus far, there was no change in the tax owed or refund due. Reducing the number of adoption tax credit audits would allow IRS to do more correspondence audits of other returns where the chance of assessing additional tax would be greater. By comparison, correspondence audits overall in 2010 resulted in additional tax being assessed 86 percent of the time, against 17 percent for the adoption credit in 2011. Further, IRS officials also told us that they had not found any fraudulent adoption tax credit claims, and there had been no referrals of adoption tax credit claims to its Criminal Investigation unit.

Through September 10, 2011, IRS used a disproportionate share of its audit resources on the adoption credit. IRS reported spending 32,000 staff days on adoption tax credit audits during the 2011 filing season. This represents about 3.5 percent of all staff days expended on initial review and correspondence audits. By comparison, the almost 100,000 returns filed on which taxpayers claimed the adoption tax credit as of August 20, 2011, represent less than one-tenth of 1 percent of all individual returns filed up to that point. According to IRS officials, data for audits completed through September 2011 show that an adoption credit correspondence audit takes, on average, 74 calendar days. This delays refunds, which, according to adoption agency officials, can create difficulties for families expecting to cover adoption costs with the refund.
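A back-of-envelope reading of the figures above, using only numbers reported in this section; the per-audit average and the implied staff-day total are our illustrative derivations, not IRS statistics:

```python
returns_claiming_credit = 100_000     # approximate returns, August 2011
share_sent_to_audit = 0.68
audits_completed = 35_000
audits_with_disallowance = 6_000
additional_tax_assessed = 17_700_000  # dollars

print(returns_claiming_credit * share_sent_to_audit)    # ~68,000 audits opened
print(audits_with_disallowance / audits_completed)      # ~0.17 -> 17% changed
print(1 - audits_with_disallowance / audits_completed)  # ~0.83 -> 83% no change
print(additional_tax_assessed / audits_completed)       # ~$506 assessed per completed audit

# 32,000 staff days was about 3.5 percent of all staff days spent on
# initial review and correspondence audits, implying a total of roughly:
print(32_000 / 0.035)                                   # ~914,000 staff days
```

Set against the 86 percent change rate for correspondence audits overall in 2010, these numbers are the core of the point that the 2011 adoption credit audits consumed a disproportionate share of resources for a comparatively small compliance yield.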
According to IRS officials, there are several options, each with advantages and disadvantages, for how returns on which adoption credits are claimed could be handled in the 2012 filing season. These include alternatives that could reduce costs and refund delays for claims submitted without any documentation, which made up 41 percent of claims processed as of August 2011, by either employing MEA or by sending a notice without an audit.

If IRS retains its 2011 strategy, it would risk again sending to audit a relatively large proportion of adoption credit claims that generate relatively low dollar amounts of assessed taxes. Doing so would likely ensure that all claims are properly documented, but it would divert IRS resources from other priorities and continue to delay refunds to taxpayers.

Alternatively, IRS could seek to obtain MEA from Congress permitting IRS to disallow the adoption tax credit without audit if a taxpayer did not supply any documentation, similar to authority granted earlier to IRS for returns on which taxpayers claimed the FTHBC. The Treasury Inspector General for Tax Administration (TIGTA) suggested to IRS in October 2010 that it seek MEA for the adoption tax credit, and IRS and Treasury Department officials considered requesting such authority prior to the 2011 filing season. However, IRS and Treasury officials determined that current compliance tools would be sufficient. As a result, they did not request the additional authority.

Finally, IRS could institute a procedure by which, immediately following initial screening of the return, it would send a letter to taxpayers who did not provide any documentation, notifying them of what documentation is needed. In this case, IRS would not disallow the credit, but would instruct the taxpayer to respond to the letter within 20 days while IRS holds the return until the taxpayer responds. If the taxpayer is able to produce adequate documentation in response to the letter, the IRS examiner initially screening the return could approve the return for processing without further audit, and taxpayers would receive refunds faster than they would if their returns were audited. However, current procedure specifies that if the taxpayer is unable to produce the requested documentation, the return would be sent for audit so that IRS can resolve the issue. Thus, if a taxpayer did not send in documentation, his or her return would also be sent to audit, possibly creating a longer delay than with IRS's current strategy, since there would be additional time spent while IRS waited for the taxpayer to send in documentation. IRS has not yet determined whether sending a letter upon initial screening would lead to a significant number of taxpayers submitting documentation after receiving the letter, thus reducing processing time and the number of audits. IRS officials told us that data from the 2011 filing season on the number of claimants who submitted documentation while undergoing a correspondence audit should help determine whether sending an initial letter after screening the return would be more effective. Table 1 summarizes the options and the potential advantages and disadvantages of each.
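To make the trade-offs concrete, the following is a minimal schematic of the three processing flows discussed above; it mirrors the options as described here, not IRS's actual workflow, and the function and disposition names are illustrative.

```python
def process_claim(has_docs, docs_valid, option, responds_to_letter=False):
    """Schematic triage of an adoption credit claim under each option.

    has_docs / docs_valid describe documentation attached to the paper
    return; option is "2011_strategy", "mea", or "letter_first".
    """
    if has_docs and docs_valid:
        return "process refund"  # all three options agree here

    if option == "2011_strategy":
        # Missing or questionable documentation goes straight to a
        # correspondence audit, and the refund is held in the meantime.
        return "correspondence audit"

    if option == "mea":
        # Math error authority lets IRS deny an undocumented claim
        # outright, without an audit; the taxpayer may seek abatement.
        if not has_docs:
            return "deny credit without audit (abatement available)"
        return "correspondence audit"  # docs present but questionable

    if option == "letter_first":
        # Request documentation and hold the return (about 20 days);
        # audit only if the taxpayer cannot substantiate the claim.
        if not has_docs and responds_to_letter:
            return "process refund after letter"
        return "correspondence audit"

    raise ValueError(f"unknown option: {option}")

# An undocumented but legitimate claim under each option, assuming the
# taxpayer would answer a documentation letter:
for opt in ("2011_strategy", "mea", "letter_first"):
    print(opt, "->", process_claim(False, False, opt, responds_to_letter=True))
# 2011_strategy -> correspondence audit (refund held, ~74 days on average)
# mea           -> deny credit without audit (abatement available)
# letter_first  -> process refund after letter
```

The open empirical question identified above is the response rate to such a letter: if few taxpayers answered within the window, the letter-first flow would mostly add waiting time ahead of the same audits.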
The adoption tax credit provides a significant source of financial assistance to adoptive families. In part because of the amount of money at stake and the potential for improper payments, IRS developed a strategy for reviewing claims and administering the credit and devoted significant resources to ensuring compliance. However, in implementing this strategy, IRS missed opportunities to clarify important information about what documentation it deemed acceptable, increasing the burden on taxpayers legitimately seeking the credit. Confusion about the documentation, combined with the process used to send returns for correspondence audits, has resulted in delayed refunds to taxpayers and the use of IRS resources that could likely be better spent elsewhere. In reviewing its strategy for the 2012 filing season, IRS has an opportunity to reduce the time and resources spent on correspondence audits of adoption tax credit claims as well as the number and length of refund delays while still maintaining a robust enforcement strategy.

For the 2012 filing season, we recommend that the Commissioner of Internal Revenue instruct appropriate officials to

- ensure that the communications effort specifically includes state and local adoption officials, and clarifies acceptable documentation for the certification of special needs adoptees;
- provide examples of adoption assistance agreements that meet the requirements for documenting special needs status, from each state and the District of Columbia, in training materials given to reviewers and examiners;
- place the agreements on its website to help taxpayers better understand what constitutes acceptable documentation; and
- determine whether requesting documentation in cases where no documentation is provided before initiating an audit would reduce the number of audits without significantly delaying refunds and, if so, implement such a strategy for the 2012 filing season.

We provided a draft of this report to the Commissioner of Internal Revenue. In written comments on a draft of this report (which are reprinted in app. II), the IRS Deputy Commissioner for Services & Enforcement agreed with our recommendations to extend outreach to state adoption managers and to determine whether requesting documentation before initiating audits would reduce the number of audits without significantly delaying refunds. However, although he agreed that reviewers and examiners should be provided examples of adoption assistance agreements, he indicated that IRS believes current examples of state adoption assistance agreements available to examiners on an internal website are sufficient to permit them to accurately evaluate adoption records. As we stated in our report, we believe making additional examples of state adoption assistance agreements available to examiners would impose minimal incremental costs. Providing additional examples would give examiners greater certainty that taxpayers submitted the correct documentation on a state-by-state basis. Doing so would also give IRS's examiners a more comprehensive list of acceptable documentation. In developing a more comprehensive set of examples for examiners, IRS could also list the states where documentation originates from the county or local level without collecting documentation from each jurisdiction.

IRS also expressed concern that posting the agreements on IRS's external website might enable unscrupulous individuals to submit fraudulent documentation in support of a false claim. We understand this possibility; however, any claim for a tax credit must also be accompanied by proof that an adoption has taken place or is in progress, which would not be available on the website.
Given these additional documentation requirements already in place, we believe that the benefits of making state assistance agreements available to adoptive parents on the IRS website outweigh the risks.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

As shown in table 2, an adoption credit has existed since 1996. The credit has been expanded several times since 1996 and was made refundable for tax years 2010 and 2011. However, for tax year 2012 the credit is nonrefundable with a reduced maximum, and it reverts to the 1996 law (a nonrefundable maximum of $6,000 for special needs adoptions only) for tax year 2013 and thereafter.

In addition to the contact named above, Joanna Stamatiades, Assistant Director; Steven J. Berke; Abbie David; David Fox; Tom Gilbert; Inna Livits; Kirsten Lauber; and Sabrina Streagle made key contributions to this report.

Highlights: The federal adoption tax credit, established in 1996, was amended in 2010. These amendments included making the credit refundable (meaning taxpayers could receive payments in excess of their tax liability) and increasing the maximum allowable credit to $13,170 of qualified adoption expenses for tax year 2010. As of August 20, 2011, taxpayers filed just under 100,000 returns, claiming about $1.2 billion in adoption credits. Following these changes, the Internal Revenue Service (IRS) developed a strategy for processing adoption credit claims. GAO was asked to (1) describe IRS's strategy for ensuring compliance with the adoption credit for the 2011 filing season, (2) assess IRS's related communication with taxpayers and stakeholders, and (3) assess its processing and audit of claims. To conduct its analysis, GAO analyzed IRS data and documents, interviewed IRS officials, observed IRS examiners, and interviewed other stakeholders. IRS's strategy for ensuring taxpayer compliance with the adoption credit included the following: (1) Communicating and reaching out to taxpayers and other stakeholders, including tax professionals and adoption organizations, about new requirements. (2) Requiring taxpayers seeking the adoption credit to submit proof of a completed or in-progress adoption with their return. Because taxpayers claiming the credit for a special needs child (meaning that a state determined the child cannot or should not be returned to a parent, and using specified criteria, the state can reasonably assume that the child will not be adopted without state assistance) are allowed to claim the full credit without providing documentation of adoption expenses, they also needed to provide documentation certifying the special needs status of the child.
(3) Requiring that returns and supporting documentation be filed on paper. (4) Automatically sending returns with missing or invalid documentation for correspondence audits (audits that IRS conducts by mail).

To inform taxpayers, paid preparers and other stakeholders about new adoption credit requirements, IRS used various tools including its website, Twitter accounts, and YouTube recordings. However, IRS did not make a specific effort to communicate or convey information about documentation requirements for special needs children to state adoption managers, who administer state adoption programs. Further, IRS did not specify in training materials for its audit examiners what documentation was required to prove special needs status. IRS later revised its training materials to say that a state adoption assistance agreement (an agreement between the state and adoptive parents) was sufficient proof but did not provide samples of such agreements in the materials or place any on its website. As a consequence, taxpayers submitted a majority of returns with either no documentation or insufficient documentation. As of August 2011, 68 percent of the nearly 100,000 returns on which taxpayers claimed the adoption credit were sent to correspondence audit. However, of the approximately 35,000 returns on which audits have been completed as of August, IRS only assessed additional tax about 17 percent of the time. The equivalent rate for all correspondence audits in 2010 was 86 percent. The time it has taken IRS to audit these predominantly legitimate adoption credit claims has resulted in considerable delays in the payment of the related refunds.

For the 2012 filing season, IRS has options that might allow it to reduce the number of costly audits and issue refunds faster while still maintaining a robust enforcement strategy. One option is for IRS to immediately send a letter to taxpayers who submit returns without any documentation requesting it before initiating an audit. This could potentially reduce the number of audits and delayed refunds, but IRS has not yet determined the extent of this impact. IRS officials acknowledged that data from the 2011 filing season experience should allow them to determine whether sending an initial letter requesting documentation would be more effective than initiating a correspondence audit.

GAO recommends that IRS communicate with state and local adoption officials, provide examiners with examples of adoption assistance agreements, place the agreements on its website, and determine whether sending a letter before initiating an audit would reduce the need for audits. IRS generally agreed with three of GAO's recommendations, but had concerns that placing sample agreements on its website may enable fraud. However, since other proof of adoption must accompany a tax credit claim, GAO believes the benefits of making these agreements available to adoptive parents outweigh the risks.
For 16 years, DOD's supply chain management processes, previously identified as DOD inventory management, have been on our list of high-risk areas needing urgent attention because of long-standing systemic weaknesses that we have identified in our reports. We initiated our high-risk program in 1990 to report on government operations that we identified as being at high risk for fraud, waste, abuse, and mismanagement. The program serves to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. The department's inventory management of supplies in support of forces was one of the initial 14 operational areas identified as high risk in 1990 because, over the previous 20 years, we had issued more than 100 reports dealing with specific aspects and problems in DOD's inventory management. These problems included excess inventory levels, inadequate controls over items, and cost overruns. As a result of this work, we had suggested that DOD take some critical steps to correct the problems identified. Since then, our work has shown that the problems adversely affecting supply support to the warfighter—such as requirements forecasts, use of the industrial base, funding, distribution, and asset visibility—were not confined to the inventory management system, but also involved the entire supply chain. In 2005, we modified the title for this high-risk area from "DOD Inventory Management" to "DOD Supply Chain Management." In the 2005 update, we noted that during Operation Iraqi Freedom, some of the supply chain problems included backlogs of hundreds of pallets and containers at distribution points, millions of dollars spent in late fees to lease or replace storage containers because of distribution backlogs and losses, and shortages of such items as tires and radio batteries. Removal of the high-risk designation is considered when legislative and agency actions, including those in response to our recommendations, result in significant and sustainable progress toward resolving a high-risk problem. Key determinants include a demonstrated strong commitment to and top leadership support for addressing problems, the capacity to do so, a corrective action plan that provides for substantially completing corrective measures in the near term, a program to monitor and independently validate the effectiveness of corrective measures, and demonstrated progress in implementing corrective measures. Last year, with the encouragement of OMB, DOD developed a plan for improving supply chain management that could reduce its vulnerability to fraud, waste, abuse, and mismanagement and place it on the path toward removal from our list of high-risk areas. This plan, initially released in July 2005, contains 10 initiatives proposed as solutions to address the root causes of problems DOD identified in the areas of forecasting requirements, asset visibility, and materiel distribution. By committing to improve these three key areas, DOD has focused its efforts on the areas we frequently identified as impeding effective supply chain management. For each of the initiatives, the plan contains implementation milestones that are tracked and updated monthly. Since October 2005, DOD has continued to make progress implementing the initiatives in its supply chain management improvement plan, but it will be several years before the plan can be fully implemented. 
Progress has been made in implementing several of the initiatives, including the Joint Regional Inventory Materiel Management, Readiness Based Sparing, and Defense Transportation Coordination initiatives. For example: Within the last few months, through its Joint Regional Inventory Materiel Management initiative, DOD has begun to streamline the storage and distribution of defense inventory items on a regional basis, in order to eliminate duplicate materiel handling and inventory layers. Last year, DOD completed a pilot for this initiative in the San Diego region and, in January 2006, began a similar transition for inventory items in Oahu, Hawaii. Readiness Based Sparing, an inventory requirements methodology that the department expects to enable higher levels of readiness at equivalent or reduced inventory costs using commercial off-the-shelf software, began pilot programs in each service in April 2006. Finally, in May 2006, the U.S. Transportation Command held the presolicitation conference for its Defense Transportation Coordination Initiative, a long-term partnership with a transportation management services company that is expected to improve the predictability, reliability, and efficiency of DOD freight shipping within the continental United States. DOD has sought to demonstrate significant improvement in supply chain management within 2 years of the plan's inception in 2005; however, the department may have difficulty meeting its July 2007 goal. Some of the initiatives are still being developed or piloted and have not yet reached the implementation stage, others are in the early stages of implementation, and some are not scheduled for completion until 2008 or later. For example, according to the DOD supply chain management improvement plan, the contract for the Defense Transportation Coordination Initiative is scheduled to be awarded during the first quarter of fiscal year 2007, followed by a 3-year implementation period. The War Reserve Materiel Improvements initiative, which aims to more accurately forecast war reserve requirements by using capability-based planning and incorporating lessons learned in Operation Iraqi Freedom, is not scheduled to begin implementing an improved requirements forecasting process for consumable items as a routine operation until October 2008. The Item Unique Identification initiative, which involves marking personal property items with a set of globally unique data elements to help DOD track items during their life cycles, will not be completed until December 2010 under the current schedule. While DOD has generally stayed on track, it has reported some slippage in meeting scheduled milestones for certain initiatives. For example, a slippage of 9 months occurred in the Commodity Management initiative because additional time was required to develop a departmentwide approach. This initiative addresses the process of developing a systematic procurement approach to the department's needs for a group of items. Additionally, the Defense Transportation Coordination Initiative experienced a slippage in holding the presolicitation conference because defining requirements took longer than anticipated. Given the long-standing nature of the problems being addressed, the complexities of the initiatives, and the involvement of multiple organizations within DOD, we would expect to see further milestone slippage in the future. 
In our October testimony, we also identified challenges to implementation such as maintaining long-term commitment to the initiatives and ensuring sufficient resources are obtained from the organizations involved. Although the endorsement of DOD's plan by the Under Secretary of Defense for Acquisition, Technology, and Logistics is evidence of a strong commitment to improve DOD's supply chain management, DOD will have to sustain this commitment as it goes forward in implementing this multiyear plan while also engaged in departmentwide business transformation efforts. Furthermore, the plan was developed at the Office of the Under Secretary of Defense level, whereas most of the people and resources needed to implement the plan are under the direction of the military services, DLA, and other organizations such as U.S. Transportation Command. Therefore, it is important for the department to obtain the necessary resource commitments from these organizations to ensure the initiatives in the plan are properly supported. While DOD has incorporated several broad performance measures in its supply chain management improvement plan, the department continues to lack outcome-focused performance measures for many of the initiatives. Therefore, it is difficult to track and demonstrate DOD's progress toward improving its performance in the three focus areas of requirements forecasting, asset visibility, and materiel distribution. Performance measures track an agency's progress toward goals, provide information on which to base organizational and management decisions, and are important management tools for all levels of an agency, including the program or project level. Outcome-focused performance measures show results or outcomes related to an initiative or program in terms of its effectiveness, efficiency, impact, or all of these. To track progress toward goals, effective performance measures should have a clearly apparent or commonly accepted relationship to the intended performance or should be reasonable predictors of desired outcomes; should not be unduly influenced by factors outside a program's control; should measure multiple priorities, such as quality, timeliness, outcomes, and cost; should sufficiently cover key aspects of performance; and should adequately capture important distinctions between programs. Performance measures enable the agency to assess accomplishments, strike a balance among competing interests, make decisions to improve program performance, realign processes, and assign accountability. While it may take years before the results of programs become apparent, intermediate measures can be used to provide information on interim results and show progress towards intended results. In addition, when program results could be influenced by external factors, intermediate measures can be used to identify the programs' discrete contribution to the specific result. For example, DOD could show near-term progress by adding intermediate measures for the DOD supply chain management improvement plan, such as outcome-focused performance measures for the initiatives or for the three focus areas. DOD's supply chain management improvement plan includes four high-level performance measures that are being tracked across the department, but these measures do not necessarily reflect the performance of the initiatives or explicitly relate to the three focus areas. DOD's supply chain materiel management regulation requires that functional supply chain metrics support at least one enterprise-level metric. 
In addition, while not required by the regulation, the performance measures DOD has included in the plan are not explicitly linked to the three focus areas, and it has not included overall cost metrics that might show efficiencies gained through supply chain improvement efforts. The four measures are as follows:
Backorders—number of orders held in an unfilled status pending receipt of additional parts or equipment through procurement or repair.
Customer wait time—number of days between the issuance of a customer order and satisfaction of that order.
On-time orders—percentage of orders that are on time according to DOD's established delivery standards.
Logistics response time—number of days to fulfill an order placed on the wholesale level of supply from the date a requisition is generated until the materiel is received by the retail supply activity.
The plan also identifies fiscal year 2004 metric baselines for each of the services, DLA, and DOD overall, and specifies annual performance targets for these metrics for use in measuring progress. For example, one performance target for fiscal year 2005 was to reduce backorders by 10 percent from the fiscal year 2004 level. Table 1 shows each performance measure with the associated fiscal year 2005 performance targets and actuals and whether the target was met. As table 1 shows, DOD generally did not meet its fiscal year 2005 performance targets. However, the impact on the supply chain of implementing the initiatives contained in the plan will not likely be reflected in these high-level performance metrics until the initiatives are broadly implemented across the department. In addition, the high-level metrics reflect the performance of the supply chain departmentwide and are affected by other variables; therefore, it will be difficult to determine if improvements in the high-level performance metrics are due to the initiatives in the plan or other variables. For example, implementing Radio Frequency Identification—technology consisting of active or passive electronic tags that are attached to equipment and supplies being shipped from one location to another and enable shipment tracking—at a few sites at a time has only a very small impact on customer wait time. However, variables such as natural disasters, wartime surges in requirements, or disruption in the distribution process could affect that metric. DOD's plan lacks outcome-focused performance metrics for many of the specific initiatives. We noted this deficiency in our prior testimony, and since last October, DOD has not added outcome-focused performance metrics. DOD also continues to lack cost metrics that might show efficiencies gained through supply chain improvement efforts, either at the initiative level or overall. In total, DOD's plan continues to identify a need to develop outcome-focused performance metrics for 6 initiatives, and 9 of the 10 initiatives lack cost metrics. For example, DOD's plan shows that it expects to have radio frequency identification technology implemented at 100 percent of its U.S. and overseas distribution centers by September 2007, but noted that it has not yet identified additional metrics that could be used to show the impact of implementation on expected outcomes, such as receiving and shipping timeliness, asset visibility, or supply consumption data. 
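To make the baseline-and-target mechanics above concrete, the short sketch below checks an actual value against a baseline-relative target such as the plan's fiscal year 2005 goal of reducing backorders by 10 percent from the fiscal year 2004 level. Only the 10 percent reduction target comes from the plan; the baseline and actual figures here are invented for illustration.

```python
def target_met(baseline, actual, pct_change, lower_is_better=True):
    """Check a baseline-relative performance target.

    pct_change is the targeted fractional change from the baseline,
    e.g. -0.10 for "reduce backorders by 10 percent."
    """
    target = baseline * (1 + pct_change)
    return actual <= target if lower_is_better else actual >= target

# Hypothetical figures: a fiscal year 2004 baseline of 1,000,000 backorders
# and a fiscal year 2005 actual of 950,000 (only a 5 percent reduction).
print(target_met(1_000_000, 950_000, -0.10))  # False: 950,000 exceeds the 900,000 target
```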
Two other examples of initiatives lacking outcome-focused performance measures are War Reserve Materiel, discussed earlier, and Joint Theater Logistics, which is an effort to improve the ability of a joint force commander to execute logistics authorities and processes within a theater of operations. Although the plan contains some performance metrics, many have not been fully defined or are intended to show the status of a project. Measures showing project status are useful and may be most appropriate for initiatives in their early stages of development, but such measures will not show the impact of initiatives on the supply chain during or after implementation. DOD officials noted that many of the initiatives in the supply chain management improvement plan are in the early stages of implementation and that they are working to develop performance measures for them. For example, an official involved with the Joint Theater Logistics initiative stated that the processes necessary for each joint capability needed to be defined before performance metrics could be developed. The recently issued contract solicitation for the Defense Transportation Coordination Initiative contains a number of performance measures, such as on-time pickup and delivery, damage-free shipments, and system availability, although these measures are not yet included in DOD's supply chain management improvement plan. Additionally, we observed that DOD's plan does not identify departmentwide performance measures in the focus areas of requirements forecasting, asset visibility, and materiel distribution. Therefore, it currently lacks a means to track and assess progress in these areas. Although DOD has made efforts to develop supply chain management performance measures for implementation across the department, DOD has encountered challenges in obtaining standardized, reliable data from noninteroperable systems. The four high-level performance measures in DOD's plan were defined and developed by DOD's supply chain metrics working group. This group includes representatives from the services, DLA, and the U.S. Transportation Command, and meets monthly under the direction of the Office of the Under Secretary of Defense. For example, the working group developed a common definition for customer wait time, which was included in DOD guidance. The DOD Inspector General has a review underway to validate the accuracy of customer wait time data and expects to issue a report on its results later this summer. One of the challenges the working group faces in developing supply chain performance measures is the ability to pull standardized, reliable data from noninteroperable information systems. For example, the Army currently does not have an integrated method to determine receipt processing for Supply Support Activities, which could affect asset visibility and distribution concerns. Some of the necessary data reside in the Global Transportation Network while other data reside in the Standard Army Retail Supply System. These two databases must be manually reviewed and merged in order to obtain the information for accurate receipt processing performance measures. DOD recognizes that achieving success in supply chain management is dependent on developing interoperable systems that can share critical supply chain data. 
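The manual review-and-merge step described above is, in effect, a join of shipment and receipt records across two systems. The sketch below shows the shape of that reconciliation under assumed record layouts; the field names and document numbers are illustrative only, since the actual Global Transportation Network and Standard Army Retail Supply System schemas are not described in this testimony.

```python
from datetime import date

# Assumed layouts: each system keyed by a shared document number, with an
# ISO-format ship or receipt date. Real GTN and SARSS records differ.
gtn_shipments = {"W90ABC-0001": "2006-04-02", "W90ABC-0002": "2006-04-05"}
sarss_receipts = {"W90ABC-0001": "2006-04-09"}

def receipt_processing_days(shipments, receipts):
    """Pair each shipment with its receipt to support a receipt processing
    metric; unmatched shipments surface as visibility gaps (None)."""
    merged = {}
    for doc_num, shipped in shipments.items():
        received = receipts.get(doc_num)
        if received is None:
            merged[doc_num] = None  # still in transit, or visibility lost
        else:
            elapsed = date.fromisoformat(received) - date.fromisoformat(shipped)
            merged[doc_num] = elapsed.days
    return merged

print(receipt_processing_days(gtn_shipments, sarss_receipts))
# {'W90ABC-0001': 7, 'W90ABC-0002': None}
```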
The Business Management Modernization Program, one of the initiatives in DOD's supply chain improvement plan that has been absorbed into the Business Transformation Agency, is considered to be a critical enabler that will provide the information technology underpinning for improving supply chain management. As part of this initiative, DOD issued an overarching business enterprise architecture and an enterprise transition plan for implementing the architecture. We previously reported that Version 3.1 of the business enterprise architecture reflects steps taken by DOD to address some of the missing elements, inconsistencies, and usability issues related to legislative requirements and relevant architecture guidance, but additional steps are needed. For example, we said that the architecture does not yet include a systems standards profile to facilitate data sharing among departmentwide business systems and promote interoperability with departmentwide information technology infrastructure systems. Furthermore, we also stated that the military services' and defense agencies' architectures are not yet adequately aligned with the departmental architecture. DOD has multiple plans aimed at improving aspects of logistics, including supply chain management, but it is unclear how all these plans are aligned with one another. In addition to the supply chain management improvement plan, current DOD plans that address aspects of supply chain management include DOD's Logistics Transformation Strategy, Focused Logistics Roadmap, and Enterprise Transition Plan; and DLA's Transformation Roadmap. In December 2004, DOD issued its Logistics Transformation Strategy. The strategy was developed to reconcile three logistics concepts—force-centric logistics enterprise, sense and respond logistics, and focused logistics—into a coherent transformation strategy. The force-centric logistics enterprise is OSD's midterm concept (2005-2010) for enhancing support to the warfighter and encompasses six initiatives, one of which includes "end-to-end distribution." Sense and respond logistics is a future logistics concept developed by the department's Office of Force Transformation that envisions a networked logistics system that would provide joint strategic and tactical operations with predictive, precise, and agile support. Focused logistics, a concept for force transformation developed by the Joint Chiefs of Staff, identifies seven key joint logistics capability areas such as Joint Deployment/Rapid Distribution. In September 2005, DOD issued its Focused Logistics Roadmap, also referred to as the "As Is" roadmap. It documents logistics-enabling programs and initiatives directed toward achieving focused logistics capabilities. It is intended to provide a baseline of programs and initiatives for future capability analysis and investment. Seven of the 10 initiatives in the DOD supply chain management improvement plan and some of the systems included in the initiative to modernize the department's business systems—under the Business Transformation Agency—are discussed in the Focused Logistics Roadmap. In September 2005, DOD's Enterprise Transition Plan was issued as part of the Business Management Modernization Program. The Enterprise Transition Plan is the department's plan for transforming its business operations. One of the six DOD-wide priorities contained in the Enterprise Transition Plan is Materiel Visibility, which is focused on improving supply chain performance. 
The Materiel Visibility priority is defined as the ability to locate and account for materiel assets throughout their life cycle and provide transaction visibility across logistics systems in support of the joint warfighting mission. Two of the key programs targeting visibility improvement are Radio Frequency Identification and Item Unique Identification, which also appear in the supply chain management improvement plan. The Defense Logistics Agency's Fiscal Year 2006 Transformation Roadmap contains 13 key initiatives underway to execute DLA's role in DOD's overarching transformation strategy. The majority of the initiatives are those that affect supply chain management, and several are found in DOD's supply chain management improvement plan. For example, the Integrated Data Environment, Business Systems Modernization, and Reutilization Modernization Program initiatives found in DLA's Transformation Roadmap are also in the department's supply chain management improvement plan under the initiative to modernize the department's business systems. These plans were developed at different points in time, for different purposes, and in different formats. Therefore, it is difficult to determine how all the ongoing efforts link together to sufficiently cover requirements forecasting, asset visibility, and materiel distribution and whether they will result in significant progress toward resolving this high-risk area. Moreover, DOD's supply chain management improvement plan does not account for initiatives outside OSD's direct oversight that may have an impact on supply chain management. The initiatives chosen for the plan were joint initiatives under the oversight of OSD in the three focus areas of requirements forecasting, asset visibility, and materiel distribution. However, the U.S. Transportation Command, DLA, and the military services have ongoing and planned supply chain improvement efforts in those areas that are not included in the plan. For example, the U.S. Transportation Command's Joint Task Force – Port Opening initiative seeks to improve materiel distribution by rapidly extending the distribution network into a theater of operations. Furthermore, DLA is implementing a National Inventory Management Strategy, which is an effort to merge distinct wholesale and retail inventories into a national inventory, provide more integrated management, tailor inventory to services' requirements, and reduce redundant inventory levels. Another example is the Army's efforts to field two new communications and tracking systems, the Very Small Aperture Terminal and the Mobile Tracking System, to better connect logisticians on the battlefield and enable them to effectively submit and monitor their supply requisitions. DOD officials told us they would be willing to consider adding initiatives that impact the three focus areas. Until DOD clearly aligns the supply chain management improvement plan with other department plans and ongoing initiatives, supply chain stakeholders will not have a comprehensive picture of DOD's ongoing efforts to resolve problems in the supply chain. Although we are encouraged by DOD's planning efforts, DOD lacks a comprehensive, integrated, and enterprisewide strategy to guide logistics programs and initiatives. In the past, we have emphasized the need for an overarching logistics strategy that will guide the department's logistics planning efforts. 
Without an overarching logistics strategy, the department will be unable to most economically and efficiently support the needs of the warfighter. To address this concern and guide future logistics programs and initiatives, DOD is in the process of developing a new strategic plan—the "To Be" roadmap. This plan is intended to portray where the department is headed in the logistics area, how it will get there, and how it will monitor progress toward achieving its objectives, as well as to institutionalize a continuous assessment process that links ongoing capability development, program reviews, and budgeting. According to DOD officials, the initiatives in the supply chain management improvement plan will be incorporated into the "To Be" logistics roadmap. The roadmap is being developed by a working group representing the four services, DLA, the U.S. Transportation Command, the U.S. Joint Forces Command, the Joint Staff, the Business Transformation Agency, and the Office of the Secretary of Defense. The working group reports to a Joint Logistics Group composed of one-star generals and their equivalents representing these same organizations. Additionally, the Joint Logistics Board, Defense Logistics Board, and the Defense Logistics Executive (the Under Secretary of Defense for Acquisition, Technology, and Logistics) would provide continuous feedback and recommendations for changes to the roadmap. Regarding performance measures, the roadmap would link objective, quantifiable, and measurable performance targets to outcomes and logistics capabilities. The first edition of the "To Be" roadmap is scheduled for completion in February 2007, in conjunction with the submission of the President's Budget for Fiscal Year 2008. Updates to the roadmap will follow on an annual basis. Efforts to develop the "To Be" roadmap show promise. However, until it is completed, we will not be able to assess how the roadmap addresses the challenges and risks DOD faces in its supply chain improvement efforts. DOD faces significant challenges in improving supply chain management over the coming years. As it develops its "To Be" roadmap for logistics, DOD would likely benefit from including outcome-focused performance measures demonstrating near-term progress in the three focus areas of requirements forecasting, asset visibility, and materiel distribution. With outcome-focused performance measures, DOD will be able to show results in these areas that have been long identified as systemic weaknesses in the supply chain. While we recognize the challenge of developing outcome-focused performance measures at the department level, DOD could show near-term progress with intermediate measures. These measures could include outcome-focused measures for each of the initiatives or for the three focus areas. To be most effective, the roadmap also would reflect the results of analysis of capability gaps between its "As Is" and "To Be" roadmaps, as well as indicate how the department intends to make this transition. DOD would also benefit by showing the alignment among the roadmap, the supply chain management improvement plan, and other DOD strategic plans that address aspects of supply chain management. Clearer alignment of the supply chain management improvement plan with other department plans and ongoing initiatives could provide greater visibility and awareness of actions DOD is taking to resolve problems in the supply chain. 
In the long term, however, a plan alone will not resolve the problems that we have identified in supply chain management. Actions must result in significant progress toward resolving a high-risk problem before we will remove the high-risk designation. Mr. Chairman and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact me at 202-512-8365 or [email protected]. Individuals making contributions to this testimony include Tom Gosling, Assistant Director; Michael Avenick; Susan Ditto; Marie Mak; Thomas Murphy; Janine Prybyla; and Matthew Spiers. The 10 initiatives in DOD's supply chain management improvement plan are summarized below:
Radio Frequency Identification: technology consisting of active or passive electronic tags that are attached to equipment and supplies that are shipped from one location to another and enable shipment tracking.
Item Unique Identification: marking of personal property items with a machine-readable Unique Item Identifier, or set of globally unique data elements, to help DOD value and track items throughout their life cycle.
Joint Regional Inventory Materiel Management: streamlining of the storage and distribution of materiel within a given geographic area in order to eliminate duplicate materiel handling and inventory layers.
Readiness Based Sparing: an inventory requirements methodology that produces an inventory investment solution enabling higher levels of readiness at an equal or lower cost.
War Reserve Materiel Improvements: an improved war reserve requirements forecasting process.
Commodity Management: the process of developing a systematic procurement approach to the entire usage cycle of a group of items.
Joint Theater Logistics: improving the ability of a joint force commander to execute logistics authorities and processes within a theater of operations.
Joint Deployment and Distribution Operations Center: provides Combatant Commands with a joint theater logistics capability (supply, transportation, and distribution) for command and control of forces and materiel moving into and out of the theater.
Defense Transportation Coordination Initiative: a long-term partnership with a coordinator of transportation management services to improve the reliability, predictability, and efficiency of DOD materiel moving within the continental United States by all modes.
Business Management Modernization Program: a departmentwide initiative to advance business transformation efforts, particularly with regard to business systems modernization. | The Department of Defense (DOD) maintains a military force with unparalleled logistics capabilities, but it continues to confront decades-old supply chain management problems. The supply chain can be the critical link in determining whether our frontline military forces win or lose on the battlefield, and the investment of resources in the supply chain is substantial. Because of weaknesses in DOD's supply chain management, this program has been on GAO's list of high-risk areas needing urgent attention and transformation since 1990. Last year, DOD developed a plan to resolve its long-term supply chain problems in three focus areas: requirements forecasting, asset visibility, and materiel distribution. In October 2005, GAO testified that the plan was a good first step. 
GAO was asked to provide its views on DOD's progress toward (1) implementing the supply chain management improvement plan and (2) incorporating performance measures for tracking and demonstrating improvement, as well as to comment on the alignment of DOD's supply chain management improvement plan with other department logistics plans. This testimony is based on prior GAO reports and ongoing work in this area. It contains GAO's views on opportunities to improve DOD's ability to achieve and demonstrate progress in supply chain management. Since October 2005, DOD has continued to make progress implementing the 10 initiatives in its supply chain management improvement plan, but it will take several years to fully implement these initiatives. DOD's stated goal for implementing its plan is to demonstrate significant improvement in supply chain management within 2 years of the plan's inception in 2005, but the time frames for substantially implementing some initiatives are currently 2008 or later. While DOD has generally stayed on track, it has reported some slippage in the implementation of certain initiatives. Factors such as the long-standing nature of the problems, the complexities of the initiatives, and the involvement of multiple organizations within DOD could cause the implementation dates of some initiatives to slip further. DOD has incorporated several broad performance measures in its supply chain management improvement plan, but it continues to lack outcome-focused performance measures for many of the initiatives. Therefore, it is difficult to track and demonstrate progress toward improving the three focus areas of requirements forecasting, asset visibility, and materiel distribution. Although DOD's plan includes four high-level performance measures that are being tracked across the department, these measures do not necessarily reflect the performance of the initiatives and do not relate explicitly to the three focus areas. Further, DOD's plan does not include cost metrics that might show efficiencies gained through supply chain improvement efforts. In their effort to develop performance measures for use across the department, DOD officials have encountered challenges such as a lack of standardized, reliable data. Nevertheless, DOD could show near-term progress by adding intermediate measures. These measures could include outcome-focused measures for each of the initiatives or for the three focus areas. DOD has multiple plans aimed at improving aspects of logistics, including supply chain management, but it is unclear how these plans are aligned with one another. The plans were developed at different points in time, for different purposes, and in different formats, so it is difficult to determine how all the ongoing efforts link together to sufficiently cover requirements forecasting, asset visibility, and materiel distribution and whether they will result in significant progress toward resolving this high-risk area. Also, DOD's supply chain management improvement plan does not account for initiatives outside the direct oversight of the Office of the Secretary of Defense, and DOD lacks a comprehensive strategy to guide logistics programs and initiatives. DOD is in the process of developing a new plan, referred to as the "To Be" roadmap, for future logistics programs and initiatives. 
The roadmap is intended to portray where the department is headed in the logistics area, how it will get there, and what progress is being made toward achieving its objectives, as well as to link ongoing capability development, program reviews, and budgeting. However, until it is completed, GAO will not be able to assess how the roadmap addresses the challenges and risks DOD faces in its supply chain improvement efforts. |
The federal financial regulators are responsible for examining and monitoring the safety and soundness of approximately 22,000 financial institutions, which, together, manage more than $13 trillion in assets and hold over $7 trillion in deposits. Specifically: The Federal Reserve System is responsible for overseeing the Year 2000 activities of 1,618 entities, including 990 state member banks, 349 bank holding companies, 221 foreign bank offices, and 9 Edge Act corporations. According to FRS, these organizations have assets totaling over $7.7 trillion and hold deposits of about $3.6 trillion. FRS’ oversight responsibilities also include 49 service providers, software vendors, and data centers. The Office of the Comptroller of the Currency supervises about 2,600 federally-chartered, national banks and federal branches and agencies of foreign banks, which comprise about $3.5 trillion in assets. OCC is also responsible for monitoring the Year 2000 activities of 109 service providers, software vendors, and data centers. The Federal Deposit Insurance Corporation supervises about 6,000 state-chartered, nonmember banks, which are responsible for about $1 trillion in assets. It is also the deposit insurer of approximately 11,000 banks and savings institutions that have insured deposits totaling upwards of $3.8 trillion. FDIC also oversees 146 service providers, software vendors, and data centers. The Office of Thrift Supervision oversees about 1,200 savings and loan associations (thrifts), which primarily emphasize residential mortgage lending and are an important source of housing credit. These institutions hold approximately $737 billion in assets. The National Credit Union Administration supervises about 7,000 federally-chartered credit unions. It is also the deposit insurer of more than 11,000 federally- and state-chartered credit unions whose assets total about $371 billion. Credit unions are nonprofit financial cooperatives organized to provide their members with low-cost financial services. As part of their goal of maintaining safety and soundness, these regulators are responsible for assessing whether the institutions they supervise are adequately mitigating the risks associated with the century date change. To ensure consistent and uniform supervision on the Year 2000 issue, the five regulators are coordinating their supervisory efforts through FFIEC. Additionally, under the auspices of the FFIEC, the regulators are jointly examining 28 major data service providers and software vendors that support the financial institutions. Each of the regulators, except NCUA, is responsible for a specified number of these joint examinations. Addressing the Year 2000 problem in time has been, and will continue to be, a tremendous challenge for financial institutions and their regulators. Virtually every insured financial institution relies on computers—either their own or those of a contractor—to process and update records and for a variety of other functions. To complicate matters, most institutions have computer systems that interface with systems belonging to payment systems partners, such as wire transfer systems, automated clearinghouses, check clearing providers, credit card merchant and issuing systems, automated teller machine (ATM) networks, electronic data interchange systems, and electronic benefits transfer systems. 
Because of these interdependencies, financial institutions' systems are also vulnerable to failure caused by incorrectly formatted data provided by other systems that are not Year 2000 compliant. In addition, financial institutions depend on public infrastructure, such as telecommunications and power networks, to carry out critical business operations, such as making electronic fund transfers, verifying credit card transactions, and making ATM transactions. However, these networks are also susceptible to Year 2000 problems. Thus, financial institutions must also assess the Year 2000 readiness efforts of their local utilities and telecommunications providers. Financial institutions and their regulators cannot afford to neglect any of these issues. If they do, Year 2000 failures could disrupt vital bank operations and harm customers. For example, loan systems could make errors in calculating interest and amortization schedules. In turn, these miscalculations may expose institutions and data centers to financial liability and loss of customer confidence. Moreover, ATMs may malfunction, performing erroneous transactions or refusing to process transactions. Other supporting systems critical to the day-to-day business of financial institutions may be affected as well. For example, telephone systems, vaults, and security and alarm systems could malfunction. Since June 1996, when their Year 2000 oversight efforts began, the five financial institution regulators have taken a number of important steps to alert financial institutions of the risks associated with the Year 2000 problem and to assess what these institutions are doing to mitigate the risks. To raise awareness, the regulators issued letters to financial institutions that described the Year 2000 problem and special risks facing financial institutions and recommended approaches to planning and managing effective Year 2000 programs. In addition, the regulators provided extensive guidance to assist financial institutions in critical Year 2000 tasks, including guidance on (1) contingency planning, (2) mitigating risks associated with critical bank customers (e.g., large borrowers and capital providers), (3) mitigating risks of using data processing servicers and software vendors to perform financial institution operations, (4) testing to demonstrate Year 2000 compliance, (5) establishing effective Year 2000 customer awareness programs, and (6) addressing Year 2000 risks associated with fiduciary services. The regulators have also undertaken extensive outreach efforts—such as establishing Internet sites and conducting seminars nationwide—to raise the Year 2000 awareness of banking industry personnel and the public. To assess what institutions are doing to mitigate Year 2000 risks, the regulators performed a high-level and detailed assessment of bank, thrift, and credit union efforts. The high-level assessment consisted primarily of administering FFIEC's Year 2000 questionnaire via telephone and on-site visits and was completed during November and December 1997. During this assessment, the regulators examined whether institutions had established a structured process for correcting the problem; estimated the costs of remediation; prioritized systems for correction; and determined the Year 2000 impact on other internal systems important to day-to-day operations, such as vaults, security and alarm systems, elevators, and telephones. 
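The mechanism behind the failure scenarios described above is worth making concrete. Systems that stored years as two digits compute nonsense intervals once a date crosses into 2000, which is precisely how a loan system can corrupt an interest or amortization calculation. The following sketch is a generic illustration of the bug and one common remediation (date windowing); it is not code from any examined institution.

```python
def years_elapsed_two_digit(start_yy, end_yy):
    """Broken interval arithmetic typical of noncompliant systems:
    years stored as two digits, so '00' is read as 1900, not 2000."""
    return end_yy - start_yy

def years_elapsed_windowed(start_yy, end_yy, pivot=50):
    """A common remediation, date windowing: two-digit years 00-49 map to
    2000-2049 and 50-99 map to 1950-1999."""
    expand = lambda yy: 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

# A loan opened in 1998 ('98') and evaluated in 2000 ('00'):
print(years_elapsed_two_digit(98, 0))  # -98 years elapsed, per the buggy logic
print(years_elapsed_windowed(98, 0))   #   2 years, the intended result

# Simple interest on a $10,000 loan at 8 percent shows the consequence.
principal, rate = 10_000.0, 0.08
print(principal * rate * years_elapsed_two_digit(98, 0))  # -78400.0
print(principal * rate * years_elapsed_windowed(98, 0))   #   1600.0
```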
The more detailed Year 2000 assessment involved on-site visits to the institutions and was completed in June 1998. These examinations focused on whether institutions were appropriately planning for the Year 2000 effort and addressing risks posed by service providers, software vendors, and large customers. They also began to assess whether institutions had effective customer awareness programs. These exams found that the majority of financial institutions were doing an adequate job in addressing the Year 2000 issue. Specifically, according to the regulators, they found that of the over 22,000 institutions with examinations completed by June 30, 1998, almost 93 percent were doing a satisfactory job of addressing their Year 2000 problems, about 7 percent needed improvement, and 0.3 percent were performing unsatisfactorily. The regulators plan to follow up with additional on-site visits that will address the unique—and more difficult—challenges that the testing and implementation phases will present. These exams, which the regulators plan to complete by March 31, 1999, are expected to identify institutions that are experiencing difficulties completing their testing and implementation programs or have not developed sufficient contingency plans. In addition to overseeing the efforts of financial institutions to address the Year 2000 problem, the federal regulators must also ensure that their internal computer systems are Year 2000 compliant. This is especially critical for FRS, which operates systems on which the financial institutions heavily rely. For example, according to FRS, the Fedwire system was used by financial institutions in 1997 to make about 89 million funds transfers valued at $288 trillion. While systems belonging to the other regulators are not critical to the day-to-day operation of the banking industry, they support the essential business functions of the regulators, such as personnel management, accounting, budget, travel, and program tracking. As noted earlier, we are currently reviewing FRS' efforts to remediate its internal systems and plan to report the results of our review separately. However, we have reviewed the efforts of the four other regulators to remediate their systems and found that they have taken many actions that are crucial to successfully dealing with the Year 2000 problem. For example, they have established a good foundation for managing their remediation efforts by developing Year 2000 strategies, designating Year 2000 program managers, inventorying systems, and developing tracking systems to monitor progress and prepare status reports. They are acting or have acted to ensure that core business operations are not disrupted by identifying core business operations, assessing the potential impact of Year 2000-induced failures (including public infrastructure failures) on those operations, prioritizing conversion efforts, and developing contingency plans. The regulators also have identified their data exchanges and are working with their data exchange partners to prevent noncompliant systems from introducing Year 2000 errors into compliant systems. Finally, to ensure that their systems are adequately tested, the regulators have developed Year 2000 testing guidance and have begun or are well underway in testing their systems. In September 1998, each of the regulators reported to the Congress that they were on schedule to meet the Office of Management and Budget's March 1999 implementation date for their mission-critical systems. 
Their data indicate that, with continued good management, the regulators should be able to meet this milestone. While the regulators have been working hard to achieve industrywide compliance and remediate their own systems, we have identified concerns and problems with their efforts during the course of our reviews. Specifically, we found that all the regulators were late in initiating their Year 2000 oversight of institutions and in issuing key guidance on business continuity and contingency planning, corporate borrowers, and service providers and software vendors. We also found that the regulators had not assessed whether they had enough information system examiners to adequately oversee the Year 2000 efforts of the institutions they supervise. In addition to these general concerns, we also found problems specific to each agency. However, the regulators have been quick to respond to our recommendations and to implement corrective actions. For example, in October 1997, we made recommendations to NCUA to help it ensure that credit unions were adequately mitigating Year 2000 risks. Among other things, NCUA responded by (1) implementing a quarterly reporting process whereby credit unions would communicate the status of their remediation efforts between examinations, (2) developing a formal, detailed plan for contingencies, (3) instructing credit union management to have their auditors address Year 2000 issues in the scope of their work, and (4) hiring additional contractor support to assist with exams of credit unions and service providers. We also made specific recommendations to FDIC to (1) work with the other FFIEC members to enhance the content of their assessment work program, (2) ensure that adequate resources are allocated to complete the corporation's internal systems' assessment by the end of March 1998, and (3) develop contingency plans for each of FDIC's mission-critical systems and core business processes. Similarly, we recommended that OTS develop contingency plans for each of its mission-critical systems and core business processes. Again, both agencies agreed with our recommendations and took immediate steps to implement them. Despite the regulators' strong efforts to assess industrywide compliance and remediate their own systems, several complex and difficult challenges remain in achieving Year 2000 compliance. First: the challenge of time. Regardless of good practices and good progress, less than 16 months remain until the century date change. With over 22,000 institutions, vendors, and service providers to examine and monitor, the regulators face a formidable task in continuing to provide adequate coverage. Second: the challenge to provide effective oversight during the later and more complicated stages of the remediation effort. By December 1998, FFIEC expects financial institutions to be well into the testing phase. As noted in our Year 2000 Test Guide, because Year 2000 conversions often involve numerous, large interconnecting systems with many external interfaces and extensive supporting technology infrastructures, testing needs to be approached in a structured and disciplined fashion. According to OCC, for many banks, testing will consume upwards of 60 percent of the cost and time spent to correct Year 2000 problems. 
Nevertheless, the regulators have a small window of opportunity for assessing institutions during this critical phase: they generally expect to complete on-site exams of service providers, software vendors, and institutions with in-house or complex systems by December 31, 1998, and plan to complete on-site exams for the remaining institutions by March 31, 1999. At the same time, however, they have a limited number of technical examiners to conduct these reviews. OCC, for example, has 79 full-time bank information system examiners responsible for providing assistance to 575 safety and soundness examiners and for examining institutions with complex systems. FRS currently has 73 such examiners—31 full time and 42 part time—who conduct complex examinations while supporting 106 other examiners during their exams. Because of the limited number of technical examiners and the large number of entities to be examined, we have recommended to the regulators that they (1) determine how many technical examiners are needed to adequately oversee the Year 2000 efforts of the institutions, data processing servicers, and software vendors and (2) develop a strategy for obtaining these resources and maintaining their availability. Third: the challenge to develop an effective strategy for dealing with institutions that by all indications will not be viable by the Year 2000. The regulators have not yet (1) defined the criteria for finding that a financial institution will not be viable due to Year 2000 problems or (2) developed a strategy for when and how they will handle such troubled institutions. The regulators have been working on these issues. For example, they are querying data centers and service providers on their capacity to service new clients due to Year 2000 problems and putting together a "bidders list" for Year 2000 purposes that will include institutions that have demonstrated well-managed Year 2000 programs and are capable of processing acquisitions of other institutions. However, none of these efforts have been finalized. Developing these plans promptly is paramount to minimizing the risk of not having enough time to implement a viable plan for dealing with institutions that cannot successfully complete their efforts. Fourth: the challenge to protect U.S. banks from international Year 2000 risks. U.S. banks have many external links to financial institutions and markets around the world. For example, overseas financial institutions and markets depend on our electronic funds transfer systems and clearinghouses. Unfortunately, it has been reported that many countries are well behind their U.S. counterparts in Year 2000 remediation. For example, a survey of 15,000 companies in 87 countries by the Gartner Group found that nations including Germany, India, Japan, and Russia were 12 months or more behind the United States. Given the fact that many countries are behind schedule in addressing the Year 2000 problem, it will be essential for regulators to (1) ensure that financial institutions have adequately identified and mitigated their international risks and (2) prepare contingency plans for handling disruptions caused by problems abroad. Fifth: the challenge to protect financial institutions from Year 2000 disruptions caused by their telecommunications and power service providers. The most vital business operations of financial institutions—ATM transactions, fund transfers, and credit card authorizations, for example—are dependent on telecommunications and power networks. 
In fact, according to the President's National Security Telecommunications Advisory Committee, the financial services industry may be the telecommunications industry's most demanding customer: over $2 trillion is sent by international wire transfers every day. In June 1998 testimony on the Year 2000 readiness of the telecommunications sector, we reported that most major telecommunications carriers expect to achieve Year 2000 network compliance by December 1998. For a few, though, the planned date for compliance is later than December 1998, or we were unable to obtain this information. The carriers are working to test their networks, but until the tests are completed and the results made public, it is not clear to what degree—if any—financial institutions and the public will be subject to telecommunications disruptions. The situation for electric power companies is similar. At the request of the Department of Energy, the North American Electric Reliability Council (NERC) is assessing the readiness of the critical systems within the nation's electric infrastructure. The Secretary of Energy requested that NERC provide written assurances by July 1, 1999, that critical power systems have been tested, and that such systems will be ready to operate in the year 2000. Until such assessments are completed and results made public, the precise status of this sector is not completely clear. Because of the uncertain nature of electric power and telecommunications Year 2000 readiness, it is essential for regulators and institutions to plan for contingencies should there be service disruptions due to the Year 2000 date change. In conclusion, the regulators have made significant progress in assessing the readiness of member institutions; raising awareness on important issues such as contingency planning, testing, and dealing with service providers, software vendors, and large customers; and remediating their own systems. Looking forward, the challenge is for the regulators to make the best use of limited resources in the time remaining and to ensure that they are ready to take swift actions to address those institutions that falter in the later phases of correction and to address disruptions caused by international and public infrastructure Year 2000 failures. To their credit, the regulators have spent the last year developing a picture of how their industry stands, including which institutions are at high risk of not being ready for the millennium and require immediate attention, which service providers and vendors are likely to be problematic, and the extent of problems remaining. In addition, they have undertaken efforts to determine what conditions will constitute Year 2000 failures and what actions can be taken to quickly address failures. Nevertheless, more needs to be done to prepare for major potential disruptions caused by domestic and international financial institutions, as well as power and telecommunications companies, experiencing processing problems at the century date rollover. Accordingly, we are now recommending that the regulators, working through the FFIEC, (1) finalize by December 1, 1998, their plans for dealing with institutions that will not be viable due to Year 2000 problems and (2) develop contingency plans that address international and public infrastructure Year 2000 risks. Mr. Chairman, this concludes my statement. We welcome any questions that you or Members of the Committee may have. 
| Pursuant to a congressional request, GAO discussed the year 2000 risks facing financial institutions and the federal regulators, focusing on the: (1) actions taken to date to mitigate these risks; and (2) challenges that lay ahead as institutions and regulators face the more complex and difficult activities of their year 2000 programs. GAO noted that: (1) the regulators have made good progress in assisting banks, thrifts, and credit unions in their year 2000 efforts as well as identifying which institutions are at a high risk of not remediating their systems on time; (2) they have also recognized the risk and potential impact of year 2000-induced system failures on their own core business processes and have implemented rigorous efforts to mitigate these risks; (3) nevertheless, there are still serious challenges ahead that could threaten the financial institution industry's ability to successfully meet the year 2000 deadline; (4) with less than 16 months remaining before the year 2000 deadline, the regulators are faced with the daunting task of overseeing the efforts of more than 22,000 financial institutions, service providers, and software vendors with a relatively finite number of examination personnel; (5) in the next few months, many of these entities will be undertaking the most complex and difficult stage of correction--testing; (6) it will be necessary for regulators to ensure that they have enough technical resources to review institution efforts during this crucial phase; (7) beginning in early 1999, regulators will be pressed to take quick actions against institutions that cannot successfully complete their year 2000 efforts; (8) but before they can do so, they need to determine what will constitute financial institution year 2000 failures, what regulatory options can be effectively used, and when they would be implemented; (9) the U.S. 
economy is intrinsically linked to the international banking and financial services sector, yet many countries and their financial institutions are reported to be behind schedule in addressing their year 2000 problem; (10) working with their foreign counterparts, the regulators will need to identify and define global year 2000 risks and work cooperatively to mitigate those risks; (11) the regulators will also need to develop contingency plans in case there are unforeseen problems; (12) financial institution credit, deposit, and payment flows are critically dependent on public infrastructure such as telecommunications and electric power networks; (13) however, until critical readiness assessments and tests are completed and made available to the public, it is not clear whether there will be uninterrupted telecommunications and power service; and (14) regulators will need to develop contingency plans that anticipate year 2000-related disruptions in the public infrastructure. |
Traditionally, real estate brokers have offered a full, “bundled” package of services to sellers and buyers, including marketing the seller’s home or assisting in the buyer’s search, holding open houses and showing homes, preparing offers and assisting in negotiations, and coordinating the steps to close the transaction. Because real estate transactions are complex and infrequent for most people, many consumers benefit from a broker’s specialized knowledge of the process and of local market conditions. Still, some consumers choose to complete real estate transactions without a broker’s assistance, including those who sell their properties on their own, or “for-sale-by-owner.” For many years, the industry has used a commission-based pricing model, with sellers paying a percentage of the sales price as a brokerage fee. Brokers acting for sellers typically invite other brokers to cooperate in the sale of the property and offer a portion of the total commission to whoever produces the buyer. Agents involved in the transaction may be required to split their shares of the commission with their brokers. Under this approach, brokers and agents receive compensation only when sales are completed. In recent years, alternatives to this traditional full-service brokerage model have become more common, although industry analysts and participants told us that these alternatives still represented a small share of the overall market in 2005. Discount full-service brokerages charge a lower commission than the prevailing local rate, but offer a full package of services. Discount limited-service brokerages offer a limited package of services or allow clients to choose from a menu of “unbundled” services and charge reduced fees on a commission or fee-for-service basis. Most local real estate markets have an MLS that pools information about homes that area brokers have agreed to sell. Participating brokers use an MLS to “list” the homes they have for sale, providing other brokers with detailed information on the properties (“listings”), including how much of the commission will be shared with the buyer’s agent. An MLS serves as a single, convenient source of information that provides maximum exposure for sellers and facilitates the home search for buyers. Each MLS is a private entity with its own membership requirements and operating policies and procedures. According to NAR, approximately 900 MLSs nationwide were affiliated with the trade association in 2005. These NAR-affiliated MLSs are expected to follow NAR’s model guidelines for various operational and governance issues, such as membership requirements and rules for members’ access to and use of listing information. An MLS that is not affiliated with NAR is not bound by these guidelines. Individual states regulate real estate brokerage, establishing licensing and other requirements for brokers and agents. Of the two categories of state-licensed real estate practitioners, brokers generally manage their own offices, and agents, or salespeople, must work for licensed brokers. States generally require brokers to meet more educational requirements than agents, have more experience, or both. For the purposes of this statement, I will generally refer to all licensed real estate practitioners as brokers. Some economists have observed that brokers typically compete more on nonprice factors, such as service quality, than on price.
While comprehensive price data are lacking, evidence from academic literature and industry participants with whom we spoke highlights several factors that could limit the degree of price competition, including broker cooperation, largely through MLSs, which can discourage brokers from competing with one another on price; resistance from traditional full-service brokers to brokers who offer discounted prices or limited services; and state antirebate and minimum service laws and regulations, which some argue may limit pricing and service options for consumers. The real estate brokerage industry has a number of attributes that economists normally associate with active price competition. Most notably, the industry has a large number of brokerage firms and individual licensed brokers and agents—approximately 98,000 active firms and 1.9 million active brokers and agents in 2004, according to the Association of Real Estate License Law Officials. Although some local markets are dominated by one or a few large firms, market share in most localities is divided among many small firms, according to industry analysts. In addition, the industry has no significant barriers to entry, since obtaining a license to engage in real estate brokerage is relatively easy and the capital requirements are relatively small. While real estate brokerage has competitive attributes, with a large number of players competing for a limited number of home listings, much of the academic literature and some industry participants we interviewed described this competition as being based more on nonprice variables, such as quality, reputation, or level of service, than on price. One reason for this characterization is the apparent uniformity of commission rates. Comprehensive data on brokerage fees are lacking. However, past analyses and anecdotal information from industry analysts and participants indicate that, historically, commission rates were relatively uniform across markets and over time. Various studies using data from the late 1970s through the mid-1980s found evidence that the majority of listings in many communities clustered around the same rate, exactly 6 percent or 7 percent. Although these studies and observations do not indicate that there has been complete uniformity in commission rates, they do suggest that variability has been limited. Many of the industry analysts and participants we interviewed said that commissions still cluster around a common rate within most markets, and they generally cited rates of 5 percent to 6 percent as typical. Some economists have cited certain advantages to the commission-based model that is common in real estate brokerage, most notably that it provides sellers’ brokers with an incentive to get the seller the highest possible price. Moreover, uniformity in commission rates within a market at a given time does not necessarily indicate a lack of price competition. But some economists have noted that in a competitive marketplace, real estate commission rates could reasonably be expected to vary across markets or over time—that is, to be more sensitive to housing market conditions than has been traditionally observed. For example, commission rates within a market at a given time do not appear to vary significantly on the basis of the price of the home. Thus, the brokerage fee, in dollar terms, for selling a $300,000 home is typically about three times the fee for selling a $100,000 home, although the time or effort required to sell the two homes may not differ substantially.
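The proportionality point lends itself to a short illustration. The following Python sketch is purely illustrative and assumes the 6 percent rate cited above; it shows that when the commission rate does not vary with the home's price, the fee in dollars scales linearly with the sale price.

# Illustrative only: assumes the 6 percent rate cited in the text. When the
# rate does not vary with the home's price, the dollar fee scales linearly
# with the sale price.

def commission_dollars(sale_price: float, rate: float = 0.06) -> float:
    """Brokerage fee in dollars at a given commission rate."""
    return sale_price * rate

fee_small = commission_dollars(100_000)   # $6,000
fee_large = commission_dollars(300_000)   # $18,000
print(f"${fee_large:,.0f} is {fee_large / fee_small:.0f}x ${fee_small:,.0f}")
# Output: $18,000 is 3x $6,000, even though the effort required to sell
# the two homes may be similar.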
Similarly, commission rates do not appear to have changed as much as might be expected in response to rapidly rising home prices in recent years. Between 1998 and 2005, the national median sales price of existing homes, as reported by NAR, increased about 74 percent, while inflation over the same period was about 16 percent, leaving an increase of some 58 percent in the inflation-adjusted price of housing. According to REAL Trends, average commission rates among the largest brokerage firms fell from an estimated 5.5 percent in 1998 to an estimated 5.0 percent in 2005, a decrease of about 9 percent. Thus, with the increase in housing prices, the brokerage fee (in dollars) for selling a median-priced home increased even as the commission rate fell. Some economists have suggested that uniformity in commission rates can lead brokers to compete on factors other than price in order to gain market share. For example, brokers might hire more agents in an effort to win more sellers’ listings. Brokers may also compete by spending more on advertising or offering higher levels of service to attract clients. Although some of these activities can benefit consumers, some economic literature suggested that such actions lead to inefficiency because brokerage services could be provided by fewer agents or at a lower cost. To the extent that commission rates may have declined slightly in recent years, the change may be the result in part of rapidly rising home prices, which have generated higher brokerage industry revenues even with lower commission rates. However, competition from increasing numbers of discount, fee-for-service, and other nontraditional brokerage models may have also contributed to the decline. These nontraditional models typically offer lower fees, and although NAR consultants estimated that nontraditional firms represented only about 2 percent of the market in 2003, these firms may be putting some downward pressure on the fees charged by traditional brokerages. Factors related to the cooperation among brokers facilitated by MLSs, some brokers’ resistance to discounters, and consumer attitudes may inhibit price competition within the real estate brokerage industry. First, while MLSs provide important benefits to consumers by aggregating data on homes for sale and facilitating brokers’ efforts to bring buyers and sellers together, the cooperative nature of the MLS system can also in effect discourage brokers from competing with one another on price. Because participating in an MLS, where one exists, is widely considered essential to doing business, brokerage firms may have an incentive to adopt practices that comply with MLS policies and customs. As previously noted, MLSs facilitate cooperation in part by enabling brokers to share information on the portion of the commission that sellers’ brokers are offering to buyers’ brokers. In the past, some MLSs required participating brokers to charge standard commission rates, but this practice ended after the Supreme Court ruled, in 1950, that an agreement to fix minimum prices was illegal under federal antitrust laws. Subsequently, some MLSs adopted suggested fee schedules, but this too ended after DOJ brought a series of antitrust actions in the 1970s alleging that this practice constituted price fixing. Today, MLSs no longer establish standard commission rates or recommend how commissions should be divided among brokers.
MLS listings do show how much sellers’ brokers will pay other brokers for cooperating in a sale, according to industry participants. When choosing among comparable homes for sale, brokers have a greater incentive—all else being equal—to first show prospective buyers homes that offer other brokers the prevailing commission rate, rather than homes that offer a lower rate. Therefore, even without formal policies to maintain uniform rates, individual brokers’ reliance on the cooperation of other brokers to bring buyers to listed properties may help maintain a standard commission rate within a local area, at least for buyers’ brokers. FTC, in a 1983 report, concluded that the cooperative nature of the industry and the interdependence among brokers were the most important factors explaining the general uniformity in commission rates that it had observed in many markets in the late 1970s. Second, traditional brokers may discourage price competition by resisting cooperation with brokers and firms whose business models depart from charging conventional commission rates, according to several industry analysts and participants with whom we spoke. A discount broker may advertise a lower commission rate to attract listings, but the broker’s success in selling those homes, and in attracting additional listings in the future, depends in part on other brokers’ willingness to cooperate (by showing the homes to prospective buyers) in the sale of those listings. Some discount full-service and discount limited-service brokerage firms we interviewed said that other brokers had refused to show homes listed by discounters. In addition, traditional brokers may in effect discourage discount brokers from cooperating in the sale of their listings by offering discounters a lower buyer’s broker commission than the prevailing rate offered to other brokers. This practice can make it more difficult for discount brokers to recruit new agents because the agents may earn more working for a broker who receives the prevailing commission from other brokers. Some traditional full-service brokers have argued that discount brokers often do less of the work required to complete the transaction and, thus, deserve a smaller portion of the seller’s commission. Representatives of discount brokerages told us they believed that reduced commission offers are in effect “punishment” for offering discounts to sellers and are intended as signals to other brokers to conform to the typical pricing in their markets. Finally, pressure from consumers for lower brokerage fees appears to have been limited, although it may be increasing, according to our review of economics literature and to several industry analysts and participants. Some consumers may accept a prevailing commission rate as an expected cost, in part because that has been the accepted pricing model for so long, and others may not realize that rates can be negotiated. Buyers may have little concern about commission rates because sellers directly pay the commissions. Sellers may be reluctant to reduce the portion of the commission offered to buyers’ brokers because doing so can reduce the likelihood that their homes will be shown. In addition, home sellers who have earned large profits as housing prices have climbed in recent years may have been less sensitive to the price of brokerage fees. 
However, some brokers and industry analysts noted that the growth of firms offering lower commissions or flat fees has made an increasing number of consumers aware that there are alternatives to traditional pricing structures and that commission rates are negotiable. Although state laws and regulations related to real estate licensing can protect consumers, DOJ and FTC have expressed concerns that laws and regulations that restrict rebates to consumers or require minimum levels of service by brokers may also unnecessarily hinder competition among brokers and limit consumer choice. As of July 2006, at least 12 states appeared to prohibit, by law or regulation, real estate brokers from giving consumers rebates on commissions or appeared to place restrictions on this practice. Proponents said such laws and regulations help ensure that consumers choose brokers on the basis of the quality of service as well as price, rather than just on the rebate being offered. Opponents of antirebate provisions argued that such restrictions serve only to limit choices for consumers and to discourage price competition by preventing brokers from offering discounts. Opponents also noted that offering a rebate is one of the few ways to reduce the effective price of buyer brokerage services, since commissions are typically paid wholly by the seller. In November 2005, DOJ and the Kentucky Real Estate Commission settled a suit in which DOJ had alleged that the commission’s administrative regulation banning rebates violated federal antitrust law. In its complaint, DOJ argued that the regulation unreasonably restrained competition to the detriment of consumers, making it more difficult for them to obtain lower prices for brokerage services. Pursuant to the approved settlement agreement, the commission put in place emergency regulations permitting rebates and other inducements as long as they are disclosed in writing. In addition, as of July 2006, 12 states appeared to be considering or to have passed legislation that requires brokers to provide a minimum level of service when they represent consumers. Such provisions generally require that when a broker agrees to act as a consumer’s exclusive representative in a real estate transaction, the broker must provide such services as assistance in delivering and assessing offers and counteroffers, negotiating contracts, and answering questions related to the purchase and sale process. Advocates of minimum service standards argued that they protect consumers by ensuring that brokers provide a basic level of assistance. Furthermore, full-service brokers argued that such standards prevent them from having to unfairly shoulder additional work when the other party uses a limited-service broker. Opponents of these standards argued that they restrict consumer choice and raise costs by impeding brokerage models that offer limited services for a lower price. Between April and November 2005, DOJ wrote to state officials in Oklahoma and New Mexico, and DOJ and FTC jointly wrote to officials in Alabama, Michigan, Missouri, and Texas discouraging adoption of these states’ proposed minimum service laws and regulations. The letters argued that the proposed standards in these states would likely harm consumers by preventing brokers from offering certain limited-service options and therefore requiring some sellers to buy brokerage services they would otherwise choose to perform themselves. They also cited a lack of evidence that consumers have been harmed by limited-service brokerage. 
Despite the concerns raised by DOJ and FTC, the governors in Alabama, Missouri, Oklahoma, and Texas subsequently signed minimum service standards into law. The Internet has increased consumers’ access to information about properties for sale and has facilitated new approaches to real estate transactions. Whether the Internet will be more widely used in real estate brokerage depends in part on the extent to which listing information is widely available. Like discount brokerages, Internet-oriented brokerage firms, especially those offering discounts, may also face resistance from traditional brokers and especially may be affected by state laws that prohibit them from offering rebates to consumers. The Internet allows consumers direct access to listing information that has traditionally been available only from brokers. Before the Internet was widely used to advertise and display property listings, MLS data (which comprise a vast majority of all listings) were compiled in an “MLS book” that contained information on the properties listed for sale with MLS-member brokers in a given area. In order to view the listings, buyers generally had to use a broker, who provided copies of listings that met the buyer’s requirements via hard copy or fax. Today, information on properties for sale—either listed on an MLS or independently, such as for-sale-by-owner properties—is routinely posted on Web sites, often with multiple photographs or virtual tours. Thus, the Internet has allowed buyers to perform much of the search and evaluation process independently, before contacting a broker. Sellers of properties can also benefit from the Internet because it can give their listings more exposure to buyers. Sellers may also use the Internet to research suitable asking prices for their homes by comparing the attributes of their houses with others listed in their areas. Although Internet-oriented brokerages and related firms represented only a small portion of the real estate brokerage market in 2005, the Internet has made different service and pricing options more widely available to consumers. Among these options are full-service and limited-service discount brokerages, information and referral companies, and alternative listing Web sites. Full-service discount brokerages offer buyers and sellers full-service real estate brokerage services but advertise lower-than-traditional commissions, for example, between 3 percent and 4.5 percent. These types of brokerages existed before widespread use of the Internet, but many have gained exposure and become more viable as a result of the Internet. In addition, by posting listings online, displaying photographs and virtual tours of homes for sale, and communicating with buyers and sellers by e-mail, some of these companies say that they have been able to cut brokerage costs. Limited-service discount brokerages provide fewer services than full-service brokerages but also charge lower commissions or offer their services for flat fees. For example, some firms charge a flat fee for marketing and advertising homes and, for additional fees, will list a property in the MLS and show the home to prospective buyers. The Internet has allowed these firms to grow in number and size in recent years, in part because they can market their services to a larger population of buyers and sellers. Information and referral companies provide resources for buyers and sellers—such as home valuation tools and access to property listings—and make referrals of those consumers to local brokers.
Some of these companies charge referral fees to brokers and then rebate a portion of that fee back to buyers and sellers. The Internet allows these companies to efficiently reach potential consumers and offer those customers services and access to brokers. Alternative listing Web sites offer alternatives to the MLS, allowing sellers who want to sell their homes themselves to advertise their properties to buyers and giving buyers another source of information on homes for sale. These alternative listing sites include the Web sites of local newspapers, Craigslist, and “for-sale-by-owner” Web sites. Several factors could limit the extent to which the Internet is used in real estate transactions. A key factor is the extent to which information about properties listed in an MLS is widely available. Currently, buyers may view MLS-listed properties on many Web sites, including broker and MLS Web sites and on NAR’s Realtor.com Web site. The real estate brokerage industry has faced controversy over the public availability of listings on the Internet and over whether brokers can restrict the display of their listings on other brokers’ Web sites. Proponents of allowing such restrictions argued that listings are the work product, and thus the property, of the selling broker, who should have control over how the listings are used. Opponents argued that such control would unfairly limit Internet-oriented brokers’ ability to provide their clients with access to MLS listings through their Web sites. Even with few restrictions on the availability of information about properties for sale, Internet-oriented brokerage firms may face other challenges. First, Internet-oriented brokers with whom we spoke described resistance, similar to that previously described, involving some traditional brokerages that refused to show the Internet-oriented brokerages’ listed properties or offered them buyers’ brokers commissions that were less than those offered to other brokers. However, the online availability of listing information may discourage such behavior by enabling buyers to more easily detect whether a broker is avoiding other brokers’ listings that are of interest. Second, some Internet-oriented companies said that state antirebate laws and regulations could affect them disproportionately, since their business models often were built around such rebates. Finally, other factors, such as the lack of a uniform technology to facilitate related processes—such as inspection, appraisal, financing, title search, and settlement—may inhibit the use of the Internet for accomplishing the full range of activities needed for real estate transactions. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact David G. Wood at (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jason Bromberg, Tania Calhoun, Julianne Stephens Dieterich, and Cory Roman. This bibliography includes articles from our review of literature on the structure and competitiveness of the residential real estate brokerage industry. Anglin, P. and R. Arnott. “Are Brokers’ Commission Rates on Home Sales Too High? A Conceptual Analysis.” Real Estate Economics, vol. 27, no. 4 (1999): 719-749. Arnold, M.A. 
“The Principal-Agent Relationship in Real Estate Brokerage Services.” Journal of the American Real Estate and Urban Economics Association, vol. 20, no. 1 (1992): 89-106. Bartlett, R. “Property Rights and the Pricing of Real Estate Brokerage.” The Journal of Industrial Economics, vol. 30, no. 1 (1981): 79-94. Benjamin, J.D., G.D. Jud and G.S. Sirmans. “Real Estate Brokerage and the Housing Market: An Annotated Bibliography.” Journal of Real Estate Research, vol. 20, no. 1/2 (2000): 217-278. ——- “What Do We Know about Real Estate Brokerage?” Journal of Real Estate Research, vol. 20, no. 1/2 (2000): 5-30. Carney, M. “Costs and Pricing of Home Brokerage Services.” AREUEA Journal, vol. 10, no. 3 (1982): 331-354. Crockett, J.H. “Competition and Efficiency in Transacting: The Case of Residential Real Estate Brokerage.” AREUEA Journal, vol. 10, no. 2 (1982): 209-227. Delcoure, N. and N.G. Miller. “International Residential Real Estate Brokerage Fees and Implications for the US Brokerage Industry.” International Real Estate Review, vol. 5, no. 1 (2002): 12-39. Epley, D.R. and W.E. Banks. “The Pricing of Real Estate Brokerage for Services Actually Offered.” Real Estate Issues, vol. 10, no. 1 (1985): 45-51. Federal Trade Commission. The Residential Real Estate Brokerage Industry, vol. 1 (Washington, D.C.: 1983). Goolsby, W.C. and B.J. Childs. “Brokerage Firm Competition in Real Estate Commission Rates.” The Journal of Real Estate Research, vol. 3, no. 2 (1988): 79-85. Hsieh, C. and E. Moretti. “Can Free Entry Be Inefficient? Fixed Commissions and Social Waste in the Real Estate Industry.” The Journal of Political Economy, vol. 111, no. 5 (2003): 1076-1122. Jud, G.D. and J. Frew. “Real Estate Brokers, Housing Prices, and the Demand for Housing.” Urban Studies, vol. 23, no. 1 (1986): 21-31. Knoll, M.S. “Uncertainty, Efficiency, and the Brokerage Industry.” Journal of Law and Economics, vol. 31, no. 1 (1988): 249-263. Larsen, J.E. and W.J. Park. “Non-Uniform Percentage Brokerage Commissions and Real Estate Market Performance.” AREUEA Journal, vol. 17, no. 4 (1989): 422-438. Mantrala, S. and E. Zabel. “The Housing Market and Real Estate Brokers.” Real Estate Economics, vol. 23, no. 2 (1995): 161-185. Miceli, T.J. “The Multiple Listing Service, Commission Splits, and Broker Effort.” AREUEA Journal, vol. 19, no. 4 (1991): 548-566. ——- “The Welfare Effects of Non-Price Competition Among Real Estate Brokers.” Journal of the American Real Estate and Urban Economics Association, vol. 20, no. 4 (1992): 519-532. Miceli, T.J., K.A. Pancak and C.F. Sirmans. “Restructuring Agency Relationships in the Real Estate Brokerage Industry: An Economic Analysis.” Journal of Real Estate Research, vol. 20, no. 1/2 (2000): 31-47. Miller, N.G. and P.J. Shedd. “Do Antitrust Laws Apply to the Real Estate Brokerage Industry?” American Business Law Journal, vol. 17, no. 3 (1979): 313-339. Munneke, H.J. and A. Yavas. “Incentives and Performance in Real Estate Brokerage.” Journal of Real Estate Finance and Economics, vol. 22, no. 1 (2001): 5-21. Owen, B.M. “Kickbacks, Specialization, Price Fixing, and Efficiency in Residential Real Estate Markets.” Stanford Law Review, vol. 29, no. 5 (1977): 931-967. Schroeter, J.R. “Competition and Value-of-Service Pricing in the Residential Real Estate Brokerage Market.” Quarterly Review of Economics and Business, vol. 27, no. 1 (1987): 29-40. Sirmans, C.F. and G.K. Turnbull. “Brokerage Pricing under Competition.” Journal of Urban Economics, vol. 41, no. 1 (1997): 102-117. Turnbull, G.K. 
“Real Estate Brokers, Nonprice Competition and the Housing Market.” Real Estate Economics, vol. 24, no. 3 (1996): 293-316. Yavas, A. “Matching of Buyers and Sellers by Brokers: A Comparison of Alternative Commission Structures.” Real Estate Economics, vol. 24, no. 1 (1996): 97-112. Yinger, J. “A Search Model of Real Estate Broker Behavior.” The American Economic Review, vol. 71, no. 4 (1981): 591-605. Zumpano, L.V. and D.L. Hooks. “The Real Estate Brokerage Market: A Critical Reevaluation.” AREUEA Journal, vol. 16, no. 1 (1988): 1-16. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Consumers paid an estimated $65.7 billion in residential real estate brokerage fees in 2005. Observing that commission rates have remained relatively uniform--regardless of market conditions, home prices, or the effort required to sell a home--some economists have questioned the extent of price competition in the residential real estate brokerage industry. Furthermore, while the Internet offers time and cost savings to the process of searching for homes, Internet-oriented brokerage firms account for only a small share of the brokerage market. This has raised concerns about potential barriers to greater use of the Internet in real estate brokerage. In this testimony, which is based on a report issued in August 2005, GAO discusses (1) factors affecting price competition in the residential real estate brokerage industry and (2) the status of the use of the Internet in residential real estate brokerage and potential barriers to its increased use. The residential real estate brokerage industry has competitive attributes, but its competition appears to be based more on nonprice factors, such as reputation or level of service, than on brokerage fees, according to a review of the academic literature and interviews with industry analysts and participants. Although comprehensive data on brokerage fees are lacking, past analyses and anecdotal information suggest that commission rates have persisted in the same range over long periods, regardless of local market conditions, housing prices, or the cost or the effort required to sell a home. One potential cause of limited price variation in the industry is the use of multiple listing services (MLS), which facilitates cooperation among brokers in a way that can benefit consumers but may also discourage participating brokers from deviating from conventional commission rates. For instance, an MLS listing gives brokers information on the commission that will be paid to the broker who brings the buyer to that property. This practice potentially creates a disincentive for home sellers or their brokers to offer less than the prevailing rate, since buyers' brokers may show high-commission properties first. In addition, some state laws and regulations may also affect price competition, such as those prohibiting brokers from giving clients rebates on commissions and those requiring brokers to provide consumers with a minimum level of service. Although such provisions can protect consumers, the Department of Justice and the Federal Trade Commission have argued that they may prevent price competition or reduce consumers' choice of brokerage services. 
The Internet has changed the way consumers look for real estate and has facilitated the growth of alternatives to traditional brokers. A variety of Web sites allows consumers to access property information that once was available only by contacting brokers directly. The Internet also has fostered the growth of nontraditional residential real estate brokerage models, including discount brokers and broker referral services. However, industry participants and analysts cited several potential obstacles to more widespread use of the Internet in real estate transactions, including restrictions on listing information on Web sites, some traditional brokers' resistance to cooperating with nontraditional firms, and certain state laws and regulations that prohibit or restrict commission rebates to consumers. |
The concept of the single audit was created to replace multiple grant audits with one audit of an entity as a whole. The single audit is an organizationwide audit that focuses on internal control and the recipient’s compliance with laws and regulations governing the federal financial assistance received. The objectives of the Single Audit Act, as amended, are to promote sound financial management, including effective internal controls, with respect to federal awards administered by nonfederal entities; establish uniform requirements for audits of federal awards administered by nonfederal entities; promote the efficient and effective use of audit resources; reduce burdens on state and local governments, Indian tribes, and nonprofit organizations; and ensure that federal departments and agencies, to the maximum extent practicable, rely upon and use audit work done pursuant to the act. We studied the single audit process, and in June 1994, we reported on financial management improvements resulting from single audits, areas in which the single audit process could be improved, and ways to maximize the usefulness of single audit reports. We recommended refinements to improve the usefulness of single audits through more effective use of single audit resources and enhanced single audit reporting, and in March 1996, we testified before this Subcommittee on the proposed refinements. Subsequently, in July 1996, the refinements to the 1984 act were enacted. The 1996 amendments were effective for audits of recipients’ fiscal years ending June 30, 1997, and after. The refinements cover a range of fundamental areas affecting the single audit process and single audit reporting, including provisions to extend the law to cover all recipients of federal financial assistance, ensure a more cost-beneficial threshold for requiring single audits, more broadly focus audit work on the programs that present the greatest financial risk to the federal government, provide for timely reporting of audit results, provide for summary reporting of audit results, promote better analyses of audit results through establishment of a federal clearinghouse and an automated database, and authorize pilot projects to further streamline the audit process and make it more useful. In June 1997, OMB issued Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations. The Circular establishes policies to guide implementation of the Single Audit Act 1996 amendments and provides an administrative foundation for uniform audit requirements for nonfederal entities that administer federal awards. OMB also issued a revised OMB Circular A-133 Compliance Supplement. The Compliance Supplement identifies for single auditors the key program requirements that federal agencies believe should be tested in a single audit and provides the audit objective and suggested audit procedures for testing those requirements. We reported in our 1994 report that the Compliance Supplement had not kept pace with changes to program requirements, and had only been updated once since it was issued in 1985. We recommended that the Compliance Supplement be updated at least every 2 years. OMB is now updating this supplement on a more regular basis. The initial Compliance Supplement for audits under the 1996 amendments was issued in June 1997. A revision was issued for June 1998 audits in May 1998, and a revision for June 1999 audits was just recently finalized.
We commend OMB for its leadership in developing and issuing the guidance and the collaborative efforts of the federal inspectors general, federal and state program managers, the state auditors, and the public accounting profession in working with OMB proactively to ensure that the guidance effectively implements the 1996 refinements. Highlighted below are several of the key refinements and some of the actions taken to implement them. The 1984 act did not cover colleges, universities, hospitals, or other nonprofit recipients of federal assistance. Instead, audit requirements for these entities were established administratively in a separate OMB audit circular, which in some ways was inconsistent with the audit circular that covered state and local governments. For example, the criteria for determining which programs received detailed audit coverage were different between the circulars. The 1996 amendments expanded the scope of the act to include nonprofit organizations. To implement the 1996 amendments, OMB combined the two audit circulars into one that provided consistent audit requirements for all recipients. The 1996 refinements and OMB Circular A-133 require a single audit for entities that spend $300,000 or more in federal awards, and exempt any entity that spends less than that amount in federal awards. Also, the threshold is based on expenditures rather than receipts. The Congress intended for the entities receiving the greatest amount of federal financial assistance disbursed each year to be audited while exempting entities receiving comparatively small amounts of federal assistance. To achieve this, a $100,000 single audit threshold was included in the 1984 act. The fixed threshold, however, did not take into account future increases in amounts of federal financial assistance. As a result, over time, audit resources were being expended on entities receiving comparatively small amounts of federal financial assistance. In 1984, we reported that setting the threshold for requiring single audits at $100,000 would result in 95 percent of all direct federal financial assistance being covered by single audits. In 1994, we reported that coverage at the same 95 percent level could be achieved with a $300,000 threshold. Also, the refinements require the Director of OMB to biennially review the threshold dollar amount for requiring single audits. The Director may adjust upward the dollar limitation consistent with the Single Audit Act’s purpose. We supported such a provision when the amendments were being considered by the Congress. Exercising this authority in the future will allow the flexibility for the OMB Director to administratively maintain the single audit threshold at a reasonable level without the need for further periodic congressional intervention. As a result of these changes, audit attention is focused more on entities receiving the largest amounts of federal financial assistance, while the audit burden is eliminated for many entities receiving relatively small amounts of assistance. For example, Pennsylvania reported that this change will still provide audit coverage for 94 percent of the federal funds spent at the local level in the state, while eliminating audit coverage for approximately 1,200 relatively small entities in the state. The 1996 amendments require auditors to use a risk-based approach to determine which programs to audit during a single audit. The 1984 act’s criteria for selecting entities’ programs for testing were based only on dollar amounts.
The 1996 amendments require OMB to prescribe the risk-based criteria. OMB Circular A-133 prescribes a process to guide auditors based not only on dollar limitations but also on risk factors associated with programs, including entities’ current and prior audit experience with federal programs; the results of recent oversight visits by federal, state, or local agencies; and the inherent risk of the program. (A simplified sketch of the threshold and risk-based selection rules appears at the end of this discussion.) For practical reasons related to the audit procurement process, OMB Circular A-133 allowed auditors to forgo using the risk criteria in the first-year audits under the 1996 amendments. Therefore, the risk-based approach will be fully implemented in the second cycle of audits under the 1996 amendments, which started with audits for fiscal years ending June 30, 1998, and is currently in progress. When fully and effectively implemented, this refinement is intended to give auditors greater freedom in targeting risky programs by allowing auditors to use their professional judgment in weighing risk factors to decide whether a higher-risk program should be covered by the single audit. Under the 1984 act, OMB guidance provided entity management with a maximum of 13 months from the close of the period audited to submit the audit report to the federal government. The 1996 refinements reduce this maximum time frame to 9 months after the end of the period audited. The amendments provide for a 2-year transition period for meeting the 9-month submission requirement. OMB’s guidelines call for the first audits subject to the revised reporting time frame to be those covering entities’ fiscal years beginning on or after July 1, 1998, and ending June 30, 1999, or after. This means that March 31, 2000, will be the first due date under the new time frame. When fully implemented, this change will improve the timeliness of single audit report information available to federal program managers who are accountable for administering federal assistance programs. The Congress and federal oversight officials will receive more current information on the recipients’ stewardship of federal assistance funds they receive. The 1996 amendments require that the auditor include in a single audit report a summary of the auditor’s results regarding the nonfederal entity’s financial statements, internal controls, and compliance with laws and regulations. This should allow recipients of single audit reports to focus on the message and critical information resulting from the audit. OMB Circular A-133 requires that a summary of the audit results be included in a schedule of findings and questioned costs. In 1994, we reported that neither the Single Audit Act nor OMB’s implementing guidance then in effect prescribed the format for conveying the results of the auditors’ tests and evaluations. At that time, we found that single audit reports contained a series of as many as eight or more separate reports, including five specifically focused on federal financial assistance, and that significant information was scattered throughout the separate reports. OMB Circular A-133 provides greater flexibility on the organization of the auditor’s reporting than was previously provided. Taking advantage of this flexibility, the American Institute of Certified Public Accountants has issued guidance for practitioners conducting single audits that allows all auditor reporting on federal assistance programs to be included in one report and a schedule of findings and questioned costs.
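The Python sketch below is a simplified illustration of the two selection rules discussed above, not Circular A-133's actual major program determination. The $300,000 expenditure threshold is the one stated in the text; the way the risk factors are combined and the large-program dollar cutoff are hypothetical, since in practice auditors weigh these factors using professional judgment.

# Simplified sketch of the single audit threshold and risk-based program
# selection. The risk factors mirror those listed in the text; the way they
# are combined here, and LARGE_PROGRAM_CUTOFF, are hypothetical.

from dataclasses import dataclass

SINGLE_AUDIT_THRESHOLD = 300_000   # based on federal awards *expended*
LARGE_PROGRAM_CUTOFF = 1_000_000   # hypothetical cutoff, for illustration only

@dataclass
class FederalProgram:
    name: str
    expenditures: float
    prior_findings: bool     # findings in current or prior audit experience
    adverse_oversight: bool  # problems noted in recent oversight visits
    inherently_risky: bool   # program characteristics that raise risk

def single_audit_required(total_federal_expenditures: float) -> bool:
    """Entities spending $300,000 or more in federal awards need a single audit."""
    return total_federal_expenditures >= SINGLE_AUDIT_THRESHOLD

def select_for_testing(program: FederalProgram) -> bool:
    """Select a program by dollars or by any of the risk factors in the text."""
    risky = (program.prior_findings or program.adverse_oversight
             or program.inherently_risky)
    return risky or program.expenditures >= LARGE_PROGRAM_CUTOFF

programs = [
    FederalProgram("Program A", 2_500_000, False, False, False),
    FederalProgram("Program B", 150_000, True, False, True),
]

print("single audit required:",
      single_audit_required(sum(p.expenditures for p in programs)))
for p in programs:
    print(p.name, "-> detailed testing:", select_for_testing(p))

Note that under this sketch, Program B is selected on risk grounds despite its small dollar size, which is the freedom the risk-based approach is intended to give auditors.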
The 1996 refinements call for single audit reports to be provided to a federal clearinghouse designated by the Director of OMB to receive the reports and to assist OMB in carrying out its responsibilities through analysis of the reports. The Bureau of the Census was identified as the Federal Audit Clearinghouse in OMB Circular A-133. In our 1994 report, we noted that data on the results of single audits were not readily accessible and discussed the benefits of compiling the results in an automated database. The clearinghouse has developed a database and is now entering data from the single audit reports it has received. As this initiative progresses, it is expected to become a valuable source of information for OMB, federal oversight officials, and others regarding the expenditure of federal assistance. The 1996 amendments allow the Director of OMB to authorize pilot projects to test ways of further streamlining and improving the usefulness of single audits. We understand that OMB has recently approved the first pilot project under this authority. This first pilot, which was proposed by and will be carried out by the State of Washington, provides for auditing the state education agency and all school districts in the state as one combined entity, rather than having about 200 separate single audits. The Washington State Auditor’s office has submitted a statement for the record that describes in more detail the pilot project. Our preliminary view is that the pilot has the potential both to streamline the audit process and to provide a single report that is more useful to users than the approximately 200 reports it will replace. We fully support testing options for streamlining and increasing the effectiveness of single audits and will monitor this and any other pilot projects that are approved in the future. We are committed to overseeing the successful implementation of the 1996 amendments, working closely with all stakeholders in the single audit process and periodically providing information to the Congress on the progress being made on all of the refinements. Mr. Chairman, this concludes my statement. I will be glad to answer any questions you or other Members may have at this time. | Pursuant to a congressional request, GAO discussed the status of efforts to implement the Single Audit Act Amendments of 1996, focusing on: (1) the importance of the 1996 amendments; (2) the actions taken to implement them; and (3) ways in which the refinements will continue to evolve and benefit future single audit efforts.
GAO noted that: (1) the concept of the single audit was created to replace multiple grant audits with one audit of an entity as a whole; (2) the objectives of the Single Audit Act, as amended, are to: (a) promote sound financial management, including effective internal controls, with respect to federal awards administered by non-federal entities; (b) establish uniform requirements for audits of federal awards administered by non-federal entities; (c) promote the efficient and effective use of audit resources; (d) reduce burdens on state and local governments, Indian tribes, and nonprofit organizations; and (e) ensure that federal departments and agencies rely upon and use audit work done pursuant to the act; (3) the 1996 amendments were effective for audits of recipients' fiscal years ending June 30, 1997, and after; (4) the refinements cover a range of fundamental areas affecting the single audit process and single audit reporting, including provisions to: (a) extend the law to cover all recipients of federal financial assistance; (b) ensure a more cost-beneficial threshold for requiring single audits; (c) more broadly focus audit work on the programs that present the greatest financial risk to the federal government; (d) provide for timely and summary reporting of audit results; (e) promote better analyses of audit results through establishment of a federal clearinghouse and an automated database; and (f) authorize pilot projects to further streamline the audit process and make it more useful; (5) in June 1997, the Office of Management and Budget (OMB) issued Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations; (6) the Circular establishes policies to guide implementation of the Single Audit Act 1996 amendments and provides an administrative foundation for uniform audit requirements for nonfederal entities that administer federal awards; (7) OMB also issued a revised OMB Circular A-133 Compliance Supplement; (8) the Compliance Supplement identifies for single auditors the key program requirements that federal agencies believe should be tested in a single audit and provides the audit objective and suggested audit procedures for testing those requirements; (9) GAO reported in its 1994 report that the Compliance Supplement had not kept pace with changes to program requirements, and had only been updated once since it was issued in 1985; (10) GAO recommended that the Compliance Supplement be updated at least every 2 years; and (11) OMB is now updating this supplement on a more regular basis. |
Federal banking regulators supervise the activities of banks and require the banks to take corrective action when the banks’ activities and overall performance present supervisory concerns or could result in financial losses to the Deposit Insurance Fund (DIF) or violations of law or regulation. See table 1 for an overview of their functions. Federal banking regulators supervise the condition of most banks through off-site monitoring and on-site examinations. Regulators use off-site systems to monitor the financial condition of an individual bank; groups of banks with common products, portfolio, or risk characteristics; and the banking system as a whole between on-site examinations. The off-site monitoring or surveillance activities rely on self-reported information from banks, filed through quarterly Reports of Condition and Income (Call Reports) to the banking regulators, supplemented with other market-derived data and, in some cases, more detailed transaction-level reporting on certain products or entities. The monitoring and surveillance activities help alert regulators to potentially problematic conditions arising in an individual bank; groups of banks with common products, portfolio, or risk characteristics; and the banking system as a whole. Using these tools, each of the regulators identifies and flags banks with potential signs of financial distress and prepares lists or reports of such institutions (e.g., watch list, review list, high-risk profile list) requiring further follow-up. These tools also help alert regulators to the need for other actions, such as a horizontal review of a group of banks or broader policy guidance. To oversee large, complex banks, including bank holding companies, federal banking regulators conduct on-site supervision by stationing examiners at specific institutions. This practice allows examiners to continuously analyze information provided by the financial institution, such as board meeting minutes, institution risk reports, or management information system reports, and enables holding company supervisors’ reports to be provided to other regulators, among other things. This type of supervision allows for timely adjustments to the supervisory strategy of the examiners as conditions change within the institution. Bank examiners do not conduct an annual point-in-time examination of the institution. Rather, they conduct ongoing examination activities that target specific functional areas or business lines at the institutions based on their examination strategy, the institution’s risk profile, and the extent of supervisory concern during the supervisory cycle. Such activities are discussed with bank management throughout the year and incorporated into the final full-scope examination report issued at the end of the supervisory cycle. With respect to other individual banks, examiners use Call Report data to remotely assess the financial condition of banks and thrifts and plan the scope of on-site examinations. As part of on-site examinations, regulators also closely assess banks’ exposure to risk and assign ratings under the CAMELS rating system. The ratings reflect a bank’s condition in six areas: capital, asset quality, management, earnings, liquidity, and sensitivity to market risk. Evaluations of CAMELS components consider the institution’s size and sophistication, the nature and complexity of its activities, and its risk profile. Each component is rated on a scale of 1 to 5, with 1 being the best and 5 the worst.
The component ratings are then used to develop a composite rating, also ranging from 1 to 5. Banks with composite ratings of 1 or 2 are considered to be in satisfactory condition, while banks with composite ratings of 3, 4, or 5 exhibit varying levels of safety and soundness concerns. Banks with composite ratings of 4 or 5 are included on FDIC’s problem bank list, which designates banks with weaknesses that threaten their continued financial viability. The regulators supplement the CAMELS rating system with other risk assessment methodologies and frameworks. For example, OCC uses a Risk Assessment System that characterizes the level of risk, quality of risk management, and the aggregate and direction of risk across eight risk categories. Also as part of the examination and general supervision process, regulators may direct a bank to address issues or deficiencies within specified time frames. When regulators determine that a bank’s or thrift’s condition is less than satisfactory, they may take a variety of supervisory actions, including informal and formal enforcement actions, to address identified deficiencies. Regulators have some discretion in deciding which actions to take, but typically take progressively stricter actions against more serious weaknesses. Informal actions generally are used to address less severe deficiencies or when the regulator has confidence the bank can and will make changes. Informal actions include supervisory letters detailing specific remedial measures for the bank to implement, safety and soundness plans, resolutions adopted by the bank’s board of directors at the request of its regulator, individual minimum capital ratio letters, and memorandums of understanding or agreements between the regulator and the bank’s board of directors. Informal actions are not public agreements (regulators do not make them public through their websites or other channels) and are not enforceable by sanctions. The regulators use formal actions to address more severe deficiencies. Formal enforcement actions include prompt corrective action (PCA) directives, safety and soundness orders, cease and desist orders, removal and prohibition orders, civil money penalties, formal agreements, and termination of a bank’s deposit insurance. Regulators publicly disclose formal enforcement actions.
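The rating-to-status mapping and the escalation from informal to formal actions described above can be summarized in a short, illustrative Python sketch. The mapping of composite ratings to likely actions is a rough simplification of regulator discretion, and the composite rating itself is assigned by examiner judgment rather than computed mechanically from the six component ratings.

# Rough, illustrative mapping based only on the categories stated in the
# text. In practice the composite rating reflects examiner judgment and
# regulators have discretion in choosing among actions.

# The six CAMELS component areas, each rated 1 (best) to 5 (worst):
CAMELS_COMPONENTS = ("capital", "asset_quality", "management",
                     "earnings", "liquidity", "market_sensitivity")

def supervisory_status(composite: int) -> str:
    """Map a composite rating to the status categories described in the text."""
    if composite not in range(1, 6):
        raise ValueError("composite rating must be 1 through 5")
    if composite <= 2:
        return "satisfactory condition"
    if composite == 3:
        return "safety and soundness concerns"
    return "problem bank list (continued viability threatened)"

def likely_action(composite: int) -> str:
    """Hypothetical escalation: informal for less severe, formal for severe."""
    if composite <= 2:
        return "routine supervision"
    if composite == 3:
        return "informal action (e.g., board resolution, memorandum of understanding)"
    return "formal enforcement action (e.g., cease and desist order, PCA directive)"

for rating in range(1, 6):
    print(rating, "->", supervisory_status(rating), "/", likely_action(rating))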
At the same time, Congress deregulated thrifts with measures that included phasing out deposit interest-rate ceilings, broadening the lending and investment powers of thrifts, and more than doubling the limit of federal deposit insurance per thrift account holder. A 1997 FDIC study reviewing the 1980s crises in the thrift and banking industries found that as a result of these regulatory and legislative actions, the thrift industry grew rapidly, funded by an influx of deposits—often higher-risk brokered deposits. Loan portfolios at thrifts shifted from home mortgage financing into commercial real estate (CRE) loans—particularly into higher-risk acquisition, development, and construction (ADC) loans in areas of the country experiencing a real estate boom. The profitability of many of these activities depended heavily on continued inflation in real estate values. Tax legislation passed in 1981 further stimulated demand for CRE loans by increasing the rate of return. Our June 1989 report on failed thrifts found that in many cases, diversification was accompanied by inadequate internal controls and noncompliance with laws and regulations; thus, the risk of these activities was further increased. Consequently, many institutions experienced substantial losses on loans and investments, a condition that was made worse by an economic downturn in the late 1980s and by the repeal of the CRE tax incentives in 1986. The competitive environment for the banking industry became increasingly demanding in the 1980s. As with thrifts, the development of money market funds and the deregulation of deposit interest rates, which removed the cap on the maximum amount of interest banks and thrifts were allowed to pay on deposits, spurred competition to attract depositors with higher interest rates. This competition further squeezed what banks could earn net of what they had to pay to acquire deposits. Further, competition increased in the banking industry not only from within, but also from thrifts, foreign banks, and credit markets such as the commercial paper and bond markets. The 1997 FDIC study noted that a series of regional and sectoral recessions had a severe impact on local banks and led to many bank failures, especially in areas where the downturns had been preceded by rapid regional expansions; that is, boom-and-bust patterns of economic activity. The magnitude of the banks’ losses was compounded because many banks active in these areas assumed excessive risks, with the result that they failed in disproportionate numbers. For example, many banks greatly increased their exposure to CRE as demand surged during the 1980s, particularly as deregulation, tax incentives, and other factors created an environment in which CRE lending became lucrative. To boost profits, some large banks assumed additional risk by, for example, increasing their off-balance-sheet activities. The 1997 FDIC study identified four major regional and sectoral economic recessions that were associated with widespread bank failures during the 1980-1994 period. The first recession was related to a downturn in farmland prices in the early and middle 1980s and led to a number of failures of banks with heavy concentrations of agricultural loans, particularly in the Midwest. The second recession occurred in Texas and other oil-producing southwestern states after oil prices began dropping in 1981.
While initial bank failures in this region were primarily due to problems with energy-related loans, substantial losses on CRE and residential real estate loans were responsible for the rising number of bank failures in this region in the second half of the decade. The third and fourth recessions occurred in the northeastern United States and in California at the end of the 1980s, largely due to a sharp decline in real estate prices that resulted from an oversupply of CRE and residential real estate in these areas and led to defaulted real estate loans and bank failures.

In January 2010 testimony, the former FDIC Chairman commented that a number of the products and practices that led to the 2007-2009 financial crisis had their roots in mortgage market innovations that began in the 1980s. She noted that following the large interest rate losses from residential mortgage investments that precipitated the thrift crisis of the 1980s, banks and thrifts began selling a major share of their mortgage loans for securitization. The housing government-sponsored enterprises (GSEs) create a market for investors to purchase securities backed by loans originated by banks and thrifts. Through the 1990s, the GSEs increased in size as they purchased and retained the mortgage-backed securities (MBS) they issued.

As interest rates declined in the early 2000s, mortgage originations surged, driven primarily by the refinancing of existing mortgages as borrowers sought to lower the interest rates on their home loans and as home price appreciation in the United States began accelerating rapidly in 2000. This wave of refinancing activity was originally dominated by prime, fixed-rate loans. However, declining affordability in high-priced housing markets as well as increased competition by mortgage originators for loan volume contributed to a shift toward nontraditional mortgage products, which allowed borrowers to defer repayment of principal or part of the interest for the first few years of the mortgage. The subprime market also grew. Many borrowers eventually faced large payment increases and had difficulty making payments. Many providers of these products—mortgage brokers, mortgage bankers, and mortgage affiliates of bank, thrift, and other financial holding companies—operated outside the traditional thrift and bank regulatory system. We reported in 2006 on the risks of nontraditional mortgage products to borrowers and lenders, the extent to which mortgage disclosures discussed the risks to borrowers, and federal and selected state regulatory responses to nontraditional mortgage product risks. See GAO, Alternative Mortgage Products: Impact on Defaults Remains Unclear, but Disclosure of Risks to Borrowers Could Be Improved, GAO-06-1021 (Washington, D.C.: Sept. 19, 2006).

Unlike securities issued by the GSEs, private-label MBS could be backed by mortgages that did not conform to the GSEs’ underwriting or quality standards. Private-label MBS backed by lower-quality mortgage pools left investors exposed to greater risk of default. The market share of private-label MBS, which typically pool jumbo and nonprime mortgages, grew rapidly from 2004 to 2006. During this time, the market share of the GSEs, which pool eligible prime mortgages, decreased. Other investment structures such as collateralized debt obligations (CDO) were also instrumental in creating demand for these riskier, lower-quality loans. In a basic CDO, a group of loans or debt securities is pooled, and securities are then issued in different tranches that vary in risk and return depending on how the underlying cash flows produced by the pooled assets are allocated. If some of the underlying assets defaulted, the more junior tranches—and thus riskier ones—would absorb these losses before the more senior, less risky tranches.
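To make the loss-allocation mechanics concrete, the sketch below models a simplified three-tranche waterfall. The tranche names, sizes, and loss amount are hypothetical and chosen only for illustration; actual CDO structures were far more complex.

```python
def allocate_losses(tranches, loss):
    """Allocate pool losses to tranches from most junior to most senior.

    tranches: list of (name, principal) ordered senior -> junior.
    Returns the remaining principal per tranche after losses.
    """
    remaining = []
    # Losses hit the most junior tranche first, so walk the list in reverse.
    for name, principal in reversed(tranches):
        absorbed = min(principal, loss)
        loss -= absorbed
        remaining.append((name, principal - absorbed))
    return list(reversed(remaining))

# Hypothetical $100 million pool: senior, mezzanine, and equity tranches.
tranches = [("senior", 80.0), ("mezzanine", 15.0), ("equity", 5.0)]

# A $12 million loss wipes out the equity tranche and impairs the
# mezzanine tranche, while the senior tranche is untouched.
print(allocate_losses(tranches, 12.0))
# [('senior', 80.0), ('mezzanine', 8.0), ('equity', 0.0)]
```

Because the most junior tranche absorbs losses first, senior investors bore losses only after the junior tranches were exhausted, which is why senior tranches could carry high credit ratings even when the underlying collateral was of lower quality.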
Purchasers of these CDO securities included insurance companies, mutual funds, commercial and investment banks, and pension funds. Many of these CDOs largely consisted of mortgage-backed securities, including subprime mortgage-backed securities. The growth of the mortgage-linked derivatives market further allowed investors to take on exposure to the subprime and Alt-A markets without actually owning the mortgages, the MBS or CDOs, or interests in the entities that owned the mortgages. Through the use of such credit derivatives, investor exposure to losses in these markets was multiplied and became many times larger than the exposures generated by the mortgages alone.

The dramatic decline in the U.S. housing market that began in 2006 precipitated a decline in 2007 in the price of mortgage-related assets, particularly mortgage assets based on nonprime loans. Some financial institutions were so exposed that they were threatened with failure, and some failed because they were unable to raise capital or sell assets to generate liquidity as the value of their portfolios declined. Other institutions, ranging from the GSEs to large securities firms, were left holding “toxic” mortgages or mortgage-related assets that became increasingly difficult to value, were illiquid, and potentially had little worth. Moreover, investors not only stopped buying private-label MBS but also became reluctant to buy securities backed by other types of assets. Because of uncertainty about the liquidity and solvency of financial entities, particularly among large, financially interconnected firms, the prices banks charged each other for borrowing funds rose dramatically, and interbank lending conditions deteriorated sharply. The resulting liquidity and credit shortage made the financing on which businesses and individuals depend increasingly difficult to obtain. By the late summer of 2008, the ramifications of the financial crisis ranged from the continued failure of financial institutions to increased losses of individual wealth, reduced corporate investments, and further tightening of credit that would exacerbate the emerging global economic slowdown.

Bank failures associated with the financial crisis were concentrated in areas where the housing markets experienced strong growth. In response to the demand for housing stock in the years prior to the crisis, residential development activity increased. Many banks exhibited rapid growth in their ADC portfolios, resulting in significant concentrations in ADC and CRE loans. Strong competition for higher yielding assets contributed to a decline in underwriting standards. Our prior work found that losses on higher-risk residential mortgages drove the failure of large banks (those with more than $10 billion in assets) in these areas. Failures of the small and medium banks (those with less than $1 billion in assets, and between $1 billion and $10 billion in assets, respectively) in these areas were largely driven by losses on CRE and ADC loans.

Early intervention is a key lesson learned for successfully resolving the problems of troubled institutions. In the 1980s thrift and banking crises and the 2007-2009 financial crisis, regulators could have provided earlier and more forceful supervisory attention to troubled institutions.
In addition, the crises revealed limitations in regulatory tools for identifying and addressing emerging risks. The 2007-2009 financial crisis also highlighted the need for federal banking regulators to consider the impact of emerging risks in the broader financial system on individual banks.

Although the relative causes, scope, and duration of the 1980s thrift and commercial bank crises and the 2007-2009 financial crisis were distinct, our past reviews of the banks that failed during these crises found similar contributing factors, particularly weak management practices at banks engaged in higher-risk activities. Although regulators often identified these risky practices early on in each crisis, the regulatory process was not always effective in correcting the underlying problems before the banks became undercapitalized and failed. For example, in our June 1989 report, examiners for 26 failed thrifts cited management weaknesses as a leading factor in the failures. In virtually all of these cases, the thrifts shifted their focus from traditional home mortgage lending to higher-risk activities. Moreover, management at these thrifts often pursued business decisions and strategies that increased their risks, such as a heavy reliance on brokered deposits to fund rapid growth, poor underwriting and credit administration practices, and concentrations in ADC lending. These management problems consequently made the thrifts more vulnerable to poor regional economic conditions. We found that thrift management was often unresponsive to supervisory concerns the examiners raised in these cases and that thrift management did not always act on problems examiners identified or implement promised corrective actions. In our April 1989 testimony, we analyzed the supervisory history of an additional 47 thrifts that were near failing. For more than half, no formal enforcement actions were taken, and many had no history of formal actions. Where enforcement actions were taken, they were often not effective in correcting problems. Further, the time that elapsed between identification of a need for formal action and implementation of the action was often unduly long.

Similarly, in our April 1991 report, examiners of 72 troubled banks identified similar management weaknesses as the most common reason for asset and earnings problems, including heavy concentrations in specific types of assets, industries, or local economies, and excessive growth combined with poor lending practices or controls. The most frequently cited asset problems involved problem real estate loans, and the most frequently cited reasons for the asset problems involved lax underwriting practices. Losses on these problem assets resulted in earnings problems and eventually capital problems for the banks. In about half of the 72 failed banks, we concluded the banking regulators should have been more aggressive and used stronger measures than they did (e.g., some formal enforcement action instead of only an informal enforcement action). This was particularly the case when the underlying causes of problems were known but remained uncorrected or the bank had a history of noncompliance with enforcement actions or of violating banking regulations. We also found that better outcomes were associated with the most forceful actions taken, and worse outcomes were associated with not taking the most forceful action available. The 1991 report was based on a random sample of 72 banks that, as of January 1, 1988, regulators had identified as having difficulty meeting minimum capital standards.
See GAO, Bank Supervision: Prompt and Forceful Regulatory Actions Needed, GAO/GGD-91-69 (Washington, D.C.: Apr. 15, 1991). We found that when choosing among enforcement actions of varying severity, regulators preferred to work with bank management to resolve problems during the 1980s thrift and commercial bank crises rather than take enforcement actions. For example, we identified 37 cases from our sample of 72 banks where regulators decided not to use available enforcement actions. In 26 cases, the unsafe and unsound practices that caused the capital depletion remained uncorrected.

FDIC’s 1997 study noted that the ability of regulators to curb excessive risk taking on the part of healthy banks was limited by the problem of identifying risky activities before they produced serious losses. The study found that bank regulators were reasonably successful in curbing risk taking on the part of officially designated problem banks. However, in dealing with ostensibly healthy banks, regulators had difficulty restricting risky behavior while the banks were still solvent and the risky behavior was widely practiced and profitable. The study found it was challenging for regulators to distinguish such behavior from acceptable risk/return trade-offs, innovation, and other appropriate activity, or to modify the behavior of banks while they were still apparently healthy.

We concluded in our 1991 report outlining our strategy for reforming the deposit insurance system in the wake of the thrift and commercial bank crises that meaningful reform would not succeed without an enforcement process that was less discretionary than the approach used at the time. See GAO, Deposit Insurance: A Strategy for Reform, GAO/GGD-91-26 (Washington, D.C.: Apr. 15, 1991). In that report, we proposed a series of regulatory tripwires. We proposed that the first tripwire address unsafe activities that indicate management inadequacies that could lead to further financial problems; that is, unsafe practices in seemingly healthy institutions. We proposed that a second tripwire address poor asset quality and earnings, as our prior work showed that serious asset deterioration and earnings problems are leading indicators of bank financial problems. Our third and fourth tripwires addressed capital deterioration.

Subsequently, Congress established the PCA framework in 1991. The framework is set forth in sections 38 and 39 of the Federal Deposit Insurance Act, as amended by FDICIA. Section 38 requires regulators to classify banks into one of five capital categories and take increasingly severe actions as a bank’s capital deteriorates. Section 39 requires the banking regulators to prescribe safety and soundness standards related to noncapital criteria, including operations and management; compensation; and asset quality, earnings, and stock valuation. Section 39 was intended to allow regulators to take action against seemingly healthy banks that were engaging in risky practices before losses occurred. Initially, the standards for asset quality and earnings were to be quantitative and intended to increase the likelihood that regulators would address safety and soundness problems before capital deteriorated. However, later legislative changes gave regulators considerable flexibility to implement these standards, and regulators determined instead to issue guidance in 1995 setting out broad standards addressing these areas.
Section 39 allows the regulators to take action regarding non-problem institutions in which inadequate practices and policies could result in a material loss to the institution or in cases where management has not responded effectively to prior criticisms.

Despite this new regulatory framework, regulators continued to face challenges in restricting risky bank behavior in the years leading up to the 2007-2009 financial crisis. As we will discuss later, PCA was not effective in resolving underlying problems at failed banks and preventing widespread losses to the deposit insurance fund during the financial crisis.

Our more recent work and that of the federal banking regulator IGs found that many of the banks that failed during the financial crisis were susceptible to the same risks that gave rise to the bank failures of the 1980s and 1990s. For example, in our January 2013 report, we found management weaknesses also contributed to many failures, including poor underwriting and credit administration practices, rapid growth funded by brokered deposits, and high concentrations—in particular, high CRE and ADC concentrations for small and medium-sized banks and high concentrations of higher-risk residential mortgage products at large banks. With the downturn in the housing market and the onset of the financial crisis, asset problems manifested. The rising level of nonperforming loans, particularly ADC loans, was a key factor driving a decline in capital for many failed banks. As another example, an April 2013 report by the Treasury IG noted that many of the OCC-supervised banks that failed from 2008 to 2012 evidenced weaknesses with bank boards of directors or management and high concentrations in CRE loans. The Federal Reserve IG also found similar factors in its review of failed banks supervised by the Federal Reserve, in particular, that many bank failures involved the board and management making strategic decisions to pursue aggressive growth that increased the bank’s risk profile and ultimately contributed to the failure. And in its 2010 report, the FDIC IG found that risky bank behaviors associated with bank failures included pursuit of aggressive growth in CRE and ADC loans, excessive levels of asset concentration with little risk mitigation, and inadequate loan underwriting.

We and the federal banking regulator IGs also found that regulators had identified underlying risks of banks that failed during the 2007-2009 financial crisis well before their failure but did not always take timely supervisory action. For example, of the 136 failed banks we reviewed for our 2011 PCA report, we found that most had received an informal or formal enforcement action before undergoing the PCA process, although the timeliness of enforcement actions was inconsistent. Specifically, among 60 banks that failed between January 2008 and June 2009, approximately 28 percent did not have an initial informal or formal non-PCA enforcement action until 90 days or less before bank failure. Further, 50 percent of these failed banks did not have an enforcement action until 180 days or less prior to failure. After June 2009, these percentages improved, with approximately 8 percent not having an enforcement action until 90 days or less before failure, and approximately 22 percent not having an action until 180 days or less before failure.
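Timeliness figures of this kind reflect a straightforward calculation over supervisory records: for each failed bank, compute the gap between its first enforcement action and its failure date, then bucket the results. The sketch below shows the computation using invented records; the bank names and dates are hypothetical.

```python
from datetime import date

# Hypothetical records: (bank, date of first enforcement action, failure date).
records = [
    ("Bank A", date(2008, 11, 1), date(2009, 1, 15)),
    ("Bank B", date(2008, 2, 10), date(2009, 3, 30)),
    ("Bank C", date(2009, 1, 5), date(2009, 2, 20)),
]

def share_within(records, days):
    """Share of banks whose first enforcement action came only within
    `days` days (or less) before failure."""
    hits = sum(1 for _, action, failure in records
               if (failure - action).days <= days)
    return hits / len(records)

print(f"First action within 90 days of failure: {share_within(records, 90):.0%}")
print(f"First action within 180 days of failure: {share_within(records, 180):.0%}")
```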
Similarly, a September 2011 report by the Federal Reserve IG analyzing the failure of 20 state member banks noted that examiners identified key safety and soundness risks but did not take sufficient supervisory action in a timely manner to compel the boards and management to mitigate those risks. In many instances, the IG found the examiners eventually concluded that a supervisory action was necessary, but that conclusion came too late to reverse the bank’s deteriorating condition. Further, a December 2010 report by the FDIC IG found that in many cases, examiners identified significant risks but did not take timely and effective action to address those risks until the bank had started to experience significant financial deterioration in the loan or investment portfolios.

Staff from one regulator told us that when they have a bank failure, they always look back at that failure and assess what they could have done differently in terms of supervision. They found that generally, examiners had identified the underlying issues that eventually led to the failures but did not press management hard enough to deal with those issues. These staff explained that it can be difficult for examiners to make the case to bank management that they need to ratchet down a profitable line of business because at the time the examiners see risk building up, the bank’s performance may not yet have been affected. These staff also said that if the agency decides to take an enforcement action when the bank is still in good financial shape, and the bank refuses to sign it, a lengthy and resource-intensive legal process could ensue. Staff from another regulator acknowledged that examiners had often uncovered problems at the banks long before they failed, yet bank management did not take action to address their recommendations. These staff noted that part of the role of the examiner is to be skeptical, and it is difficult to be skeptical when loans are paying as agreed. These staff recognized that in the past they have not always been effective in getting bank management to take action to address potential problems before their effect hits the balance sheet.

Banking regulators also received considerable feedback in response to proposed actions to address emerging risks, which resulted in delays. The regulators issued draft guidance in January 2006 on CRE concentrations and risk management, based partly on the trends they observed in CRE concentrations and risks, but the guidance was not finalized until December 2006. Staff from one regulator told us the guidance was issued too late to allow for corrective actions to be taken across the banking system before the crisis ensued. The draft guidance elicited about 4,400 comment letters from bankers, industry trade groups, state financial regulatory agencies, appraisers, and real estate industry representatives. The vast majority of the commenters expressed strong resistance to the proposed guidance, and the staff told us that working through the comment process resulted in delays to final issuance. In its September 2011 report summarizing state member bank failures, the Federal Reserve IG reported that examiners they spoke with perceived the guidance to be “too little, too late” and that examiners mentioned that many institutions did not quickly adopt the risk management practices outlined in the guidance prior to the onset of the financial crisis.
As discussed earlier, in the aftermath of the thrift and commercial bank crises, regulators were criticized for failing to take timely and forceful action to address the causes of thrift and bank failures and prevent losses to taxpayers and the deposit insurance fund. The PCA framework was intended to improve regulators’ ability to identify and promptly address deficiencies at banks by, in part, limiting their discretion and mandating them to take corrective actions under certain circumstances. Staff from one regulator told us that PCA likely prompts bank management to address problems earlier than would be the case without PCA and that failure costs are likely lower with PCA than without it. However, the PCA framework did not prevent widespread losses to the deposit insurance fund—a key goal of PCA. In June 2011, we reported on the effectiveness of the PCA framework for addressing financial deterioration of banks during the financial crisis and concluded that PCA’s reliance on capital triggers limited its ability to promptly address bank problems.

Before 2007, PCA was largely untested by a financial crisis that resulted in a large number of bank failures. After the passage of FDICIA, sustained growth in the U.S. economy meant that the financial condition of banks was generally strong. For instance, as a result of positive economic conditions, the number of bank failures declined from 180 in 1992 to 4 in 2004. Furthermore, from June 2004 through January 2007, no banks failed. In addition, the federal banking regulator IGs found that regulators, with the exception of OCC, made limited use of their section 39 authorities, consistent with our prior findings.

As part of our June 2011 work, we tested financial indicators other than capital and found that there were important predictors of future bank failure that could be used in developing non-capital triggers for PCA. For example, indicators of earnings, liquidity, asset quality, and sector loan concentration contain information about the condition of the bank that can provide warning of bank distress up to 1 to 2 years in advance. To improve the effectiveness of the PCA framework, we recommended, among other things, that the banking regulators consider additional, non-capital triggers that would require early and forceful regulatory actions tied to specific unsafe banking practices. In written comments, FDIC, the Federal Reserve, and OCC agreed with our recommendation to consider options to make PCA more effective.

As of June 2015, federal banking regulators were still considering the pros and cons of modifying the PCA framework, such as the use of additional non-capital triggers. For instance, FDIC staff noted that non-capital triggers could strengthen the supervisory process and help banks avoid mistakes leading to crisis, address GAO and FDIC IG recommendations, and involve low implementation costs since the infrastructure is already in place. However, FDIC staff said that additional hard-wired PCA triggers, which would likely require interagency rulemaking, could encourage banks to operate just below a given threshold to avoid scrutiny, and banks tripping PCA non-capital triggers could be perceived in the capital markets as being on a path toward regulatory intervention. In addition, FDIC staff noted that while additional tripwires would result in greater stringency of supervision, there could also be unintended consequences resulting in constraints on well-managed banks performing their financial intermediation function.
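A non-capital trigger of the kind we recommended can be thought of as a simple screen over periodic financial data. The sketch below is a minimal illustration of that idea; the indicator names and thresholds are hypothetical and are not the thresholds we tested or that regulators use.

```python
# Hypothetical early-warning screen over non-capital indicators.
# All thresholds are illustrative only.
THRESHOLDS = {
    "return_on_assets": ("<", 0.0),            # earnings: persistent losses
    "liquid_assets_ratio": ("<", 0.05),        # liquidity: thin liquid buffer
    "nonperforming_loans_ratio": (">", 0.05),  # asset quality
    "cre_concentration": (">", 3.0),           # CRE loans / total capital
}

def tripped_triggers(metrics):
    """Return the indicators that breach their illustrative thresholds."""
    tripped = []
    for name, (op, limit) in THRESHOLDS.items():
        value = metrics[name]
        if (op == "<" and value < limit) or (op == ">" and value > limit):
            tripped.append(name)
    return tripped

# A hypothetical bank: profitable but illiquid and heavily CRE-concentrated.
bank = {"return_on_assets": 0.01, "liquid_assets_ratio": 0.03,
        "nonperforming_loans_ratio": 0.06, "cre_concentration": 4.2}
print(tripped_triggers(bank))
# ['liquid_assets_ratio', 'nonperforming_loans_ratio', 'cre_concentration']
```

A screen of this kind could flag a still-profitable bank for early supervisory attention, which is the point of tying triggers to unsafe practices rather than to capital alone.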
CAMELS ratings have not always reflected long-term risk factors, particularly with respect to poor management practices. The CAMELS rating system contains explicit language in each of the components emphasizing the importance of management’s ability to identify, measure, monitor, and control risks. For example, a poor management component rating (M) may indicate that the bank suffers from weak internal controls or management information systems or other deficiencies that could threaten the safe, sound, and efficient operation of the bank. Thus, deterioration of the management component may yield forward-looking information about risk. However, in prior crises, regulators did not always assign management component ratings that were reflective of weaknesses in management, and staff from one regulator said that there was a tendency to use the rating more as a point-in-time snapshot of a bank’s condition, rather than a reflection of long-term risk factors that may cause losses several years later. In its 1997 study, FDIC analyzed the management component ratings for the 1,564 banks that failed between 1980 and 1994 (excluding banks that received FDIC assistance) during the commercial bank crisis. The results showed that 2 years before failure, in only 6 percent of the cases was the management rating one full number worse than the average of the other four components. The FDIC IG noted in a 2010 report that examiners did not always place sufficient emphasis on risk mitigation when assigning ratings to banks that later failed. The IG noted that bank management’s lack of responsiveness to examiners’ concerns was not always reflected in assigned CAMELS ratings until significant financial deterioration occurred. In its 2011 report, the Federal Reserve IG said its work highlighted the need for supervisors to ensure that CAMELS composite and component ratings are consistent with narrative examination comments to clearly convey the need for urgent action when appropriate.

Staff from one regulator told us that although the management component of the CAMELS is stand-alone, in some instances, examiners found it difficult to rate management low (i.e., 4 or 5) if capital and earnings were strong, even if they had noted concerns with management practices. As a result, in some cases, composite CAMELS ratings remained relatively high (i.e., 1 or 2) until capital and earnings began to decline. Because capital and earnings tend to be lagging indicators, such ratings decreases in some cases did not occur until shortly before the bank failed. In our 2011 PCA report, we found that most banks that failed degraded from a CAMELS composite rating of 2 to a 4 in one quarter, though they generally had at least one component rating of 3 prior to failure.

The financial crisis also highlighted the need for regulators to consider the impact of risks in the broader financial system on individual banks. Before the 2007-2009 financial crisis, banking supervision was microprudential; that is, generally focused on the activities of individual institutions or groups of institutions. Staff from two federal banking regulators underscored that financial stability requires looking beyond the safety and soundness of individual banks to the financial system as a whole, with a macroprudential approach that focuses on assessing systemic risks.
Staff from one regulator said that the problem with focusing solely on the activities of individual banks could be seen in the “originate to distribute” model that banks used to originate mortgages in the years leading to the financial crisis. That is, banks were originating mortgages with the intent to sell them in the secondary market as mortgage-backed securities, rather than keep them in portfolio as held-for-investment. These staff also said that although the underwriting risk of these mortgages was significant, they believed that there was little risk to the bank’s capital because the bank was earning fees but was not retaining the credit risk of the mortgages. These staff said they incorrectly assumed that investors were paying attention to the underwriting risk embedded in the securitized mortgages, because investors were buying the securities and not putting pressure on the banks to increase their underwriting standards for the underlying mortgages. When the real estate bubble burst and homeowners began to default on their mortgages, these investors suffered heavy losses. As a result, staff said they learned that financial stability oversight requires a different perspective and a different, more global approach that considers, among other things, the interconnectedness of financial institutions and their activities. In retrospect, staff noted that stronger bank capital standards—notably those relating to the quality of capital and the amount of capital required for banks’ trading book assets—and more attention to the liquidity risks faced by the largest, most interconnected firms would have made the financial system as a whole more resilient.

Although the activities of large, interconnected financial institutions often crossed traditional sector boundaries, banking regulators did not always have sufficient tools and capabilities to adequately oversee the risks that these financial institutions posed to themselves and other institutions. In June 2008 testimony, a former Federal Reserve Vice Chairman noted that under the current U.S. regulatory structure, challenges can arise in assessing risk profiles of large, complex financial institutions operating across financial sectors, particularly given the growth in the use of sophisticated financial products that can generate risks across various legal entities. He also said that the financial crisis highlighted the importance of enterprise-wide risk management, particularly that supervisors need to understand risks across a consolidated entity and assess the risk management tools being applied across the financial institutions. For example, the former Federal Reserve Chairman said that stress tests of the 19 largest bank holding companies, conducted by federal banking regulators in 2009 as part of the Supervisory Capital Assessment Program, demonstrated that many of these institutions’ information systems could not provide timely, accurate information about bank exposures to counterparties or complete information about the aggregate risks posed by different positions and portfolios. Staff from another regulator said that fragmented databases and otherwise insufficient processes at large banks to identify similar risks within and across various lines of businesses and legal entities, both on and off the balance sheet, resulted in the failure to identify and therefore measure, monitor, and control exposure to concentrations.
Further, accounting rules in effect at the time permitted special-purpose entities—legal entities often used by banks to facilitate the securitization of real estate loans—to remain off the banks’ balance sheets, thus limiting regulators’ ability to fully understand the extent of the banks’ business activities and risk exposures.

Our own work had raised concerns over the adequacy of supervision of large financial conglomerates. For example, one of the large entities that OTS oversaw was the insurance conglomerate American International Group, Inc. (AIG), which was subject to a government takeover necessitated by financial difficulties the firm experienced as the result of over-the-counter (OTC) derivatives activities related to mortgages. In a March 2007 report, we expressed concerns over the appropriateness of having OTS oversee diverse global financial institutions given the size of the agency relative to the institutions for which it was responsible.

Staff from one regulator said that another lesson learned was that an enormous amount of systemic risk had been concentrated in the shadow banking system, including several nonbank financial firms such as large investment firms, before the onset of the 2007-2009 financial crisis. However, the regulators did not perceive the buildup of risk and leverage across the financial system because of a gap in the regulation of the shadow banking system. Staff said that the increase in system-wide leverage during the years leading up to the financial crisis distinguished the impact that real estate problems of the 1980s had on thrifts and commercial banks from the impact that real estate problems of the 2000s had on the banking sector and larger financial system. That is, losses from real estate-related loans, while primary factors in the failures of banks and thrifts during the 1980s, did not have a systemic impact on the larger financial system because these institutions had originated the loans and retained the associated credit risk. In contrast, staff said that losses from real estate-related loans during the 2007-2009 financial crisis had a systemic impact because the risks associated with these loans were spread and amplified throughout the financial system.

Contributing to the buildup of risk and leverage across the financial system was the fact that shadow banking activities were, for the most part, not subject to consistent and effective regulatory oversight. The former Federal Reserve Chairman noted that much shadow banking, including various special-purpose entities and many nonbank mortgage-origination companies, lacked meaningful prudential regulation. In our January 2009 report, we noted that the role of nonbank lenders in the recent financial collapse provided an example of a gap in our financial regulatory system resulting from the activities of institutions that were generally subject to little or no direct oversight by federal regulators. The significant participation by these nonbank lenders in the subprime mortgage market—which targeted products with riskier features to borrowers with limited or poor credit history—contributed to a dramatic loosening of underwriting standards leading up to the crisis.
Staff from one regulator noted that, at the large investment firms, broker-dealers arranged for investors to fund these long-term mortgage assets with short-term financial instruments, typically with original maturities of less than nine months, which allowed leverage in the whole financial system to build to unprecedented levels and distribute risk throughout the system. However, some of the top investment banks were subject to voluntary and limited oversight at the holding company level—the level of the institution that generally managed its overall risks. These holding companies faced serious losses and funding problems during the crisis, and their instability severely damaged the financial system. The financial crisis demonstrated that the failure of large interconnected financial institutions, such as the failure of Lehman Brothers Holdings, Inc. in the fall of 2008, could trigger systemic events through a rise in the price of risk (that is, the risk-adjusted return on investments) and deleveraging in the broader financial system. The Securities and Exchange Commission terminated its program for overseeing these large broker-dealer holding companies in September 2008 but continues to oversee these firms’ registered broker-dealer subsidiaries.

Federal banking regulators have taken steps to incorporate the lessons learned from the 2007-2009 financial crisis and improve their ability to identify and respond to emerging risks. First, regulators told us that they recognize bank supervision needs to be less historically focused and more forward-looking. As such, they have been working to include more forward-looking elements into examinations, such as bank-performed stress test processes and results, and to reflect such forward-looking information in the CAMELS ratings and other risk assessment tools. Second, to improve their ability to respond earlier and more forcefully to banks’ risky behavior, the three regulators have initiated more granular tracking of supervisory issues that surface during examinations, referred to as matters requiring attention (MRA). Third, through their participation in FSOC and their own surveillance activities, they also have been monitoring the financial system more broadly for risks that could affect their regulated institutions. We and others have begun to review some of these regulatory initiatives, but further work is needed to fully evaluate their effectiveness in improving regulators’ ability to identify and respond to emerging risks in a timely manner.

Federal Reserve, FDIC, and OCC staff have been using banks’ stress tests as a way to incorporate forward-looking elements into the examiners’ considerations of risk in individual institutions. Stress testing is a forward-looking, quantitative evaluation of the potential effects of stress scenarios that could affect a banking institution’s financial condition and capital adequacy. These risk assessments are based on assumptions about potential adverse external events, such as changes in real estate or capital markets prices, or unanticipated deterioration in a borrower’s repayment capacity. In supervisory guidance for stress testing practices of large banks issued in May 2012, the regulators noted that the financial crisis underscored the need for banks to incorporate stress testing into their risk-management practices and demonstrated that banking organizations unprepared for particularly adverse events and circumstances can suffer acute threats to their financial condition and viability.
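In broad terms, a capital stress test applies hypothesized loss rates under an adverse scenario to a bank’s exposures and asks whether post-stress capital remains adequate. The sketch below is a deliberately simplified illustration; the portfolio, loss rates, and the 5 percent floor are hypothetical and do not reflect any supervisory scenario.

```python
# Simplified capital stress test: apply scenario loss rates to loan
# segments and compare the post-stress capital ratio to a floor.
# All figures are hypothetical, in $ millions.

portfolio = {"residential": 400.0, "cre": 250.0, "adc": 100.0, "c_and_i": 250.0}
capital = 90.0
total_assets = 1000.0

# Hypothetical cumulative loss rates under an adverse scenario.
adverse_loss_rates = {"residential": 0.04, "cre": 0.08, "adc": 0.15, "c_and_i": 0.05}

losses = sum(balance * adverse_loss_rates[segment]
             for segment, balance in portfolio.items())
post_stress_ratio = (capital - losses) / (total_assets - losses)

print(f"Projected losses: ${losses:.1f}M")
print(f"Post-stress capital ratio: {post_stress_ratio:.1%}")
print("Adequate" if post_stress_ratio >= 0.05 else "Capital shortfall under stress")
```

In this hypothetical case the bank looks sound today but would fall well below the illustrative floor under stress, which is precisely the kind of forward-looking signal the regulators sought from these exercises.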
Section 165(i) of the Dodd-Frank Act requires two types of stress tests of large banks. Section 165(i)(1) requires the Federal Reserve to conduct annual stress tests of bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies supervised by the Federal Reserve, while Section 165(i)(2) requires companies with more than $10 billion in total consolidated assets to conduct annual stress tests themselves, in addition to requiring companies with $50 billion or more in total consolidated assets and nonbank financial companies supervised by the Federal Reserve to conduct their own stress tests semiannually. In October 2012, the Federal Reserve issued final rules for the tests of holding companies with $50 billion or more in total consolidated assets and also required the companies to conduct and disclose annual company-run stress tests. Also in October 2012, FDIC, OCC, and the Federal Reserve issued final rules requiring annual company-run stress tests for bank holding companies with total consolidated assets between $10 billion and $50 billion and for national banks, state member banks, state nonmember banks, state and federal thrifts, and thrift holding companies with $10 billion or more in total consolidated assets. The results of the stress tests provide the regulators with more forward-looking information that they plan to use in bank supervision and to assist them in assessing the company’s risk profile and capital adequacy. In March 2014, FDIC, OCC, and the Federal Reserve issued final guidance describing supervisory expectations for stress tests conducted by financial companies with total consolidated assets between $10 billion and $50 billion.

Banks with less than $10 billion in assets are not required or expected to conduct the types of stress testing specifically articulated in the initiatives, which are directed at larger organizations. However, the three regulators continue to emphasize that all banks, regardless of size, should have the capacity to analyze the potential impact of adverse outcomes on their financial condition. These banks also remain subject to the stress testing guidance contained in prior interagency issuances. OCC issued guidance specifically designed for community banks on how they can effectively use simple stress testing concepts and methods to help identify and quantify risk in loan portfolios and help establish effective strategic and capital planning processes. For example, OCC staff said that their examiners and economists developed stress testing tools for analyzing commercial real estate, agriculture, and other loan portfolios that are also available to community banks. And FDIC published an article illustrating approaches to assist community banks with credit stress testing in an edition of Supervisory Insights.

The former Federal Reserve Chairman said that one of the most important aspects of regular stress testing is that it forces banks and their supervisors to develop the capacity to quickly and accurately assess the enterprise-wide exposures of their institutions to diverse risks, and to use that information routinely to help ensure that they maintain adequate capital and liquidity. He noted that this risk-management capacity is itself critical for protecting individual banks and the banking system. See Ben Bernanke, Chairman of the Board of Governors of the Federal Reserve System, “Stress Testing Banks: What Have We Learned?” (remarks at the “Maintaining Financial Stability: Holding a Tiger by the Tail” conference, Federal Reserve Bank of Atlanta, Stone Mountain, Ga.: Apr. 8, 2013). Federal Reserve staff also noted that the stress test is the best way to communicate to bank management that risks have built up and need attention, because it is data driven.
Without such data, they said it is difficult to make a convincing case to management because bank managers do not want to hear that they should act more cautiously when their banks are profitable.

As a complement to stress testing, federal banking regulators have also emphasized the importance of forward-looking capital planning to assess a bank’s capital needs relative to its current and planned business strategies. For example, OCC issued guidance that discusses OCC’s processes for evaluating a bank’s capital planning and the various actions OCC may take to ensure a bank’s process and capital levels remain adequate for its complexity and overall risks. As another example, the Federal Reserve issued guidance describing its expectations for internal capital planning at the large, complex bank holding companies subject to its capital plan rule.

As part of their efforts to engage in more forward-looking supervision, federal banking regulators have been directing examiners to use the management component of the CAMELS ratings to reflect underlying risks, and have also been focusing on other ways to build more forward-looking, risk-based elements into the CAMELS ratings. FDIC officials said they have been trying to look at underlying risks in a forward-looking fashion rather than relying on absolute earnings, problem assets, and delinquencies. In June 2009, FDIC’s Division of Supervision and Consumer Protection announced the “Forward-Looking Supervision” approach, which was delivered as a training program and reinforced in subsequent guidance. The training emphasized a forward-looking approach to examination analysis and ratings based on the lessons learned that were identified in the material loss reviews for those FDIC-regulated banks that failed during the financial crisis. In an audit that reviewed the training, the FDIC IG noted that it directed examiners to consider bank management practices as well as current and prospective financial performance and conditions or trends when assigning CAMELS ratings. FDIC also dedicated an issue of Supervisory Insights to discussing interest rate risk and issued examiner guidance about addressing risk management deficiencies surrounding interest rate risk at an early stage.

In September 2011, OCC issued a supervisory memorandum to examiners, drawing upon lessons learned from the financial crisis. The guidance was intended to enhance examiners’ use and communication of the Risk Assessment System (RAS). Examiners use RAS to identify, communicate, and effect appropriate responses to the buildup of risks or deficiencies in risk-management systems at OCC-supervised institutions. The memorandum stated that examiners should use RAS in conjunction with CAMELS ratings to identify current and prospective risks and that the RAS assessments should help inform CAMELS ratings. The memorandum noted that the CAMELS management component rating too often reflected banks’ cooperation and commitment to correct deficiencies without demonstrated performance. It stated that the management component rating should focus on actions and results, rather than commitments.
Finally, the memorandum stressed that assigning an adverse rating to the management component based on poor or missing practices, before problems were evident in a bank’s financial condition, was one of the tenets of sound and forward-looking supervision and an important lesson learned from the recent financial crisis. More recently, in response to recommendations included in a 2013 international peer review report, OCC staff said they had formed a working group to determine what additional changes may be needed to enhance the application of CAMELS and its integration with OCC’s RAS to ensure that examiners use the RAS and CAMELS to identify, assess, and document current and emerging risks.

Federal Reserve staff said that they are in the process of updating prior guidance to examiners for evaluating the adequacy of banks’ risk management processes. In 1995, the Federal Reserve issued guidance directing examiners to assign separate supervisory ratings for banks’ risk management practices, including internal controls, and to give this rating significant weight when determining the rating of management under CAMELS. See Board of Governors of the Federal Reserve System, Rating the Adequacy of Risk Management Processes and Internal Controls at State Member Banks and Bank Holding Companies, SR 95-51 (Nov. 14, 1995). Federal Reserve staff said that they tracked CAMELS downgrades both before and during the crisis and believe the guidance was instrumental in helping examiners identify and rate poor management practices. They said they are reviewing the guidance to incorporate lessons learned from the crisis and update it where appropriate.

MRA describe bank practices that deviate from sound governance, internal control, and risk management principles and that have the potential to adversely affect the bank’s condition, including its financial performance or risk profile, if not addressed. MRA also describe bank practices that result in substantive noncompliance with laws and regulations, enforcement actions, supervisory guidance, or conditions imposed in writing. To improve the utility of MRA as a tool for getting banks to address supervisory concerns in a timely manner, the Federal Reserve, OCC, and FDIC have issued updated guidance on policy and procedures related to the use of MRA.

In June 2013, the Federal Reserve updated and clarified existing examiner guidance on communicating supervisory findings to banks. In particular, the guidance addresses requirements for MRA and matters requiring immediate attention (MRIA)—those matters that pose potentially significant safety and soundness concerns, represent significant noncompliance with applicable laws or regulations, or are repeat criticisms that have escalated in importance due to a bank’s insufficient attention or inaction—included in examination or inspection reports or other supervisory communications. The guidance stipulates that MRA and MRIA concerning safety and soundness or consumer compliance must specify a time frame within which the banking organization must complete the corrective action. Examiners are expected to follow up to assess bank progress and verify satisfactory completion. If the follow-up indicates the organization’s corrective action has not been satisfactory, the guidance notes that additional formal or informal investigation or enforcement action might be necessary. Federal Reserve staff said they intended to rigorously track MRA and MRIA and their status across banks.
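The tracking and escalation logic described in this guidance can be pictured as a simple workflow over recorded findings. The sketch below is a hypothetical illustration of such tracking, not any regulator’s actual system; the record fields, sample findings, and escalation rule are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    """A hypothetical supervisory finding record."""
    bank: str
    description: str
    severity: str          # "MRA" or "MRIA"
    due: date              # time frame for corrective action
    resolved: bool = False

def needs_escalation(finding: Finding, today: date) -> bool:
    """Flag unresolved findings past their corrective-action time frame.
    Escalation could mean further investigation or an enforcement action."""
    return not finding.resolved and today > finding.due

findings = [
    Finding("Bank A", "Weak ADC concentration limits", "MRA", date(2015, 3, 31)),
    Finding("Bank B", "Inadequate liquidity risk controls", "MRIA",
            date(2015, 1, 31), resolved=True),
]

for f in findings:
    if needs_escalation(f, date(2015, 6, 30)):
        print(f"Escalate: {f.bank} - {f.description} ({f.severity})")
```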
OCC’s September 2011 examiner guidance stressed that early intervention, such as MRA or formal or informal enforcement action, is essential to resolving problems successfully. In determining the appropriate level and type of intervention, the guidance stated that examiners must consider the ongoing ability to correct problems and demonstrated performance of management and boards, and it cautioned examiners not to mistake management’s or a board’s cooperation and willingness for their ability to remediate problems, reduce risk, and improve the bank’s condition.

In October 2014, OCC updated its policy and procedures on MRA in response to recommendations in the 2013 international peer review report that OCC enhance MRA communication, tracking, and resolution processes. The new MRA guidance emphasizes effective communication and prompt identification and correction of deficient practices (including those that are unsafe and unsound) before they affect the bank’s condition. The guidance requires that examiners track the supervisory concerns identified in the MRA. For example, for concerns that are open, examiners must categorize them as either new, repeat (if the same or substantially similar concern has recurred), self-identified (by the bank), past due (when the corrective action is not implemented in the expected time frame), escalated (when subsequent to the MRA OCC addressed the uncorrected action in an enforcement action), or pending validation (when the bank implemented the corrective action, but insufficient time has passed for it to demonstrate sustained performance). Examiners may categorize an MRA as closed if the bank implements and OCC verifies and validates the corrective action, or if the bank’s practices are no longer a concern because of a change in the bank’s circumstances. These new agency-wide tracking requirements for individual concerns within an MRA are intended to help improve macroprudential metrics, which OCC staff stated will be useful for further sharpening supervisory tools and practices.

In January 2010, FDIC issued examination guidance that outlined procedures for including matters requiring board attention—the FDIC equivalent of MRA—in examination reports and for tracking such matters for follow-up purposes. In the guidance, FDIC recognized the significance of ensuring timely communication of identified deficiencies that require attention by the bank’s board and management and timely and effective follow-up by examiners to determine the institution’s progress in addressing those concerns. FDIC began conducting additional training for examiners on the effective use of the guidance in 2010 and followed with further training in 2014 and 2015. FDIC tracks matters requiring board attention and related issues, identifies those actions that are outstanding, and requires examiner follow-up with bank management.

Federal banking regulators told us that through FSOC they share and receive information on potential systemic risks, some of which may affect banks. FSOC’s three primary purposes under the Dodd-Frank Act are to identify risks to the financial stability of the United States that could arise from the material financial distress or failure, or ongoing activities, of large, interconnected bank holding companies and nonbank financial companies, as well as risks that could arise outside the financial services marketplace; promote market discipline by eliminating expectations on the part of shareholders, creditors, and counterparties of these large companies that the U.S.
government will shield them from losses in the event of failure; and respond to emerging threats to the stability of the U.S. financial system. To achieve these purposes, the Dodd-Frank Act gave FSOC a number of important authorities that allow it to, among other things, collect information across the financial system so that regulators will be better prepared to address emerging threats and designate as systemically important certain nonbank financial companies and subject them to enhanced supervision by the Federal Reserve. The Dodd-Frank Act also established the Office of Financial Research (OFR) to serve FSOC and its member agencies by improving the quality, transparency, and accessibility of financial data and information; conducting and sponsoring research related to financial stability; and promoting best practices in risk management. In September 2012, we reported on challenges FSOC and OFR faced in fulfilling their missions, FSOC’s and OFR’s efforts to establish management structures and mechanisms to carry out their missions, and FSOC’s and OFR’s activities for supporting collaboration among members and external stakeholders. We made a number of recommendations to improve FSOC’s and OFR’s effectiveness. FSOC and OFR have made some progress in implementing these recommendations, but additional attention to them is needed.

Banking regulators also have taken steps to establish or enhance their internal capabilities for monitoring the financial system for emerging risks to banks. In 2010, the Federal Reserve established the Office of Financial Stability Policy and Research to coordinate and support the Federal Reserve’s work on financial stability. Working with other divisions, the office identifies and analyzes potential threats to financial stability; monitors financial markets, institutions, and structures; and assesses and recommends policy alternatives to address these threats. Federal Reserve staff explained that this office is focused on thinking about risk to the financial system as a whole, including the shadow banking system, and identifying which features of the financial system are weak. When issues surface that are centered on the banking system, they said this office coordinates with the Division of Banking Supervision and Regulation to address them. If the issues are not centered on the U.S. banking system, the office works with FSOC or other appropriate groups.

OCC conducts agency-wide risk assessments through its National Risk Committee, which was formed in the late 1990s to, among other things, monitor the condition of the federal banking system and emerging threats to the system’s safety and soundness. Members include senior agency officials who supervise banks of all sizes and officials from the law, policy, and economics departments. The committee meets biweekly to assess emerging risks and to evaluate and make recommendations on appropriate supervisory responses to address those risks. The National Risk Committee also issues quarterly guidance to examiners that provides perspective on industry trends and highlights issues requiring supervisory attention. In response to the financial crisis, OCC staff said the National Risk Committee began publishing a public Semiannual Risk Perspective report to provide bankers and other market participants with OCC’s views on emerging risks facing the industry and OCC’s supervisory priorities.
Also in response to the financial crisis, the National Risk Committee has developed various analytical tools as part of its monitoring efforts, including early warning metrics designed to identify early trends in financial markets, credit underwriting, credit performance, and bank performance. In response to the 2013 international peer review report recommendations, OCC staff said OCC established a pilot team in January 2015 to further develop and enhance OCC’s supervisory risk analysis functions.

In response to the 2007-2009 financial crisis, FDIC broadened its institutional approach to the identification and management of risk. In 2011, the Board of Directors created the new position of Chief Risk Officer and approved the creation of a new Enterprise Risk Committee. That committee includes division and office directors and meets at least monthly to review external and internal risks to FDIC. In 2014, the Enterprise Risk Committee established the External Risk Forum and the Management Risk Roundtable to focus specifically on external risks. The Management Risk Roundtable serves as an interdivisional forum for coordinating risk analysis, while the External Risk Forum meets at least eight times per year to discuss external risk topics proposed by the Management Risk Roundtable. To further support the External Risk Forum, FDIC staff said FDIC continues to convene Regional Risk Committees semiannually in each of the six FDIC supervisory regions. FDIC established these committees in 2003 to review and evaluate regional and economic banking trends and risks.

The Federal Reserve, OCC, and FDIC also have established or enhanced programs to supervise the largest, most complex, and systemically important institutions, both in response to Dodd-Frank Act requirements and internal initiatives. Under the Dodd-Frank Act, the Federal Reserve has responsibility for the supervision of systemically important financial institutions (SIFIs), including large bank holding companies, the U.S. operations of certain foreign banking organizations, and nonbank financial companies that are designated by FSOC for supervision by the Federal Reserve. The act also requires the Federal Reserve to impose a variety of regulatory reforms on SIFIs, including enhanced risk-based capital, leverage, and liquidity requirements. The Federal Reserve issued its final rule establishing enhanced prudential standards for bank holding companies in March 2014. To fulfill this mandate and to reorient its supervisory program in response to the supervisory lessons learned from the financial crisis, the Federal Reserve created the Large Institution Supervision Coordinating Committee, which is tasked with overseeing the supervision of the largest, most systemically important financial institutions in the United States. Federal Reserve staff said the committee was developed to provide strategic and policy direction for supervisory activities across the Federal Reserve System, improve the consistency and quality of supervision, incorporate systemic risk considerations, and monitor the execution of the resulting supervisory program. Federal Reserve staff noted that the committee takes a macroprudential perspective by considering information gleaned from its Quantitative Surveillance group, which is charged with identifying systemic and firm-specific risks through macroeconomic scenarios and loss forecasts, financial market vulnerabilities, and measures of interconnectedness among firms.
OCC supervises its largest and most complex banks through its Large Bank Supervision Program. The 2013 international peer review report recommended that OCC enhance risk identification by expanding the role of lead experts in its examinations. OCC announced in May 2014 that it would take steps to address the report's findings and recommendations, for example, by expanding the responsibilities of its Large Bank Supervision lead expert program to improve analysis, systemic risk identification, quality control and assurance, and resource prioritization. The lead experts provide additional guidance during horizontal reviews and input into the strategy planning process for each large bank portfolio. According to OCC staff, OCC's Large Bank Supervision program has also established and implemented its Large Bank Risk Committee, whose purpose includes discussing material portfolio risks, including emerging risks, and determining appropriate supervisory responses. FDIC was given significant new responsibilities under the Dodd-Frank Act to resolve failing systemically important financial companies. Specifically, FDIC obtained Orderly Liquidation Authority to resolve the largest and most complex bank holding companies and nonbank financial institutions, as well as the authority to review the resolution plans submitted by covered financial companies. In late 2010, FDIC established the Office of Complex Financial Institutions to carry out three core functions: (1) monitor risk within and across these large, complex firms from the standpoint of resolution; (2) conduct resolution planning and the development of strategies to respond to potential crises; and (3) coordinate with regulators overseas on the significant challenges associated with cross-border resolution. In 2011, the office established its complex financial institution monitoring program, which is intended to engage in continuous review, analysis, examination, and assessment of key risks and control issues at institutions with assets over $100 billion. FDIC staff said that the office's risk monitoring responsibilities were transferred to FDIC's Division of Risk Management and Supervision-Complex Financial Institutions group in early 2013. This group is responsible for all institutions designated as systemically important, rather than for institutions above a specific asset size. As we reported in July 2012, the Federal Reserve and FDIC have taken certain regulatory actions mandated by the Dodd-Frank Act toward facilitating orderly resolution, including efforts that could contribute to cross-border coordination. Specifically, certain large financial companies must provide the Federal Reserve and FDIC with periodic reports of their plans for rapid and orderly resolution in the event of material financial distress or failure under the Bankruptcy Code. For example, bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated for Federal Reserve supervision are to submit resolution plans on an annual basis. The resolution plans, or living wills, are to demonstrate how a company could be resolved in a rapid manner under the Bankruptcy Code. In 2014, FDIC and the Federal Reserve sent letters to a number of large financial companies identifying specific shortcomings with the resolution plans that those firms will need to address in their 2015 submissions, due on or before July 1, 2015, for the first group of filers.
Ongoing monitoring of banking regulators' efforts to identify and respond to emerging threats to the banking system can provide a starting point for identifying opportunities for more targeted and frequent assessments of these efforts. We have previously stated that identifying risks to U.S. financial stability and responding to emerging threats to stability are inherently challenging. It is important for oversight bodies such as IGs and the international auditing community to understand how the banking system could be vulnerable to such potential threats, so as to be better prepared to consider whether regulators are alert and responsive to the buildup of risks in various markets and the threat of such risks to the broader banking system. As regulators implement a forward-looking approach to identify and respond to emerging risks to the banking system, a near-real-time assessment of regulators' efforts could provide opportunities to identify weaknesses and provide timely suggestions to enhance their effectiveness. As such, we have developed a framework for oversight bodies and others to use to monitor regulatory efforts. Our framework has two objectives: (1) to monitor known emerging risks to the safety and soundness of the banking system and (2) to monitor regulatory responses to these risks, including detecting trends in regulatory responses that might signal a weakening of regulatory oversight. We have developed a monitoring program around each of these objectives, described below. The first part of the framework focuses on monitoring emerging risks to the banking system. Emerging risks are vulnerabilities in the banking system which, given a shock or series of shocks outside the system, can cause the failure of a systemically important bank or of multiple banks. Examples of vulnerabilities include a credit or asset price bubble, lax loan underwriting standards, insufficient bank capital or liquidity buffers to absorb losses or withdrawals, and risk exposure through a maturity mismatch between assets and liabilities. A triggering event or shock could be political or economic, such as turmoil in a region or the collapse of a market, or could even result from a natural disaster. The first part of the framework centers on three key areas of the financial system in which risks to banks can emerge: (1) bank financial condition and performance, (2) asset markets in which banks have direct or indirect exposure, and (3) overall economic conditions. The framework identifies both qualitative and quantitative sources of information to help users identify and monitor known emerging risks to the banking system. Qualitative sources of publicly available information on emerging risks include regulatory, market, and academic reports and studies. For example, OCC semiannually publishes a report identifying emerging risks to its regulated institutions, and the Federal Reserve and OCC publish periodic surveys on underwriting practices at their regulated institutions, which can provide insights into potential emerging risks from the three key areas. Qualitative monitoring can also help identify financial innovations and new banking products and services that could pose risks to the banking system. FSOC and OFR annual reports, which identify potential systemic risks that can include risks to the banking system, are another source of information.
In addition, market analyses, including those by trade publications, policy or research organizations, and the financial press, often highlight industry trends, some involving risky bank behavior. Academics also may produce work discussing emerging trends in the banking industry, including their potential impacts on bank capital, liquidity, or borrowers. To complement this review, the framework also identifies a set of financial indicators commonly used by regulators and market professionals that facilitate the monitoring of trends in banks' financial condition, asset markets, and general economic conditions. Regularly reviewing financial data will allow oversight bodies to independently stay current with these trends, track known risks to the banking system as the risks evolve, and better understand the context for regulatory responses to these risks, as we discuss below. Such a review also promotes continuity in monitoring efforts, as qualitative information sources on emerging risks tend to provide new or updated information on a periodic basis. The framework includes financial indicators that reflect bank condition and performance and that can provide insight into emerging risks at banks (and bank holding companies, in the case of the largest SIFIs), such as credit risk, liquidity risk, and market risk. For example, users can monitor capital levels and leverage, asset quality, earnings trends, funding liquidity, and sector loan concentrations. As a result, they may be able to identify risk buildups or deteriorating credit trends. Rapid increases in the price of particular kinds of assets, concentrations relative to historical norms, or increases in specific types of funding sources, for instance, could indicate high levels of credit risk or maturity mismatches. These are all early warning indicators of bank vulnerabilities or buildups of risk that could lead to failure if not addressed effectively and in a timely manner. In the lead-up to the most recent crisis, for example, house prices rose rapidly, and when the real estate bubble burst, banks with a significant concentration in commercial and residential real estate suffered heavy losses that wiped out their capital and ultimately led to their failures. The framework also includes indicators that detect changes in asset markets, such as sharp increases in asset prices or deviations from historical trends. In general, rapid growth in asset prices that leads to overvalued assets can create vulnerability in the financial system, including the banking sector, because the collapse of high prices can be destabilizing, especially if the assets are widely held and the values are supported by excessive leverage, maturity mismatch, or mispricing of risk. For example, the former Federal Reserve Chairman noted in a May 2013 speech that the collapse of housing prices and related mortgage losses during the recent crisis were concentrated in critical parts of the financial system and amplified through various financial instruments, resulting in panic that led to asset fire sales and the collapse of the credit markets. Conversely, he said, the bursting of the tech bubble, the rapid decline of overvalued technology stocks in the equity markets in 2000 through 2001, did not result in systemic risk because the stock investments were not funded with excessive leverage and maturity mismatch.
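To make this concrete, the following minimal sketch (in Python) shows how a framework user might compute a few of the bank-condition indicators described above from quarterly financial data. It is an illustration only: the column names and the screening thresholds are hypothetical assumptions, not values prescribed by the framework, and in practice screens would be calibrated against peer-group and historical norms.

    # Illustrative early-warning screens from quarterly bank data.
    # Column names and thresholds are hypothetical, not prescribed.
    import pandas as pd

    def flag_warning_signals(df: pd.DataFrame) -> pd.DataFrame:
        """df: one row per bank-quarter with columns bank, quarter,
        total_loans, cre_loans, brokered_deposits, total_assets,
        and tier1_capital."""
        df = df.sort_values(["bank", "quarter"]).copy()
        grouped = df.groupby("bank")
        # Year-over-year loan growth (four quarters back).
        df["loan_growth_yoy"] = grouped["total_loans"].pct_change(periods=4)
        # Sector concentration: CRE loans relative to capital.
        df["cre_to_capital"] = df["cre_loans"] / df["tier1_capital"]
        # Reliance on potentially volatile funding sources.
        df["brokered_share"] = df["brokered_deposits"] / df["total_assets"]
        # Simple screens for rapid growth, concentration, and funding risk.
        df["flag"] = (
            (df["loan_growth_yoy"] > 0.20)
            | (df["cre_to_capital"] > 3.0)
            | (df["brokered_share"] > 0.10)
        )
        return df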
Commodity markets are very broad; they include soft commodities, such as agricultural products like wheat, coffee, and sugar, and hard commodities, such as oil, gold, and rubber. Risks to the banking sector may emerge from exposure to these markets. For example, some banks might have lending exposures on their balance sheets that could be affected by falling oil prices. Should these loans become nonperforming or default, the banks would have to incur some losses unless they successfully hedged the risk. Falling prices also could limit affected borrowers' ability to obtain new credit from banks, as any additional debt could put a strain on their repayment capacity. Similarly, volatility tends to be negatively correlated with market performance. That is, volatility tends to decline as the stock market rises and increase as the stock market falls. When volatility increases, risk increases and expected returns decrease, which in turn could negatively affect access to, and the availability of, credit for businesses and households. Finally, our framework includes indicators that monitor the overall health of the broader economy. Interconnectedness and risk exposures among the financial sector and broader economy can magnify systemic risks. For example, the former Federal Reserve Chairman said that highly leveraged households and businesses are less able to withstand adverse changes in income and wealth, such as when financially stressed firms are forced to lay off workers who, lacking financial reserves, sharply cut their own spending. Such stress in the nonfinancial sector can adversely affect banks, as borrowers begin to default on mortgages and other types of consumer and business credit. As happened in the 2007-2009 financial crisis, this can create a cycle in which housing market instability becomes self-reinforcing as banks reduce lending and shed assets to conserve capital, thereby further weakening the financial positions of households and firms. Thus, in monitoring for information on emerging risks to banks' safety and soundness, it is important to pay attention to trends in the broader economy that could amplify these risks (such as trends in household income and debt, unemployment, and gross domestic product) and their potential impact. The second part of the framework focuses on monitoring corresponding regulatory responses to emerging risks, with the goal of flagging issues for further review where the response may not be clear or questions have arisen as to whether these measures have mitigated the risk. The review of regulatory responses builds on the financial monitoring efforts previously discussed, coupling efforts to better understand current financial conditions and emerging risks with an enhanced understanding of the regulatory efforts under way to address such conditions and risks. Thus, the monitoring of regulatory responses includes the analysis of regulatory actions taken to address emerging risks. Regulators can respond to emerging risks in the banking sector with a variety of supervisory tools. These include microprudential tools, which traditionally have focused on the safety and soundness of individual financial institutions, and macroprudential tools, which can be used to address vulnerabilities across the banking system and broader financial system. Microprudential tools include examinations and capital regulation for individual institutions; macroprudential policy tools include underwriting standards and countercyclical capital buffers.
Supervisory tools intended to address emerging risks can also be structural or cyclical. Structural tools are intended to build the resiliency of regulated institutions to vulnerabilities, while cyclical tools are intended to limit vulnerabilities by restraining financial institutions from excesses. Capital regulation is an example of a structural tool because requiring banks to hold more and higher-quality capital improves the ability of regulated financial institutions to withstand losses and maintain lending after a bubble has burst. Countercyclical capital buffers, on the other hand, are an example of a cyclical tool because they are intended to counter excessive credit growth that can fuel asset bubbles. Supervisory stress tests are tools that include both structural and cyclical aspects. In monitoring regulatory responses to emerging risks, it is important to identify the full range of tools regulators might employ to address such risks, the goals of these tools, and their potential tradeoffs. For example, a 2013 Federal Reserve Bank of New York staff report noted that microprudential tools have largely been developed and evaluated on the basis of the safety and soundness of individual institutions, not with respect to the effects on financial stability of practices that are common to many institutions, and it will be important to continue to evaluate their effectiveness in this context. Further, while microprudential and macroprudential policy tools can be complementary, these two approaches might also conflict with each other. Moreover, as a former Federal Reserve Board member noted, regulatory tools that aim to increase the resilience of regulated institutions and limit potential asset bubbles by restraining the growth of lending by such institutions can be circumvented when financial activities migrate into less regulated parts of the financial system, such as the shadow banking sector. As such, she said, credit extension and associated vulnerabilities can increase outside the heavily regulated banking system. To mitigate the risks that may emerge as a result, effective and timely coordination among the banking and other relevant financial regulators is essential. These analyses could provide information on regulators' willingness and ability to take prompt and forceful actions to mitigate problematic behavior at banks. They could also signal potential procyclical effects of regulation; that is, when regulation may not adequately discourage overly risky behavior during economic upswings or may inhibit bank lending during downturns, as banks may need to meet requirements during times when it is more difficult to do so. In the case of leveraged lending, for example, users could monitor leveraged lending volumes, leveraged lending loan losses, and reports on underwriting standards. (A leveraged loan is a loan where the obligor's post-financing leverage, as measured by debt-to-assets, debt-to-equity, cash flow-to-total debt, or other such standards unique to particular industries, significantly exceeds industry norms for leverage. Leveraged borrowers typically have a diminished ability to adjust to unexpected events and changes in business conditions because of their higher ratio of total liabilities to capital. These loans are usually structured, arranged, and administered by one or several commercial or investment banks, known as arrangers, and are then sold, or syndicated, to other banks or institutional investors.)
Where questions exist on regulators' efforts to mitigate emerging risks, including the propensity for such a risk to migrate to a less regulated sector of the market, framework users can prioritize those issues for further internal discussion and reach out to the regulators to obtain clarification if necessary. In applying the framework, there may be instances where users identify issues that regulators may not consider to be emerging risks but that others, such as market participants or market researchers, do. Users of the framework may also identify potential issues through their independent review of source material. Such discrepancies may raise questions about regulatory processes for monitoring and identifying emerging risks and warrant additional follow-up with regulators. Some issues may not represent an emerging risk to the banking system but may raise questions about regulatory oversight of banks or banking activities. As users apply the framework, it is essential that they develop processes to systematically evaluate the information gathered and to identify and prioritize those issues that merit continued monitoring and an assessment of regulatory responses. In many cases, as illustrated earlier, users of the framework can identify potential issues and assess regulatory responses to them, conducting additional follow-up with regulators only where this initial review reveals significant concerns that a particular risk might not be effectively mitigated. Trends in examination data, such as CAMELS ratings, can provide information on regulators' identification of and response to concerns about banking safety and soundness. Our framework uses CAMELS ratings to monitor regulatory activity in two ways: (1) trend analysis of composite and component CAMELS ratings for insights into emerging risks regulators have identified and (2) an econometric model that identifies shifts in regulators' assignment of CAMELS ratings relative to bank financial data. Regulators formulate the CAMELS composite ratings using the individual component ratings, but the composite rating is not a mathematical average of the components. Individual component ratings may be lower or higher than the overall composite rating assigned. As discussed earlier, banking regulators generally consider banks with a composite rating of 1 or 2 to be healthy, while banks receiving an unsatisfactory examination warrant a composite rating of 3 or above. Monitoring trends in CAMELS ratings could provide insights into risks that are emerging in the banking system and prompt further review of the actions regulators are taking to respond to those risks. To illustrate, in our June 2011 report on PCA, we found that increases in CAMELS composite or component ratings can serve as warning signals of distress in banks. While most banks that failed degraded from a CAMELS composite rating of 2 to a 4 in one quarter, they generally had at least one component rating of 3 prior to failure. Specifically, among the 292 failed banks we reviewed (across all regulators) as part of our study, most (76 percent) received at least one individual component CAMELS rating of 3 before failure. At the same time, most (65 percent) also moved past the composite CAMELS 3 rating in a single quarter (e.g., moving from a 2 to a 4) before failure, as the CAMELS composite ratings generally deteriorated precipitously. As we discussed earlier, CAMELS ratings have not always reflected long-term risk factors, particularly with respect to poor management practices.
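The first of these two approaches, trend monitoring of composite ratings, can be sketched simply, as shown below. The data layout is an assumption for illustration, and the rolling window and threshold are hypothetical; the econometric modeling of ratings relative to bank financial data described above would require considerably more structure (for example, an ordered-response model with bank financial ratios as covariates).

    # Illustrative trend analysis of composite CAMELS ratings.
    # Assumed data layout: one row per bank-quarter with columns
    # bank, quarter, and composite (rating 1-5; higher is worse).
    import pandas as pd

    def downgrade_rate_by_quarter(ratings: pd.DataFrame) -> pd.Series:
        """Share of rated banks downgraded (rating raised) each quarter."""
        ratings = ratings.sort_values(["bank", "quarter"]).copy()
        ratings["change"] = ratings.groupby("bank")["composite"].diff()
        downgraded = ratings["change"] > 0
        return downgraded.groupby(ratings["quarter"]).mean()

    def flag_rating_shifts(rate: pd.Series, window: int = 8,
                           k: float = 2.0) -> pd.Series:
        """Flag quarters where the downgrade rate departs from its
        trailing mean by more than k trailing standard deviations."""
        base = rate.rolling(window).mean().shift(1)
        spread = rate.rolling(window).std().shift(1)
        return (rate - base).abs() > k * spread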
While trend analysis of CAMELS data is useful for spotting affirmative regulatory actions (decisions to downgrade or upgrade ratings in response to examination findings), such analysis is limited in that it does not provide information when regulators are not changing CAMELS ratings in response to observed bank conditions. Ideally, in applying our framework, users could identify any issues or challenges regulators are facing in mitigating emerging risks at banks before problems manifest themselves on banks' balance sheets. To better observe changes in regulatory behavior as banking and economic conditions change, we are exploring the potential of using econometric models to monitor for shifts in regulatory behavior, which may help identify periods when regulators are having difficulty reining in risky behavior or are changing the levels of regulatory discretion they apply to their supervision activities. Such models could also assist in placing into context issues that arise during reviews of banking regulators and prompt further follow-up with regulators to understand more fully the reasons behind the changes. Trends in enforcement activity also can provide information on regulatory responses to emerging risks. For example, 2005-2007 was a period of strong earnings growth and profitability in the banking industry (see fig. 1). During this time, three banks failed (all in 2007) and 76 institutions or fewer were on the problem bank list. This period of growth and profitability was largely fueled by aggressive growth in higher-risk mortgage-related loans and funded by more volatile sources such as brokered funds and wholesale short-term borrowing. However, enforcement activity was relatively low (see fig. 2). From 2005 to 2007, the three regulators issued a total of 740 informal enforcement actions and 392 formal actions, an average of 247 informal actions and 131 formal actions per year during that time frame. Once the crisis began and banks began suffering losses, the level of informal and formal enforcement actions surged, as did the number of problem banks and failed banks. For example, from 2008 to 2010, the three regulators issued a total of 2,513 informal actions and 1,871 formal enforcement actions, an average of about 838 informal actions and 624 formal actions per year. Since 2010, as the crisis and its effects began to abate, both informal and formal actions have steadily declined. Monitoring trends in enforcement activity can provide oversight bodies with insight into identified risks and regulatory responses to those risks. When evaluating trends in enforcement actions, it is important to understand the underlying deficiencies in bank practice and performance, as enforcement actions are taken for many reasons. Understanding trends in enforcement activity in relation to identified risks could allow auditors to observe the rigor of regulatory responses to such risks. In doing so, it is important to also consider available information on other regulatory responses to identified risks. For example, reviewing trends in MRA, particularly those that are outstanding or repeated, could provide additional insights about regulators' efforts to take effective action to promptly address problems at banks.
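The per-year averages cited above follow directly from the period totals; a short check of the arithmetic:

    # Reproduce the per-year averages of enforcement actions cited above.
    totals = {
        "2005-2007": {"informal": 740, "formal": 392, "years": 3},
        "2008-2010": {"informal": 2513, "formal": 1871, "years": 3},
    }
    for period, t in totals.items():
        print(period,
              round(t["informal"] / t["years"]),  # 247, then 838
              round(t["formal"] / t["years"]))    # 131, then 624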
As with trend analysis of CAMELS ratings, trend analysis of MRA and other informal and formal enforcement actions might be useful for spotting changes in affirmative regulatory decisions to act on examination findings; for example, an increase in the number of MRA related to credit administration would indicate that examiners were concerned about risk management practices at banks and were flagging these issues for banks to address. While regulators have committed to using MRA more aggressively, determining whether they have done so requires an in-depth review of examination findings and regulators' actions to address them in accordance with their policies and procedures. Such a review could be conducted on a regular basis or used as a more tailored mechanism in response to findings from other monitoring activities. Our framework also recognizes that regulators can respond to emerging problems through regulation or guidance for the industry. Tracking the issuance of agency guidance and regulations in response to emerging issues will allow users of the framework to better understand how regulators deal with a particular risk and also allow them to flag potential issues in the efficiency and effectiveness of interagency coordination in response to risks that affect the banking system. For guidance and regulation to be effective, they must be issued in a timely manner. As noted earlier, although banking regulators were concerned about the rapid buildup of risky CRE concentrations across the banking system, staff from one regulator said they acted too late in drafting and issuing interagency guidance for the industry. Losses on these higher-risk loans were a primary factor in bank failures resulting from the financial crisis. The 2013 OCC peer review noted that delays in the issuance of guidance or regulation to address emerging risks can be demoralizing for examiners, who may perceive that agency management has not acted on their risk identification and warnings. To monitor the timeliness of guidance or regulation, users of the framework would monitor quantitative and qualitative sources for trend information on the identified problem area for evidence that risk was increasing, flagging potentially harmful delays in regulatory action for further follow-up. For guidance and regulation to be effective, they also must serve to mitigate the emerging risk. For that reason, we plan to review quantitative and qualitative sources for information on the effectiveness of guidance or regulation in addressing the problem identified. For example, in April 2013, Federal Reserve and OCC staff issued a study analyzing the impact of the 2006 CRE guidance. The study found that once the crisis was underway, banks responded to market conditions and the guidance by shrinking their holdings of CRE loans, particularly for higher-risk ADC. Should CRE lending show strong growth in the future, it will be important to continue to monitor the effectiveness of the guidance for curbing excessive risks in CRE lending while institutions are still profitable. Banking regulators' primary objective is the promotion of the safety and soundness of banks and the banking system. Effective regulation and supervision can, in turn, provide an important safeguard against future financial crises and an important source of confidence to the market about the general health and resiliency of the banking sector.
Lessons learned from past banking-related crises identified the need for federal banking regulators to respond proactively to problems developing in the banking system. Building on these lessons, we plan to implement our framework to monitor regulatory responses to emerging risks to the banking system. We intend to refine our framework over time by incorporating new sources of qualitative and quantitative information on emerging risks and by developing additional models as new analytical tools to aid in monitoring and evaluating regulatory responses to these risks become available.
Keith Friend and Harry Glenos (OCC) and Joseph B. Nichols (Federal Reserve), An Analysis of the Impact of the Commercial Real Estate Guidance (Washington, D.C.: April 2013).
We are not making recommendations in this report. We provided a copy of this draft report to the Federal Reserve, FDIC, and OCC for review and comment. The agencies did not offer formal comments, but each agency provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and members and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. The thrift and commercial bank crises that emerged in the 1980s and the 2007-2009 financial crisis raised questions on our part about federal banking regulators' efforts to learn from past weaknesses in regulatory oversight of insured banks and apply the appropriate lessons learned. Such regulatory lessons learned also may offer potential insights for Congress, the auditing community, and other "watchdog" entities in more proactively assessing federal banking regulators' efforts to identify and respond to potential emerging risks to insured banks. This report (1) examines regulatory lessons learned from the 1980s thrift and commercial bank crises and the 2007-2009 financial crisis, focusing on the efforts of federal banking regulators to identify and address emerging risks to the solvency of insured banks before the onset of these crises; and (2) offers a strategy that we and other oversight bodies, such as inspectors general (IGs) and the international auditing community (hereafter, oversight bodies), can use to provide continuous future oversight of regulatory responses to emerging risks. To identify regulatory lessons learned from the crises, we reviewed and analyzed studies by GAO, federal banking regulator IGs, the federal banking regulators, and academics. To identify relevant academic studies, we performed a literature search using the following databases: ProQuest (which included SSRN, EconLit, and ABI/INFORM Global), JSTOR, and NBER, using the following keywords or combinations of them: financial crisis, savings and loan, thrift, lessons learned, regulatory action, banking, and great recession. We performed these searches for the period between January 1980 (the commencement of the 1980s thrift and banking crises) and August 2013 and identified 24 studies.
We reviewed each study to identify those lessons learned that pertained specifically to regulatory efforts to identify and address emerging risks to the banking system in the years leading up to the crises. This search did not identify many studies on relevant lessons learned; as such, we relied largely on our own prior work in the area. We also interviewed the federal banking regulators—the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), and the Board of Governors of the Federal Reserve System (Federal Reserve)—and two of their IGs for their perspectives on regulatory lessons learned and regulatory actions taken to address them. We analyzed the information we gathered to identify common and unique challenges regulators faced across the crises in identifying emerging risks and responding to them effectively. To incorporate the regulatory lessons learned into a strategy that oversight bodies and others can use to monitor regulatory responses to emerging risks, we established a framework for monitoring (1) known emerging risks to the safety and soundness of the banking system and (2) regulatory responses to these risks, including detecting trends in regulatory responses that might signal a weakening of regulatory oversight. To develop the first part of our framework—monitoring known emerging risks to the safety and soundness of the banking system—we first reviewed frameworks or programs for monitoring domestic and global financial systems that included banking systems. We sought to identify relevant frameworks developed by federal banking regulators and federal agencies through our interviews with the regulators and prior audit work. We identified relevant frameworks developed by the Federal Reserve, OCC, and the Office of Financial Research (OFR), as well as a banking profile published quarterly by FDIC. We also sought to identify relevant frameworks developed by foreign banking regulators and international organizations that focus on global finance or banking issues. Through our review, we identified relevant frameworks and monitoring programs developed by the following entities: the European Central Bank, the Financial Stability Board, the International Monetary Fund, and the Bank for International Settlements. We analyzed these domestic and global frameworks and programs to identify key areas where risks to the banking system could arise and identified three: bank condition and financial performance, asset markets in which banks may have direct or indirect exposure, and overall economic conditions. First, potential sources of risk can emerge from within the banking sector, such as banks' business models, size, scope of operations, and organizational complexity, among other things. Other risks that can arise from bank condition are driven by risk management practices, loan portfolio composition, and underwriting standards. Second, risks to the banking system can originate from other areas of the financial system, particularly asset markets in which banks participate either directly or indirectly; risk emanating from banks likewise could spread and spill over to other industry sectors. Developments in these asset markets, such as rapid asset price growth or decline, can have a direct impact on bank portfolios and banks' capacity to access funding in a cost-effective way.
The third area we identified through our analysis as a potential source of risk to banks was the broader economy, in that economic conditions generally affect asset markets and the profitability of banks and bank customers and counterparties. A growing economy with low unemployment tends to have a more favorable impact on banks and markets than a recessionary economy with high unemployment. As part of our framework, we also identified financial indicators that will assist users of the framework in monitoring potential risks to the banking industry emerging from the focus areas. For example, a number of our indicators for bank condition and safety and soundness are derived from the Uniform Financial Institutions Rating System, commonly known as CAMELS. Regulators use this rating system to, among other things, assess the soundness of banks on a uniform basis, identify those institutions requiring special supervisory attention, monitor aggregate trends in the overall soundness of financial institutions, and assess their exposure to risks. The ratings reflect a bank's condition in six categories, or CAMELS components: capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risk. For each CAMELS component other than management, there are a number of financial ratios that can be calculated based on Reports of Condition and Income (Call Report) data that assist in the evaluation of how well or poorly a bank is performing in that category. We selected those ratios that could be determined quantitatively based on Call Report data. The management component of CAMELS, for example, does not lend itself to the same computation as that used for ratios based on capital or earnings; it requires more qualitative assessment by examiners and is therefore more discretionary and more subjective than the other CAMELS components. Also, not all of the ratios that pertain to a component need be included to show a trend. For example, for asset quality, we may choose to illustrate loans that are 90 days or more past due rather than also showing those that are 30 or 60 days past due, because the implications of loans 90 days or more past due are more severe. From the monitoring frameworks we reviewed, we identified indicators that track asset price growth in key markets—including the residential and commercial real estate markets, equity market, Treasury market, corporate bond market, and the commodities market. In addition, we include indicators that track leverage and volatility that could affect the banking system. We also identified indicators that track the overall health of the broader economy from the monitoring frameworks we reviewed, such as household income and debt, unemployment, and gross domestic product. In addition to financial indicators, our framework also incorporates publicly available qualitative information on emerging risks to the banking sector from banking regulators and other entities that might have a unique or varying perspective on emerging risks, such as investors, rating agencies, trade associations, and academics. Our framework does not prescribe specific entities or sources to review; rather, we recommend incorporating a wide range of available analyses and perspectives.
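The sketch below illustrates the kind of ratio computation involved. The field names are hypothetical stand-ins of our own devising; actual Call Report items are identified by specific schedule codes, and the ratios shown are only a subset of those an examiner or framework user would consider.

    # Illustrative CAMELS-style ratios from Call Report-like data.
    # Field names are hypothetical stand-ins for actual Call Report items.
    import pandas as pd

    def camels_style_ratios(call: pd.DataFrame) -> pd.DataFrame:
        out = pd.DataFrame(index=call.index)
        # Capital adequacy: equity capital relative to total assets.
        out["capital_ratio"] = call["equity_capital"] / call["total_assets"]
        # Asset quality: loans 90 or more days past due to total loans.
        out["past_due_90_share"] = call["loans_past_due_90"] / call["total_loans"]
        # Earnings: return on average assets.
        out["roa"] = call["net_income"] / call["avg_total_assets"]
        # Liquidity: liquid assets relative to total assets.
        out["liquid_share"] = call["liquid_assets"] / call["total_assets"]
        return out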
In developing the second part of our framework, we identified the range of supervisory tools that banking regulators have available to respond to emerging issues in banks and the banking system, including both microprudential and macroprudential tools. From our prior work, we identified microprudential tools, which traditionally have focused on the safety and soundness of individual financial institutions. From the monitoring frameworks we reviewed, we identified examples of macroprudential policy tools, which can be used to address risk emerging across the banking system and broader financial system. We also identified those supervisory tools that can be observed and analyzed over time to monitor for changes in regulatory behavior that could signal potential weaknesses in regulatory oversight, such as examinations and enforcement actions. We did this by reviewing our prior work assessing regulatory responses to risks as they emerged in the lead-up to the 1980s bank and thrift crises and the 2007-2009 financial crisis. To supplement this effort, because the information and type of analysis we were interested in required knowledge of both regulatory activities and financial trends, we interviewed a judgmental (purposive, non-generalizable) sample of seven financial market specialists for their views on regulatory activities that could be effectively monitored to detect meaningful changes in regulatory behavior. To ensure the financial market specialists represented a broad range of views and professional experience, we recruited participants from government, academia, and business who had in-depth knowledge of the 1980s thrift and commercial bank crises or the 2007-2009 financial crisis, as evidenced by their holding key leadership positions in government or industry or having published relevant academic research on the regulation of financial services. We identified these financial market specialists through prior GAO studies, academic publications, and recommendations by other financial market specialists. To illustrate trends in enforcement activity across various economic cycles, we obtained data on the number and type of enforcement actions taken against financial institutions supervised by OCC, FDIC, and the Federal Reserve, as published in their annual reports dated 2005 through 2014. We have assessed the reliability of federal banking regulators' enforcement action data as part of previous studies and found the data to be reliable for the purposes of our review, which was to illustrate trends in informal and formal enforcement actions. We also obtained Call Report data from the SNL Financial database on the yields on earning assets of financial institutions from 2005 to 2014 for four bank size groups. These bank size groups include (1) banks with over $50 billion in assets, (2) banks with assets between $10 billion and $50 billion, (3) banks with more than $1 billion but less than $10 billion in assets, and (4) banks with $1 billion or less in assets. We assessed the reliability of the SNL Financial data by reviewing existing information about the data and the system that produced them. In addition, we have assessed the reliability of SNL Financial data as part of previous studies.
As such, we found the data to be reliable for the purposes of our review, which was to illustrate trends in bank profitability over time. We conducted this performance audit from February 2013 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, A. Nicole Clowers (Managing Director), Karen Tremba (Assistant Director), Stefanie Jonkman (Assistant Director/Analyst-in-Charge), Abigail Brown, William Cordrey, Janet Eackloff, M'Baye Diagne, Bethany Benitez, Pamela Davidson, Rachel DeMarcus, Maksim Glikman, Michael Hansen, Risto Laboski, Marc Molino, Robert Pollard, and Barbara Roesmann made key contributions to this report.
Weakness in federal oversight was one of many factors that contributed to the size of federal losses and the number of bank failures in banking-related crises over the past 35 years—including the 1980s thrift and commercial bank crises and the 2007-2009 financial crisis. Resolving the failures of banks and thrifts due to these crises resulted in estimated costs to federal bank and thrift insurance funds of over $165 billion, as well as other federal government costs, such as taxpayer-funded assistance during the financial crises. Ongoing monitoring of banking regulators' efforts to identify and respond to emerging threats to the banking system can provide a starting point for identifying opportunities for more targeted and frequent assessments of these efforts. This report (1) discusses regulatory lessons learned from these past crises and (2) offers a framework that GAO and other oversight bodies, such as inspectors general, can use to provide continuous future oversight of regulatory responses to emerging risks. To do this work, GAO reviewed its prior studies and those of federal banking regulators, the regulators' inspectors general, and academics that evaluated regulators' efforts to identify and respond to risks that led to bank failures in past crises. In developing an oversight framework, GAO reviewed frameworks for monitoring domestic and global financial systems to identify key areas in which risks to banks can arise. GAO interviewed regulators to identify supervisory actions that can be used to respond to emerging risks. Past banking-related crises highlight a number of regulatory lessons learned. These include the importance of the following:
Early and forceful action. GAO's past work on failed banks found that regulators frequently identified weak management practices that involved the banks in higher-risk activities early in each crisis, before banks began experiencing declines in capital. However, regulators were not always effective in directing bank management to address underlying problems before bank capital began to decline, and by then it was often too late to avoid failure. For example, examiners did not always press bank management to address problems promptly or issue timely enforcement actions.
Forward-looking assessments of risk. The crises revealed limitations in key supervisory tools for monitoring and addressing emerging risks. During examinations, examiners did not always incorporate forward-looking information when assigning supervisory ratings based on banks' exposure to risk.
For example, ratings did not consistently reflect factors such as poor risk-management practices that, while not causing losses in the short term, caused losses in the long term.
Considering risks from the broader financial system. The 2007-2009 financial crisis demonstrated that risks to bank safety and soundness could not be assessed by looking only at the performance and activities of individual banks or groups of banks. Rather, regulators must look across the financial system to identify emerging risks.
In response to these lessons learned, regulators said they have taken a number of steps intended to improve their ability to identify and respond to emerging risks—including instituting more granular tracking of bank compliance with examination recommendations to address emerging problems in a timely manner; incorporating more forward-looking elements into supervisory tools; and participating in systemic risk-monitoring efforts as members of the Financial Stability Oversight Council. GAO and others have begun to review some of these initiatives. GAO has incorporated the regulatory lessons learned into a two-part framework for monitoring regulators' efforts to identify and respond to emerging risks to the banking system. First, the framework incorporates quantitative information in the form of financial indicators that can help users of the framework track and analyze emerging risks, as well as qualitative sources of information on emerging risks, such as regulatory reports and industry and academic studies. Second, the framework monitors regulatory responses to emerging risks, such as agency guidance, with the goal of flagging issues for further review when questions arise about the effectiveness of these responses. Users—oversight bodies such as inspectors general—can analyze regulatory actions taken to address emerging risks and gain insights into regulators' ability to take forceful actions to address problematic behavior at banks. Such ongoing monitoring can provide a starting point for identifying opportunities for more targeted and frequent assessments of these efforts. GAO plans to implement this framework in its future work.
The National Institutes of Health (NIH) emphasizes lowering cholesterol as an important aspect of preventing coronary heart disease. In 1985, NIH's National Heart, Lung, and Blood Institute (NHLBI) initiated the National Cholesterol Education Program (NCEP), which has undertaken a major effort to encourage individuals to measure, track, and reduce their cholesterol levels (notably total and low-density lipoprotein (LDL) cholesterol) with the objective of reducing mortality and morbidity from coronary heart disease. The focus on cholesterol reduction has come at a time when increased emphasis has also been given to modifying other risk factors associated with heart disease, such as cigarette smoking and hypertension. One aspect of the efforts to broaden awareness of cholesterol as a risk factor has been to encourage individuals to "know your cholesterol number." This advice has been heeded by the public. According to data compiled by the Centers for Disease Control and Prevention (CDC) from 47 states and the District of Columbia, the percentage of adults who reported having had their total cholesterol checked in the past 5 years ranged from 56 percent in New Mexico to 71 percent in Connecticut (median across the states sampled: 64 percent). The percentage of persons who had been told by a health professional that their cholesterol is high ranged from 14 percent in New Mexico to 21 percent in Michigan (median across the states sampled: 17 percent). For a widespread cholesterol-lowering campaign to be credible, however, test results must be accurate across the diverse devices and settings in which cholesterol is measured. This is because the guidelines for treating elevated cholesterol are predicated on test results that place an individual into different risk categories. In this report, we discuss what is known about the accuracy of cholesterol testing, including how it is measured, factors that hinder accurate measurements, and efforts to improve the accuracy of cholesterol tests. Coronary heart disease is one of the leading causes of death for both men and women in the United States, accounting for 478,530 deaths in 1991, according to the American Heart Association (AHA). Of these deaths, 52 percent were men and 48 percent were women. Approximately 6.3 million people alive today in the United States have a history of heart attack, chest pain, or both; of this group, 44 percent are 60 years of age and older, 25 percent are 40 to 59 years old, and 31 percent are younger than 40. Further, 1.5 million Americans are expected to suffer a heart attack in 1994. The death rate from heart attack in the United States, however, declined 32 percent between 1981 and 1991. Reasons cited as contributing to this decline include improved medical care of patients and preventive measures in the population. AHA estimates that total costs associated with coronary heart disease are $56.3 billion per year. Of this figure, $37.2 billion is spent on hospital and nursing home services, $8.7 billion on physicians' and nurses' services, and $2.4 billion on drugs. Lost output associated with heart disease is valued at $8 billion. Because of the large sums being spent on treatment, to say nothing of the attendant psychological and social costs, prevention has been emphasized. NHLBI has established several education programs, such as NCEP, to inform the public about different risk factors associated with coronary heart disease and to provide guidelines for reducing risks that are modifiable.
Other programs include the National High Blood Pressure Education Program, which began in 1972; the Smoking Education Program (1985); and the Obesity Education Initiative (1991). A consensus development conference of scientific experts brought together by NIH in 1984 concluded that the risk of coronary heart disease is positively related to increased levels of serum cholesterol and that lowering elevated cholesterol levels can reduce coronary heart disease risk for individuals. The conference experts based their conclusions on the accumulated evidence from a large body of epidemiological, animal, metabolic, and clinical studies. Of major importance were the results of the Lipid Research Clinics Coronary Primary Prevention Trial, a large randomized study completed in 1984 that provided evidence that treatment to lower high cholesterol levels in patients can reduce the risk of coronary heart disease. The conference experts further recommended plans for establishing the National Cholesterol Education Program, which began in 1985. NCEP has convened several expert panels and issued a series of guidelines, reports, and educational materials on the management and control of cholesterol for health care professionals and the general public. The program emphasizes two parallel approaches: (1) a clinical approach that attempts to identify and treat individuals who are at high risk and (2) a broader population approach that aims to reduce cholesterol levels for the entire population. Clinical guidelines for reducing elevated cholesterol levels in adults over 20 years of age were first issued by the Adult Treatment Panel in 1987 and were subsequently updated in a second expert panel report in 1993. These guidelines cover the classification of cholesterol, patient evaluation, and dietary and drug treatments. In 1990, NCEP outlined population strategies to lower total and LDL cholesterol by encouraging all Americans to be aware that elevated cholesterol is a potential risk factor for coronary heart disease, have their cholesterol measured at regular intervals, and modify their diet. NCEP published another report in 1991 that addressed cholesterol issues in children and adolescents. It emphasized strategies for encouraging the nation's youths to reduce their intake of saturated fat and cholesterol as well as identifying and treating those whose high serum cholesterol levels put them at increased risk for heart disease as adults. The recommendations made in NCEP reports are disseminated and implemented through 40 agencies, such as AHA, that conduct health education and information activities. The current NCEP adult treatment guidelines emphasize classification and treatment decisions based on a person's risk status, which is defined not only by serum cholesterol levels (including total cholesterol and its low-density lipoprotein (LDL) and high-density lipoprotein (HDL) components) but also by what other coronary risk factors are present. Those with symptoms of coronary heart disease or with at least two other coronary heart disease risk factors are considered candidates for more intensive treatment.
These other coronary heart disease risk factors are as follows:
Hypertension (>140/90 mm Hg, or on antihypertensive medication)
Current cigarette smoking
Diabetes
Family history of myocardial infarction or sudden death before age 55 in father or male sibling, or before age 65 in mother or female sibling
Age: male >45 years of age, or female >55 years of age or postmenopausal and not on estrogen replacement therapy
Low HDL cholesterol (<35 mg/dL)
A negative (protective) risk factor is HDL cholesterol >60 mg/dL.
The guidelines recommend that all adults have their total cholesterol measured at least once every 5 years and that HDL cholesterol be measured at the same time. As shown in figure 1.1, adults without evidence of existing coronary heart disease are classified initially into three levels based on total cholesterol levels—desirable (below (<) 200 mg/dL), borderline high (200-239 mg/dL), and high (equal to or above (>) 240 mg/dL). An HDL cholesterol level of less than 35 mg/dL is considered low and a contributing risk factor for coronary heart disease. The cutpoints for total cholesterol are based largely on epidemiological data that have shown that the risk of heart disease increases as cholesterol levels rise. For example, among the 361,000 men screened for the Multiple Risk Factor Intervention Trial, those at or above the 90th percentile of total cholesterol, about 263 mg/dL, had a four times greater risk of death from coronary heart disease than those in the bottom 20 percent (<182 mg/dL). As indicated in figure 1.1, individuals are recommended for a followup lipoprotein analysis depending on an assessment of their total cholesterol and HDL cholesterol levels in conjunction with the presence or absence of other coronary heart disease risk factors. Thus, candidates for a subsequent lipoprotein analysis include individuals with (1) high total cholesterol (>240 mg/dL); (2) borderline-high cholesterol (200-239 mg/dL) and low HDL cholesterol (<35 mg/dL); or (3) borderline-high cholesterol (200-239 mg/dL), higher HDL cholesterol (>35 mg/dL), and two or more risk factors. Lipoprotein analysis includes measurement of fasting levels of total cholesterol, HDL cholesterol, and triglycerides and the calculation of LDL cholesterol, which is derived by a mathematical formula. The subsequent classification of adults based on LDL cholesterol levels is shown in figure 1.2. NCEP also classifies LDL cholesterol into three levels—desirable (<130 mg/dL), borderline-high risk (130-159 mg/dL), and high risk (>160 mg/dL). Decisions to begin diet or drug treatment are then based on these levels in combination with other risk factors (see table 1.1). Thus, candidates for diet therapy without known symptoms of coronary heart disease include those with high LDL cholesterol (>160 mg/dL) or those with borderline-high LDL cholesterol (130-159 mg/dL) plus two or more risk factors. NCEP recommends diet therapy as a first line of treatment for most patients, except those at particularly high risk who may warrant immediate drug intervention, such as individuals with existing coronary heart disease. NCEP's recommended step I and step II diets are designed to reduce consumption of saturated fat and cholesterol and to promote weight loss in overweight patients. If diet therapy is ineffective at lowering LDL cholesterol levels, then drug treatment is advised. NCEP has developed a series of guidelines for administering the different types of drugs that are available to lower cholesterol.
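The classification cutpoints above, together with the LDL calculation, can be expressed compactly. The sketch below encodes the NCEP cutpoints as described and uses the Friedewald equation (LDL = total cholesterol - HDL - triglycerides/5, all values in mg/dL), the calculation conventionally used in lipoprotein analysis; it is generally considered unreliable for fasting samples with triglyceride levels of 400 mg/dL or more.

    # NCEP adult cutpoints (mg/dL) and the Friedewald LDL calculation.
    # Boundary handling follows the guideline text, which treats the
    # 240 and 160 cutpoints as "equal to or above."

    def classify_total_cholesterol(tc: float) -> str:
        if tc < 200:
            return "desirable"
        if tc < 240:
            return "borderline high"
        return "high"

    def classify_ldl(ldl: float) -> str:
        if ldl < 130:
            return "desirable"
        if ldl < 160:
            return "borderline-high risk"
        return "high risk"

    def friedewald_ldl(tc: float, hdl: float, tg: float) -> float:
        """LDL = TC - HDL - TG/5 (mg/dL), for fasting samples."""
        if tg >= 400:
            raise ValueError("calculation unreliable at TG >= 400 mg/dL")
        return tc - hdl - tg / 5.0

    # Example: TC 205, HDL 45, TG 150 gives LDL 130 ("borderline-high risk").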
It should be noted that initiating drug treatment commits patients to long-term therapy, which may last for the rest of their lives. Some perspective on what these treatment categories and recommendations mean for Americans can be seen in recently collected, nationally representative data from the first phase of the Third National Health and Nutrition Examination Survey (NHANES III). These data indicate that the average total serum cholesterol level is 205 mg/dL for men 20 years old and older and 207 mg/dL for women 20 years old and older. As shown in figure 1.3, women tend to have lower total cholesterol levels than men up until the ages of 45 to 54, at which point their levels increase above those of men. This difference may be attributed, in part, to menopause, which influences women's lipid and hormonal levels. Whether this increases women's coronary heart disease risk is not clear, according to some research. Overall, women appear to have higher HDL cholesterol levels than men do, which may also account for part of this difference. While trend data indicate that cholesterol levels have declined since the early 1960s, 52 million U.S. adults, or 29 percent, have an LDL cholesterol level that is classified as borderline-high or high according to the NCEP guidelines and that, when combined with other risk factors, makes them candidates for dietary therapy. Of these 52 million adults, about 12.7 million have cholesterol levels sufficiently elevated that they might be candidates for drug therapy (about one third of this group would be patients with coronary heart disease). NCEP's Laboratory Standardization Panel (LSP) has issued two reports on cholesterol measurement. The first report, issued in 1988, focused attention on the importance of accurate measurements. In the report's introduction, the panel stated: "the current state of reliability of blood cholesterol measurements made in the United States suggests that considerable inaccuracy in cholesterol testing exists." That report, along with press articles critical of cholesterol testing in 1987, drew attention to the need for more consistent and replicable results. In addition to outlining the state of the art in cholesterol testing, the NCEP/LSP reports describe factors that can affect test accuracy and reliability: analytical problems (laboratory analyzer inaccuracy and imprecision) and preanalytical factors (biological variation, disease, and the conditions under which a sample is taken). The second report, published in 1990, also contains a number of recommendations to improve laboratory testing systems. These include using only analytical systems whose standardization process is linked to the National Reference System for Cholesterol (NRS/CHOL, discussed in chapter 3), participating in external surveillance programs (proficiency testing), comparing results with other laboratories, and using quality controls to monitor analytical performance. Recognizing the problem of measurement variability in cholesterol testing, the Adult Treatment and Laboratory Standardization Panels recommended that total and LDL cholesterol be measured on two separate occasions and the results averaged. If the cholesterol results differ by 30 mg/dL or more, then a third test should be conducted and the three tests averaged to assess an individual's cholesterol level. NCEP/LSP established the goal that a single serum total cholesterol measurement should be accurate within ±8.9 percent.
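The panels' retesting recommendation amounts to a simple decision rule, and the implication of the ±8.9 percent accuracy goal for classification near the cutpoints can be illustrated the same way. The two minimal sketches below are ours, not NCEP's: in the second, the error model (zero bias, normally distributed error, and the ±8.9 percent goal treated as an approximate 95 percent bound, implying a coefficient of variation of about 4.5 percent) is an assumption for illustration only, since the NCEP goal reflects combined bias and imprecision.

    # Retesting rule: average two measurements; if they differ by
    # 30 mg/dL or more, obtain a third and average all three.
    from typing import Optional

    def assessed_cholesterol(first: float, second: float,
                             third: Optional[float] = None) -> float:
        if abs(first - second) < 30:
            return (first + second) / 2.0
        if third is None:
            raise ValueError("results differ by 30 mg/dL or more; "
                             "a third measurement is needed")
        return (first + second + third) / 3.0

    # Illustrative misclassification probability near the 240 mg/dL
    # cutpoint, under the assumed error model described above.
    from math import erf, sqrt

    def prob_measured_at_or_above(true_level: float,
                                  cutpoint: float = 240.0,
                                  cv: float = 0.045) -> float:
        """Probability that a single measurement falls at or above
        the cutpoint, given a patient's "true" level."""
        sd = cv * true_level
        z = (cutpoint - true_level) / sd
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))

    # A patient with a "true" level of 230 mg/dL would test at or
    # above 240 in roughly one measurement in six:
    print(round(prob_measured_at_or_above(230.0), 2))  # about 0.17

Under these assumptions, a single measurement misclassifies a nontrivial share of patients whose true levels lie near a cutpoint, which is the rationale for the averaging rule above.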
This ±8.9 percent goal took effect in 1992, replacing the interim goal of ±14.2 percent that had been established in 1988. NCEP has not previously issued goals for HDL and LDL cholesterol measurement; however, an expert panel convened by NCEP has recently developed such goals and they are expected to be published shortly. The Clinical Laboratory Improvement Amendments of 1988 (Public Law 100-578) also mandated that the Secretary of Health and Human Services (HHS) establish performance standards such as quality control, quality assurance, and personnel regulations. HCFA testing requirements for total cholesterol, authored by CDC, stipulate a ±10 percent criterion for acceptable performance for proficiency testing purposes. HCFA’s criterion for acceptable HDL cholesterol performance on proficiency testing specimens is ±30 percent of the established target value. Although the NCEP guidelines advocate multiple measurements, there has been concern by some researchers that, in practice, physicians may not take measurement variability into account when making treatment decisions about cholesterol. Given that the NCEP classification levels for cholesterol are relatively narrow and that the average cholesterol levels for the U.S. population are in the borderline-high category at about 205 mg/dL, there is potential for patients to be misclassified. That is, measurement errors can lead to individuals with “true” levels below the high cutpoint of 240 mg/dL for total cholesterol or 160 mg/dL for LDL cholesterol being put on treatment (termed a false positive) or, conversely, those with “true” levels above the cutpoints not being treated (a false negative). In discussions with the requester, we agreed to focus our review of cholesterol measurement on the following evaluation questions:
1. How is cholesterol measured? (See chapter 2.)
2. What is known about the accuracy and precision of cholesterol measurement techniques? (See chapter 3.)
3. What factors influence cholesterol levels? (See chapter 4.)
4. What is the potential effect of uncertain measurement? (See chapter 5.)
To answer these questions, we identified and reviewed relevant scientific literature published mainly since 1988 and synthesized data across studies. We selected this period because it covered the time since the first NCEP cholesterol measurement goals were issued, permitting a benchmark by which later testing could be judged. We conducted our bibliographic search using on-line data bases of medical literature. Other sources included articles recommended by experts in the field and the bibliographies of articles published in medical and related research journals. We identified and reviewed approximately 125 books and articles relevant to cholesterol measurement in this manner. We supplemented our review of the medical literature with interviews with a range of individuals who have expertise in the field. These included government agency officials involved with cholesterol measurement and testing issues at CDC, the Food and Drug Administration (FDA), HCFA, NIH, and the National Institute of Standards and Technology (NIST). We also interviewed manufacturers of analyzers in private industry, university researchers, and representatives of organizations that conduct proficiency testing for laboratories. In order to have a better understanding of the testing process, we visited a major hospital laboratory facility to discuss quality control issues and challenges facing practitioners.
We also visited a major manufacturer of analyzers to learn more about the production process (quality control procedures, analyzer calibration, potential sources of inaccuracy) as well as industry concerns about the accuracy and precision of cholesterol testing. We did not, however, independently evaluate laboratory performance in any of the different settings where cholesterol tests are conducted across the country. In this chapter, we answer the first evaluation question: How is cholesterol measured? The discussion begins with an overview of cholesterol’s role in the body and analyzes how total cholesterol, HDL, LDL, apolipoproteins, and triglycerides are measured, focusing on laboratory techniques. We also describe the range of settings where cholesterol testing is done and review the types of analyzers for sale in the U.S. market. Cholesterol measurement focuses mainly on determining levels of total, HDL, and LDL cholesterol. Triglyceride levels are also included in lipid profiles. Cholesterol is commonly tested in a variety of settings ranging from large health fairs to more specialized clinical laboratories. No national data are available on the number of laboratories that conduct cholesterol tests, the number of cholesterol testing devices in use in laboratories, or the number of such tests that are done each year. The universe of U.S. laboratories that conduct different types of medical tests is large, however, with some 154,403 having registered with HCFA by October 1993. While HCFA data indicate that physicians’ offices predominate in the testing arena, the distribution of cholesterol tests is not ascertainable from these data. Test results from such settings may be less accurate because of the types of devices used and more limited staff expertise in conducting tests. In addition to the broad range of settings where measurements are conducted, a large number of analyzers on the market measure cholesterol (45 manufacturers make 166 test systems that measure total cholesterol). Because some of these analyzers are used with different chemical formulations to conduct cholesterol tests, standardizing measurements is a complex task (a topic taken up in chapter 3). A related measurement issue is the use of enzymatic materials in cholesterol analyzers. While enzymatic materials have permitted improvements in ease of use, they are difficult to characterize chemically because they may deteriorate or vary with time, introducing potential measurement inaccuracy. While considerable attention has been given to the negative consequences of elevated total and LDL cholesterol levels, cholesterol is essential to body processes, affecting the production of steroid hormones and bile acids as well as being a structural component of cellular membranes. Cholesterol is a fat-like substance (lipid) manufactured by the body and is also ingested directly through foods such as eggs, which contain cholesterol. In addition, certain saturated fats raise the blood cholesterol level more than any other nutrient component in the diet. For a person eating a “standard” American diet, about two thirds of the cholesterol in the body is manufactured by the body’s own cells; the remainder is derived from the diet. Thus, an elevated cholesterol level may be the result of a diet heavy in saturated fat and cholesterol; it is also possible that the liver is manufacturing high levels of cholesterol and triglyceride or that cholesterol is being removed too slowly from the body. Cholesterol is transported in blood plasma through lipoproteins.
The three major classes of lipoproteins are LDL (containing 60 to 70 percent of the total serum cholesterol), HDL (containing 20 to 30 percent of the total serum cholesterol), and VLDL (very low density lipoproteins, which are precursors of LDL and contain 10 to 15 percent of the total serum cholesterol). Triglycerides are also an important lipid in the blood and are usually measured in conjunction with cholesterol values. More recently, increased scientific attention has been given to the apolipoprotein “families,” the subcomponents that make up these types of cholesterol, because they may be better predictors of certain risks associated with coronary heart disease such as degenerative changes in arterial walls. At present, however, research on this topic is still developing, and tests for measuring apolipoproteins cannot be done in most laboratories. Of the different cholesterol types, total cholesterol is the best understood and documented, in large part because of work done at NIST and CDC to standardize measurement techniques (see chapter 3). In general laboratory practice, total cholesterol measurement is commonly accomplished by several different enzymatic methods using a variety of reagent materials. The various procedures used make standardization of technique across different reagents and instrument configurations difficult. HDL cholesterol, sometimes referred to as the “good” cholesterol, has become recognized as an important coronary heart disease risk factor. HDL is the smallest in size of the lipoproteins and its major subcomponents are apo AI and apo AII (apolipoproteins AI and AII). Because a validated reference method has not been developed for HDL measurement, a patient specimen comparison with CDC’s procedure is considered the best means to assess accuracy. HDL cholesterol is difficult to measure accurately, however, and current criteria under the Clinical Laboratory Improvement Amendments of 1988 for acceptable laboratory performance are that a sample must be within ±30 percent of a test target value, a relatively broad range even at lower HDL values. CDC officials we interviewed pointed out that considerable scientific work remains before HDL measurement accuracy is as well understood as total cholesterol currently is. This would include developing accurate reference materials that could be used to evaluate how well analyzers are measuring HDL cholesterol. Low-density lipoprotein cholesterol, sometimes referred to as the “bad” cholesterol, is considered to be the principal fraction that causes plaque to build up on arterial walls. No error standards for LDL cholesterol measurement have been established under the Clinical Laboratory Improvement Amendments of 1988 or NCEP, although NCEP expects to issue such standards shortly. Direct measurement of LDL cholesterol can be accomplished through ultracentrifugation methods; however, such methods are expensive and time consuming to conduct and therefore not generally available in most cholesterol test settings. In practice, LDL cholesterol is calculated from other laboratory measurements using the Friedewald formula: LDL = total cholesterol − HDL − (triglycerides/5). Among the several limitations of the Friedewald formula is that a patient should be fasting when the specimen is taken. The formula cannot be used for individuals with extremely high triglyceride levels (400 mg/dL and above) or with several rare lipid conditions.
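The Friedewald calculation and its principal guard condition can be sketched briefly. This is a minimal illustration of the formula as stated above (the triglycerides/5 term approximates VLDL cholesterol); the function name and error handling are our own:

```python
def friedewald_ldl(total: float, hdl: float, triglycerides: float) -> float:
    """Estimate LDL cholesterol (mg/dL) from a fasting lipoprotein analysis:
    LDL = total cholesterol - HDL - (triglycerides / 5)."""
    if triglycerides >= 400:
        # The formula is not valid at extremely high triglyceride levels.
        raise ValueError("not valid at triglyceride levels of 400 mg/dL and above")
    return total - hdl - triglycerides / 5

# Example: total 220, HDL 45, triglycerides 150 mg/dL:
# LDL = 220 - 45 - 30 = 145 mg/dL, a borderline-high-risk level.
print(friedewald_ldl(220, 45, 150))
```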
The most crucial constraint of the Friedewald formula is that it relies on the accuracy and precision of the total cholesterol, HDL, and triglyceride measurements, so potential measurement error is compounded. Triglyceride levels are usually measured along with lipoprotein levels because they are considered an important health indicator for certain diseases, including coronary heart disease in some patients. Triglycerides are also important to measure because they are used to calculate LDL cholesterol with the Friedewald equation. Enzymatic methods are used to analyze triglyceride levels, although the calibration of such methods is not linked to a validated definitive or reference method. As with HDL and LDL cholesterol, the CDC method (in this case, a chemical chromatropic acid method) is considered the best means for comparing accuracy. Current criteria under the 1988 amendments for acceptable laboratory performance are that a sample must be within ±25 percent of a proficiency test target value. As analytical capabilities have increased, attention has also turned to the apolipoproteins, which make up HDL and LDL cholesterol. This interest is linked to finding other relevant markers for coronary heart disease risk. For example, recent research has focused on apolipoprotein B-100 (apo B), which is an integral component of four major lipoproteins—LDL, VLDL, intermediate density lipoprotein, and lipoprotein(a)—and apo AI, the major protein component of HDL. At present, several assay methods are available to measure different apolipoprotein components; however, these methods have not yet been standardized. Another practical difficulty in using these apolipoproteins is that a comprehensive, statistically sound study has not yet been undertaken that can be used as a comparative reference base. One issue addressed in the medical literature concerns combining cholesterol levels to determine a ratio that is used to evaluate a patient’s risk of developing coronary heart disease—for example, a total cholesterol or LDL to HDL ratio. In some instances, individuals are advised to achieve a specific ratio as an indicator of an acceptable cholesterol level. Such ratios have been useful estimators of coronary heart disease risk in some population studies; however, NCEP emphasizes that HDL and LDL cholesterol levels are independent risk factors with different determinants and should not be combined for clinical decisionmaking. Widespread awareness of elevated total cholesterol levels as a potential coronary heart disease risk factor has led to patient testing in a variety of settings. These range from traditional clinical settings (hospitals, physician office laboratories) to mass screenings (such as health fairs). No national data are available on the number of laboratories that conduct cholesterol tests, the number of cholesterol testing devices in use in laboratories, or the number of such tests that are done each year. The Clinical Laboratory Improvement Amendments of 1988 changed federal regulation of laboratories and expanded federal oversight to virtually all testing laboratories in the nation. The amendments required all laboratories to register with HCFA and established testing and quality control standards, including provisions for conducting inspections to ensure that laboratories are maintaining proper controls and records.
In implementing the act, the Secretary of HHS established three categories of laboratory tests: (1) simple tests, (2) tests of moderate complexity, and (3) tests of high complexity. Waivers are given to laboratories that conduct only simple tests such as dipstick or tablet reagent urinalysis. Cholesterol tests are in the moderate complexity group, meaning that laboratories that perform such tests must comply with regulations under the amendments for personnel standards, quality control, and proficiency testing (these tests evaluate accuracy and precision using quality control materials). As of October 1993, 154,403 laboratory facilities in the United States had registered with HCFA. HCFA officials estimate that there may be as many as 50,000 additional laboratories that should have registered with HCFA but have not, making it impossible to determine the universe of such facilities. Table 2.1 categorizes laboratories that had registered with HCFA. Of the registered laboratories, the majority, 90,673 (58.7 percent), are located in physicians’ offices. Oversight of the laboratories listed in table 2.1 varies, depending on the level of tests performed and several other factors. A large number, 67,000, conduct only tests that are not medically complex and are therefore exempt from regulation; 6,500 are accredited by a state agency; 24,000 are accredited by nongovernment proficiency testing groups; and 16,000 conduct microscopic tests under HCFA oversight. HCFA coordinates biennial, on-site inspections by state agencies and HCFA regional office laboratory consultants for the remaining 41,000 laboratories. HCFA expects to have 180 state agency surveyors nationwide who will work under 10 different HCFA regional offices. On-site inspections will consist of examining a sample of laboratory tests based on volume, specialties, clients, and the number of shifts over which equipment is used. In their inspections, surveyors will look at the following five areas: patient test management and organization, results of proficiency tests, personnel qualifications, quality assurance procedures, and use of daily quality controls. HCFA staff began laboratory inspections under the 1988 amendments in September 1992 and hope to have the first cycle of visits and certifications completed by March 1995. The initial emphasis of inspections has been to educate and inform laboratory personnel about pertinent regulations. HCFA staff responsible for overseeing laboratory inspections stated that of the 6,200 survey visits that had been made by August 1993, 500 laboratories were found to have major deficiencies (the nature of these problems was not specified). HCFA survey and certification officials and NCEP have expressed concern that cholesterol testing in physicians’ offices or screening settings may differ from that done in clinical and research settings. For instance, clinical laboratories or hospitals may be more likely to have well-established quality control programs and large analyzers while physicians’ offices or health fairs may be limited to less reliable desk-top analyzers and less expertise in conducting tests and maintaining analyzers (see the discussion of analyzer types in chapter 3). HCFA staff stated that physicians’ offices often send specimens for HDL and LDL tests to larger laboratories, which have the capability to do these tests.
While enforcement under the 1988 amendments is relatively new, each of the groups most affected—HCFA, laboratory personnel, and proficiency testing service providers—views it differently. HCFA officials noted from their experience overseeing laboratories that the traditionally unregulated segment of the medical testing market, physicians’ office laboratories, sees the regulations as a burden and an added cost. In contrast, laboratories that have maintained a high-quality testing program believe the regulations represent minimum standards for running a quality testing program. Proficiency testing service providers have had to confront problems with the quality control materials they use to assess and transfer accuracy among laboratories, attempting to balance the limits of these materials with how they are used to judge laboratory performance. All agree, however, that meeting the standards adds to the cost of testing. There are also several types of nonmedical settings in which testing is routinely undertaken: health fairs, shopping malls, and the workplace. In some cases, only a small amount of blood taken from the finger (capillary source) is used to conduct such a cholesterol analysis. These testing environments are subject to a variety of potential problems, however: poorly trained personnel taking samples, inappropriate patient preparation, incorrect specimen collection, or improperly calibrated analyzers. There is also concern that in nonmedical settings individuals may be given test results without proper interpretation. An additional concern is that those who need to be referred for further medical consultation or a more detailed cholesterol profile may not receive that advice. FDA reviews and clears diagnostic devices, including those that measure cholesterol. Under section 510(k) of the Federal Food, Drug, and Cosmetic Act, as amended by the Medical Device Amendments of 1976, device manufacturers must notify FDA that they intend to market a device. FDA then determines whether the device is accurate, safe, effective, and substantially equivalent to a legally marketed “predicate” device—that is, one that was on the market when the 1976 amendments were passed. If the agency determines that a device does not meet 510(k) guidelines and deems it not substantially equivalent, then it must be reviewed as a new product. According to agency officials, FDA’s review of cholesterol measurement devices takes into consideration information provided by manufacturers on intended use, test type and methodologies, performance characteristics (derived from actual assays), analytical performance for 40 normal and 40 abnormal specimens across the range of cholesterol levels, and label wording (intended use statement and conditions). FDA officials indicated that the agency requests that cholesterol device manufacturers compare their analyzers to the accuracy and precision methods of the National Reference Method Laboratory Network for total cholesterol measurement (see chapter 3). However, FDA does not formally require that analyzers be “traceable” to this method because there are devices on the market that have not established “traceability” to the reference method (traceability refers to the ability of a device to closely duplicate the accuracy attained by the reference method). CDC has compiled a list of total and HDL cholesterol analyzers currently in use. As of April 1994, there were 166 test systems (made by 45 different manufacturers) available to measure total cholesterol.
For HDL cholesterol, 143 test systems, made by 41 manufacturers, have been identified. (Some manufacturers have as many as 11 “systems” that use the same technology.) FDA-cleared cholesterol analyzers encompass three types of devices: large stationary analyzers used in clinical laboratories, desk-top analyzers, and home test kit analyzers. Desk-top analyzers can be used in a variety of settings (medical and nonmedical) to provide relatively quick cholesterol test results, whereas large analyzers are capable of performing many analyses on hundreds of specimens a day. The latter are usually found in large independent laboratories, hospital laboratories, and the offices of major testing organizations that serve the medical community. The third type is a home test kit, designed for sale directly to consumers. Currently, one device, the AccuMeter (manufactured by ChemTrak), is being marketed in the United States. The approval of this device has been somewhat controversial in the clinical chemistry field because of concerns about the reliability of its measurements. Apart from possible technical problems is the related issue of whether a person may incorrectly interpret his or her cholesterol level after using the device or initiate a self-treatment program without proper medical feedback and monitoring. Cholesterol analyzers currently on the market primarily use enzymatic methods, high-technology equipment, and computerized data processing systems. Enzymatic methods offer advantages over older chemical methods because they are safer and can be used in an automated laboratory environment, both distinct improvements. Nonetheless, enzymatic materials are also considered to be difficult to characterize chemically, thus adding to the uncertainty of tests done with them. FDA draft guidelines for approving cholesterol testing devices note that because enzymatic materials may deteriorate or vary, analyses done with them may be imprecise. A related concern noted by HCFA officials is that each analyzer and reagent combination has its own “method” for measuring cholesterol, making it difficult to assess accuracy using standardized testing materials. An additional perspective on these devices was provided by a hospital laboratory administrator who observed that the devices used in his laboratory are self-contained “black boxes” that rely heavily on computer technology and must be regularly calibrated as part of a routine quality control process. He noted that these newer devices are easier to use than the older instruments they replaced. However, their complexity also means that it is hard to determine whether something may be wrong inside the device. In this chapter, we answer the second evaluation question: What is known about the accuracy and precision of cholesterol measurement techniques? The discussion first focuses on national accuracy goals and efforts to standardize cholesterol measures. This is followed by an analysis of recently published literature that compares test results from different settings. Standards for cholesterol testing have evolved since the late 1980’s, when NCEP first established the goal that total cholesterol measures should be accurate within ±14.2 percent. By 1992, NCEP had lowered its total cholesterol measurement goal to ±8.9 percent. HCFA established a similar total cholesterol goal (±10 percent) as well as the goal that HDL cholesterol tests should be within ±30 percent of the correct value, when judged by quality control testing.
To date, an LDL cholesterol measurement goal has not been established, although one is expected soon. Evaluating the extent to which laboratories across the country are providing medical personnel and patients with accurate total cholesterol test results is difficult. While an accepted national reference system exists and network laboratories can provide traceability to an accuracy standard, participation by laboratories has been limited, largely to clinical and research settings. Additional information is collected through proficiency testing surveys, which indicate that laboratory precision has improved over time, but, again, the number of participating laboratories is small. The lack of information on accuracy in actual laboratory settings makes it impossible to know whether the goals established for total and HDL cholesterol measurement are being met and how well LDL cholesterol is being measured. Because these test results are key to making treatment decisions under the NCEP guidelines, such data are arguably important. Two collaborative research efforts, one by the College of American Pathologists (CAP) and CDC and the other by the Department of Veterans Affairs (VA) and CDC, highlight weaknesses of the current system of monitoring cholesterol laboratory tests. The reliance on processed quality control materials for evaluating analyzer accuracy was found to be problematic because of what are termed matrix effects. Processed materials tend to act differently from fresh serum samples on many instrument reagent systems and produce different test results. The studies found that total cholesterol tests done on fresh serum samples in a select group of clinical settings met NCEP accuracy standards, whereas with processed control materials there was greater inaccuracy. Finding ways to address matrix problems is important because processed control materials are key to assessing accuracy across laboratories and serve as the basis for enforcing the Clinical Laboratory Improvement Amendments of 1988. With regard to desk-top analyzers, there are sufficient concerns about the reported accuracy and precision of total, HDL, and LDL results provided by several devices, even when tested under optimal operating conditions, to warrant further scrutiny of their performance. Consumers should be aware of the potential uncertainty associated with test results produced by these devices, particularly in screening settings. Several studies we reviewed found misclassification rates ranging from 17 to nearly 50 percent. One new development in the cholesterol testing arena is home test devices, which measure total cholesterol. While these may prove to be useful, questions about their precision and accuracy should not be overlooked—particularly in light of their direct availability to consumers. Broader concerns about how individuals may interpret results and what they might do with that information in terms of failing to seek out appropriate medical consultation and possible treatment are too important to be ignored. NCEP’s 1990 report, Recommendations for Improving Cholesterol Measurement, established performance goals for assessing the accuracy of individual laboratory testing programs. The report recommended that by 1992 the total error associated with a single serum total cholesterol measurement should be within ±8.9 percent (at the 0.05 level, 2-tailed). Total error is defined in terms of two main measurement components: bias and precision.
Bias is the extent to which a series of test results deviates from the “true” value, which should be within acceptable limits (<3 percent according to NCEP), whereas precision refers to the consistency and reliability of repeated results, also within acceptable limits (<3 percent according to NCEP). A cholesterol analyzer, for example, could be very precise yet inaccurate because of the poor calibration of the analyzer or the deterioration of the reagents being used. The difference between bias and precision can be illustrated with the following brief example. Suppose that a total cholesterol specimen whose “true” value is 200 mg/dL were tested 10 times on the same analyzer. If the analyzer gave a reading of 220 mg/dL each time it tested the specimen, the analysis would be biased—that is, it would be 10 percent over the “true” value. However, the analysis would be precise in that it consistently gave the same result when testing the specimen—the precision error would be zero. (These two calculations are sketched at the end of this discussion.) While test results need to be unbiased and precise, there is the question of how accurate a test needs to be at particular cholesterol levels. It has been suggested that greater variability may be acceptable at levels well above or below the NCEP cutpoints—for example, at total cholesterol readings of 160 mg/dL or as high as 350 mg/dL. Arguably, accuracy becomes more important near the 240 mg/dL cutpoint than at 350 mg/dL, where there is less doubt about a patient’s risk category. The National Reference System for Cholesterol (NRS/CHOL) grew out of work undertaken by the National Committee for Clinical Laboratory Standards in 1977 to establish an accuracy base for cholesterol testing. Rather than requiring that all laboratories use the same analyzers and methods to achieve standardization, emphasis is given to having test results traceable to an accepted accuracy standard. NRS/CHOL consists of a hierarchy of approved methods and materials used to assess cholesterol measurement accuracy. These include basic measurement units and definitive methods (NIST), primary reference materials (NIST), reference methods (CDC), secondary reference materials (NIST and CDC), field methods, and patients’ results. These are integrated into an accuracy base that can be transferred through a national laboratory network to device manufacturers and the broad range of laboratories where cholesterol is measured. One component of NRS/CHOL involves expertise at NIST, where the definitive method for measuring total cholesterol was developed. The definitive method assigns the “true” value to a specimen through a process in which all potential sources of inaccuracy and interference are evaluated. The definitive method uses an isotope dilution mass spectrometric technique. Because it requires special equipment and costly materials and is time-consuming, the definitive method is not considered transferable to clinical laboratories. This method is also used for the highly specialized purpose of developing and testing standard reference materials that are used by manufacturers and in other settings such as research lipid laboratories. CDC oversees another piece of NRS/CHOL: it uses what is termed the modified Abell, Levy, Brodie, and Kendall (abbreviated Abell-Kendall) reference method for total cholesterol measurement. When the reference and definitive methods have both been used to test the same samples, the reference method’s results have been shown to be about 1.5-percent higher than those of the definitive method.
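The bias-and-precision example above can be made concrete. A minimal sketch of the two measures as defined in the text, using the example's numbers (the percent conventions and function names are ours):

```python
from statistics import mean, stdev

def bias_percent(results: list[float], true_value: float) -> float:
    """Bias: how far the average result deviates from the 'true' value, in percent."""
    return (mean(results) - true_value) / true_value * 100

def cv_percent(results: list[float]) -> float:
    """Precision, expressed as the coefficient of variation of repeated results."""
    return stdev(results) / mean(results) * 100

# The example from the text: a 200 mg/dL specimen read as 220 mg/dL ten times.
readings = [220.0] * 10
print(bias_percent(readings, 200.0))  # 10.0 -- biased 10 percent above the true value
print(cv_percent(readings))           # 0.0  -- perfectly precise (zero precision error)
```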
CDC disseminates the reference method through the National Reference Method Laboratory Network for Cholesterol Standardization. The network includes nine laboratories located throughout the United States and four overseas. Because the reference method is expensive and labor intensive, it is not considered practical for use in most clinical laboratories. Consequently, it is used primarily by research laboratories and manufacturers, two settings in which closer traceability to the definitive method is essential. The network provides a support system that permits a laboratory or manufacturer to gauge its total cholesterol test accuracy and standardize its measurements. This can be done by splitting samples with a network laboratory and comparing results. Participation in the network is relatively low when one takes into consideration the number of laboratories in the United States. In 1991, for example, 170 laboratories applied for a certificate of traceability and 58 percent passed (if a laboratory fails, it can reapply for certification). In 1992, 167 laboratories applied for a certificate of traceability and 79 percent passed. Although participation is low, CDC officials estimate that 95 percent of the types of instrument systems most common in U.S. laboratories have been certified through the reference network as meeting the NCEP standards mentioned earlier in this chapter (a list of these analyzers is published in Clinical Chemistry News). CDC representatives caution that the reference laboratories test only an analytical system’s potential to meet these standards. Day-to-day consistency in a laboratory requires rigorous quality controls that help ensure that an analyzer will perform as it is capable of performing. In other words, if an analyzer is not maintained properly, it will not provide results that are consistently accurate. Another way in which NRS/CHOL attempts to transfer accuracy to laboratories is through “quality control” substances called standard reference materials, which are the link between the definitive and reference methods and manufacturers of analyzers and reagent systems. These include CDC- and NIST-produced and certified materials (made in stabilized, frozen, or lyophilized (freeze-dried) forms) that are assigned target values for cholesterol using the reference or definitive methods. Proficiency testing (outside surveillance) services have an increasingly important role in efforts to achieve accuracy and standardization of clinical laboratory tests because they provide the basis for interlaboratory comparison of test results and accuracy across analyzers. Proficiency testing programs send out quality control materials that participating laboratories analyze; the results are then compared with a target value determined by CDC’s reference method. These test results are divided into peer groupings (by instrument type), permitting laboratory staff to judge how their results compare with laboratories using the same method as well as with the CDC reference method result. CAP and the American Association of Bioanalysts are two major groups involved in this work. CAP’s Comprehensive Chemistry Survey has 12,000 subscribers that use its service to evaluate several different clinical chemistry tests. This service is not generally used by smaller laboratories. The American Association of Bioanalysts does similar types of proficiency testing.
National trends through 1990 in interlaboratory comparability (that is, the degree to which established test values vary from one laboratory to the next) for total cholesterol are as follows: 1949, 23.7 percent; 1969, 18.5 percent; 1980, 11.1 percent; 1983, 6.4 percent; 1986, 6.2 percent; 1990, 5.5 to 7.2 percent. These data indicate that interlaboratory precision in the clinical laboratories participating in the CAP survey improved considerably, from variability of about 24 percent in 1949 to the 6-percent range by 1983, where it appears to have leveled off. These differences between laboratories suggest that method- and laboratory-specific biases contributed to overall inconsistency in cholesterol analyses. Another indicator of precision is consistency within individual laboratories. CAP data indicate intralaboratory precision for cholesterol measurements (where participating laboratories analyze the same quality control materials repeatedly over an extended period) improved from 4.1 percent in 1975 to 3.5 percent in 1985. Efforts to achieve standardized, accurate cholesterol measurements through NRS/CHOL and proficiency testing programs have encountered serious problems with the use of quality control (reference) materials. These are termed “matrix” effects and arise when “cholesterol recovered from the control material matrix may not compare with that typically recovered from fresh patient specimens.” The matrix surrounding the cholesterol quality control material interferes with the analysis, causing erroneous results (matrix effects do not arise when analyzing fresh blood samples). The effect is a function of instrument design, reagent composition, the method employed, and the material formulation. Because these quality control materials are key to transferring accuracy and quality control in NRS/CHOL and to assessing precision in proficiency testing programs, matrix effects present considerable problems. While most attention has focused on matrix effects in quality control materials used to standardize total cholesterol measures, there is also concern that HDL cholesterol control materials may be subject to these effects. Recent interest in the problems presented by matrix effects is linked to the Clinical Laboratory Improvement Amendments of 1988, which required that proficiency testing be used to evaluate the quality of laboratory results. Matrix problems can make it impossible to assign a target value to quality control material that will apply to all routine testing methods. Industry and academic research efforts are underway to address the measurement problems associated with matrix effects, but practical solutions are not yet available. Research has focused on establishing correction factors to account for the matrix error component (derived from comparisons of test results using fresh samples and quality control materials) as well as on developing new analytical systems and quality control materials that can accurately measure both fresh patient and quality control materials. A recently published CAP and CDC collaborative study examined matrix effects on cholesterol tests. A total of 997 laboratories that participate in the CAP survey were selected (the selection method was not specified) to analyze both a freshly frozen serum pool and a lyophilized (freeze-dried) CAP chemistry quality control sample simultaneously, permitting comparisons and bias to be calculated.
Laboratories that had submitted incomplete data or had results considered to be outliers (defined in this study as a pooled within-run coefficient of variation across three samples that exceeded 10 percent or a within-run bias of any sample of 25 percent or more relative to the reference method value) were excluded from the analysis. Laboratories that participated in the study were drawn from CAP survey participants, which are mainly hospital laboratories. They are thus not representative of small independent laboratories, such as those found in physicians’ offices, or even of all hospitals. While the ability to generalize from this study is limited, the authors make several points that have important consequences for cholesterol measurement. The CAP and CDC study classified the cholesterol analysis methods into 37 instrumentation and reagent groups. This figure indicates the range of instrument and reagent combinations that regulators must work with in attempting to achieve standardization. Across this group of instruments, they found that “26 (70%) of 37 methods evaluated had statistically significant calibration bias compared with the reference method. The calibration bias of 13 methods (41%) exceeded the NCEP 3% limit for bias.” When the investigators adjusted the results to compensate for matrix effects, “92% to 93% of adjusted results met the NCEP 8.9% total error goal relative to the reference method due to superior interlaboratory precision of some of the biased methods.” For the fresh-frozen serum sample that was analyzed, test results (N = 900) had a mean bias of 0.1 percent, nearly identical to the reference method, and a coefficient of variation of 4.6 percent, the latter figure slightly exceeding the 1992 NCEP/LSP goal. Thus, 70 percent of the enzymatic methods used to measure cholesterol in the CAP and CDC study were subject to matrix effects when testing quality control material. The implication of this for NRS/CHOL is that the use of fresh human samples, such as by splitting samples with a member of the National Reference Method Laboratory Network and comparing results, may be a better means to transfer accuracy than the use of processed quality control materials. Given the number of laboratories in the nation and the limited number of National Reference Method Laboratories, however, this would be a difficult if not impossible task. Table 3.1 lists the 37 instrument and reagent systems and their calibration bias relative to the reference method. A study similar to the CAP and CDC investigation was undertaken in 112 VA laboratories in conjunction with CDC. Because VA has the nation’s largest hospital system, it provides insight into large-scale efforts to standardize cholesterol measurements. Briefly, the VA research group asked participating laboratories to conduct analyses of fresh serum samples and 1990 CAP quality control materials, permitting comparisons of how well instruments analyzed both types of specimens. This study team found “significant matrix-effect biases with the CAP Survey materials in six of the eight major peer groups, despite the fact that accuracy of cholesterol measurements was maintained with fresh serum samples.” The authors concluded that “CAP PT materials used currently do not behave in a manner identical to fresh human serum when measuring cholesterol on many, but not all, analytic systems.” Table 3.2 presents the study findings.
The VA study authors noted that the biases that arise from matrix effects will cause incorrect conclusions about the accuracy of laboratory procedures done on fresh patient specimens. Further, matrix effects will “severely hamper interlaboratory accuracy transfer, standardization efforts, and monitoring performance of a laboratory’s testing accuracy . . . .” Cholesterol is often measured with small, portable and semiportable devices called desk-top analyzers, in either a physician’s office or a nontraditional setting such as a health fair. Desk-top systems generally use the same kinds of enzymatic methods employed in laboratory settings. NCEP guidelines do not differentiate between desk-top analyzers and those used in laboratories; all such devices are held to the same overall accuracy standards. A recent study that summarized desk-top analyzers concluded: “In general, desk-top analyzers give fairly accurate measurements on average, but tend to be somewhat more variable than laboratory-based methods in individual samples.” The same article links this difference in part to the use of fingerstick blood samples with these analyzers, the results of which are likely to differ from venous samples. Other factors that contribute to their measurement variability include lack of operator training and use of such devices in field settings where frequent transportation and changes in temperature and humidity can affect test results. We identified 13 recent studies that evaluated desk-top analyzer performance. We discuss several of these studies in this section, focusing on those that permit comparison of data across devices. The first study evaluated five analyzers under tightly controlled conditions: Analyst (DuPont), Ektachem DT-60 (Eastman Kodak), Reflotron (Boehringer-Mannheim Diagnostics), Seralyzer (Ames Division, Miles Laboratories), and Vision (Abbott Laboratories). In terms of accuracy of total cholesterol measurements, the Ektachem DT-60 and the Vision had biases of less than 2.0 percent, within the current NCEP bias goal of <3 percent. The three other instruments had biases ranging from 5.2 percent to 10.4 percent, thus exceeding the NCEP goal (see table 3.3). Only three of the five analyzers tested could conduct HDL and LDL cholesterol analyses. Across the three HDL cholesterol levels tested, the Kodak Ektachem DT-60 had results that were approximately 6.0-percent higher than the true value, while the Analyst and Vision analyses of the low HDL cholesterol measure were 12.7 percent below and 29.7 percent above the true value, respectively. LDL cholesterol measures, derived with the Friedewald equation, conducted on the Kodak Ektachem DT-60 and Vision had an error of less than 3.0 percent, while the Analyst exceeded 17 percent at both LDL cholesterol levels tested. The consequence of such error is that the correct total, HDL, and LDL cholesterol value is systematically over- or underestimated. Data on the precision of these analyzers are presented in table 3.4. Note that the coefficient of variation of the Reflotron and Seralyzer for total cholesterol is 10 percent or higher, exceeding the current NCEP precision goal of <3 percent. Another perspective on the data in the preceding tables is how the results could influence the risk category into which a patient is classified (desirable, borderline-high risk, or high risk for coronary heart disease).
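To see how a systematic bias can shift a risk category, consider a hypothetical patient near the 240 mg/dL cutpoint measured on an analyzer with a +5.2 percent bias (the low end of the bias range reported above for the three out-of-goal instruments). A minimal sketch:

```python
def risk_category(tc: float) -> str:
    # NCEP cutpoints: <200 desirable; 200-239 borderline-high; >=240 high.
    return "desirable" if tc < 200 else "borderline-high" if tc < 240 else "high"

true_value = 235.0                    # truly borderline-high
biased_reading = true_value * 1.052   # analyzer reads 5.2 percent high: 247.2 mg/dL
print(risk_category(true_value))      # borderline-high
print(risk_category(biased_reading))  # high -- a false positive near the cutpoint
```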
Two instruments, the Kodak Ektachem DT-60 and Abbott Vision, correctly classified 95 percent and 94 percent of the total cholesterol specimens, respectively. The Analyst, Reflotron, and Seralyzer correctly classified 74 percent, 83 percent, and 75 percent of patient total cholesterol specimens, respectively. A second study, published in 1993, also evaluated five desk-top devices used to measure cholesterol in screening environments, assessing bias, precision, and patient misclassification error for capillary and venous whole blood and venous plasma. The devices were the Ektachem DT-60 (Kodak), Liposcan (Home Diagnostics), QuickRead (Photest), Reflotron (Boehringer-Mannheim), and Vision (Abbott). The authors concluded that none of these devices met the NCEP performance recommendations regarding bias and precision. Of interest were findings regarding average percentage bias, which differed for capillary and venous whole blood (see table 3.5), and misclassification rates (see table 3.6), which ranged from false negative rates as high as 37 to 48 percent for the Liposcan to false positive rates as high as 38 and 34 percent for the QuickRead. Misclassification into false positive categories was 18 percent for the Vision, 14 percent for the Reflotron, and 7 percent for the Ektachem DT-60. Home test kits to measure total cholesterol have also been cleared by FDA and have recently begun to be marketed directly to consumers (they have been available to physicians since 1991). Total cholesterol results obtained with the AccuMeter, currently the only such device being marketed in the United States, for 100 patients were compared with a CDC-standardized laboratory at the Medical College of Virginia. While the AccuMeter’s results met NCEP guidelines for measurement bias (<3 percent) for both capillary and venous blood when using a mean bias measure, these researchers found that the mean absolute percentage bias was 5.7 percent and 5.2 percent, respectively. In addition, 18 to 20 percent of samples fell outside ±8.9 percent of the reference result, the level the NCEP established for acceptable total error for single cholesterol measurements. Figures for precision, from 40 total cholesterol assays done in duplicate from three pools of human serum with mean concentrations of 182 mg/dL, 223 mg/dL, and 266 mg/dL, exceeded NCEP/LSP guidelines (<3 percent precision error); the coefficients of variation were 4.5 percent, 5.4 percent, and 5.8 percent, respectively. The authors noted that approximately 5 percent of the devices did not function properly and could not provide a cholesterol reading. We met with FDA officials to discuss their decision to permit marketing of the AccuMeter under 510(k) regulations. They explained that it met the criteria of “substantial equivalence” to an analyzer currently being marketed, thereby complying with existing regulations, although the device does not meet NCEP standards for precision and accuracy (as judged by “traceability” to the Abell-Kendall reference method). Even if a single cholesterol measurement were analytically accurate and precise, it would not reflect how a person’s cholesterol can vary from day to day. Total, HDL, and LDL cholesterol levels vary over time and are influenced by what are termed preanalytic or biological factors, which include behavioral (exercise, diet, alcohol consumption), clinical (disease, pregnancy), and sample collection conditions. In this chapter, we answer our third evaluation question: What factors influence cholesterol levels?
Scientific literature indicates that some variation and fluctuation of an individual’s total, HDL, and LDL cholesterol is normal and to be expected. For instance, in some individuals, week-to-week fluctuations can be dramatic, while in others virtually no change may occur over the same time period. Overall, biological variation of total cholesterol is reported to average 6.1 percent; HDL cholesterol variation averages 7.4 percent; LDL biological variability, 9.5 percent; triglycerides, 22.6 percent. These findings suggest that variation in cholesterol levels is normal and, for some individuals, can be quite pronounced. The implication for testing, particularly for patients near a cutpoint (such as 240 mg/dL), is that repeated measurements may be necessary. In light of measurement uncertainty for HDL and LDL, multiple measures of these subfractions may be warranted, particularly before making a diagnosis. Other factors—diet, exercise, alcohol intake—appear to have differing effects on individuals’ cholesterol levels. The amount of the effect varies depending on the amount and duration of intake and physiological factors. In some individuals, diet may not have a large effect on total and LDL cholesterol levels. This may be partially related to the estimates that one third of an individual’s cholesterol level is linked to diet while the body produces the remaining two thirds. The evidence regarding regular exercise points to the benefits associated with such activity, as measured by changes in lipid levels. While alcohol intake can have a positive effect on cholesterol levels, consumption must be balanced with the potential risks associated with it. The potential effect of diet on cholesterol levels was noted in the 1990 NCEP report on cholesterol measurement, which recommended that patients maintain their usual diet and that their weight be stable for at least 2 weeks before their cholesterol level is measured. Clinical factors such as disease, pregnancy, and some medications (diuretics, beta blockers, oral contraceptives) can also alter cholesterol levels. How a blood specimen is taken can also have a crucial role in cholesterol analysis. Some research has found that fingerstick (capillary) samples differed markedly from venous samples when analyzed by the same device, while other researchers have called for more standardized specimen collection techniques. Cholesterol levels within a person vary over time, depending on a number of factors. As discussed in chapter 1, for example, as people age, their total cholesterol level tends to increase. However, cholesterol levels can also vary considerably between measurements because of what is termed intra-individual biological variability; this normal fluctuation is estimated to account for about 65 percent of the total intra-individual variation for both total and HDL cholesterol and about 95 percent of the variation for triglycerides. Studies have linked other types of biological variation to diet, alcohol intake, smoking, and physical activity. The body of literature on this subject is large: a 1992 article reviewed more than 300 publications, most of which had been published within the previous 5 years. A recent statistical synthesis of findings from 30 studies published between 1970 and 1992 provides considerable information on intra-individual biological variation—that is, the normal fluctuation in cholesterol levels referred to above.
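These average variation figures give a sense of how wide the normal range around a person's usual level can be, and why repeat tests matter near a cutpoint. A rough sketch, under our own simplifying assumptions that the variation is approximately normal and that repeat measurements are independent (so averaging n tests shrinks the effective variation by 1/sqrt(n)):

```python
import math

def expected_range(level: float, cv_percent: float, n_tests: int = 1):
    """Approximate 95-percent range around a person's usual level (mg/dL),
    assuming roughly normal variation; averaging n independent tests
    shrinks the effective coefficient of variation by 1/sqrt(n)."""
    sd = level * cv_percent / 100 / math.sqrt(n_tests)
    return (level - 1.96 * sd, level + 1.96 * sd)

# A usual total cholesterol of 230 mg/dL with 6.1 percent biological variation:
print(expected_range(230, 6.1))     # one test:        about 202 to 258 mg/dL
print(expected_range(230, 6.1, 3))  # average of three: about 214 to 246 mg/dL
```

On these assumptions, a single test on a person whose usual level is below the 240 mg/dL cutpoint can easily fall above it, while averaging three tests narrows the range considerably.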
According to this statistical synthesis, total cholesterol is the most stable lipid, with day-to-day biological variation averaging 6.1 percent; variation in HDL cholesterol concentrations averages 7.4 percent; LDL biological variability, 9.5 percent; triglycerides, 22.6 percent. The number of subjects in the selected studies of total cholesterol variability ranged from small (less than 20) to quite large (14,600). Not surprisingly, the number of specimens and the sampling intervals varied as well. Two large studies analyzed two specimens, taken 1 month apart, while another study with a smaller number of subjects analyzed specimens taken twice a week for 10 weeks. Results for HDL variability were based on 16 studies; triglyceride variability, on 19 studies; and LDL variation, on 10 studies. Two recent articles have also reported similar findings. One study compared total cholesterol and HDL measurements taken from 40 male subjects 1 week apart. The authors found a relatively wide range of variability in some patients: one patient’s total cholesterol declined dramatically from one week to the next, dropping from nearly 300 mg/dL to just over 220 mg/dL, while several others’ total cholesterol level scarcely moved between the two tests (the coefficient of variation for single measurements was 6.8 percent for total cholesterol and 10.5 percent for HDL, slightly higher than the figures from the statistical synthesis reported earlier). Another study of cholesterol variability tracked 20 subjects 22 to 63 years old, measuring their total, LDL, and HDL cholesterol weekly for 4 weeks. The authors found variations of more than ±20 percent in the serum levels of total cholesterol, LDL, and HDL in 75 percent, 95 percent, and 65 percent of the subjects, respectively. More important, 40 percent moved in or out of one of the risk categories, and 10 percent moved two categories—from “desirable” to “high risk.” Other research has found that LDL and total cholesterol levels within individuals vary by season, both averaging 2.5-percent higher in the winter than the summer. The HDL cholesterol level, however, has not been found to vary seasonally. Women are affected by another aspect of biological variability; total cholesterol concentrations may average 20 percent lower during the luteal phase (the period immediately after ovulation) of the menstrual cycle. Cholesterol levels vary because of behavioral factors, and some of this variability can influence short-term measurements. For example, strenuous exercise 24 hours prior to having a blood specimen taken can elevate an individual’s HDL cholesterol level. Likewise, moderate alcohol consumption can increase HDL and decrease LDL cholesterol levels. Behavior over longer periods of time can also affect cholesterol levels—diet, alcohol consumption, exercise. The relevance to the measurement theme of this report is that there is more to variation in cholesterol levels than inaccurate laboratory tests. Consumption of certain saturated fatty acids and, to a lesser extent, cholesterol is linked to higher serum LDL cholesterol values. In terms of diet, an increase in cholesterol intake of about 100 mg (per 4,200 kilojoules, or about 1,000 kilocalories) raises plasma cholesterol by about 10 mg/dL. Progressively higher cholesterol intakes exceeding 500 mg appear to have smaller incremental effects on cholesterol levels. The same study points out that dietary cholesterol is incompletely and variably absorbed by individuals, ranging from 18 to 75 percent.
Further, people with the highest LDL cholesterol levels appear to have the highest percentage of absorption of dietary cholesterol. As one review observed: “blood cholesterol responses of individuals differ substantially in response to changes in dietary lipids . . . . For the same increase in dietary cholesterol or saturated fat, the cholesterol levels of most persons will increase, but some will remain essentially unchanged and a few will increase dramatically.” As noted in chapter 2, only one third of an individual’s cholesterol is derived from diet; the remaining two thirds are manufactured by the liver. In terms of the contribution that diet can make to cholesterol reduction, the 1993 NCEP guidelines state that men who follow the step I diet could expect their total cholesterol level to be reduced 5 to 7 percent, while those who follow the more restrictive step II diet could expect an 8- to 14-percent reduction. These estimates are based on models derived from metabolic ward studies (done on institutionalized patients), which closely monitored and controlled individuals’ adherence to their diet. Some researchers have noted that such reductions can be difficult to achieve in a “free living” population. Published epidemiological studies have demonstrated a relationship between alcohol intake and changes in cholesterol profiles. The amount of change attributed to alcohol depends on the amount consumed, individual susceptibility, genetic variables, and diet. Moderate alcohol intake (defined as several drinks a day) appears to increase HDL cholesterol and may be associated with reduced risk of coronary heart disease. Greater alcohol consumption is also associated with a lowering of LDL cholesterol and an increase in triglycerides. One study estimated that 4 to 6 percent of the variance of HDL cholesterol levels in the population may be linked to alcohol consumption. Exercise has been shown to influence cholesterol levels and has received increased attention as having a preventive effect on coronary heart disease. Researchers have found that exercise that is strenuous and promotes endurance causes LDL, triglycerides, and apo B to decrease while raising HDL and apo AI levels. Other evidence regarding exercise points to the benefits of brisk walking. One study found that previously sedentary women who walked an average of 155 minutes per week decreased their total cholesterol level by 6.5 percent, compared with a decrease of 2.2 percent in control subjects, and the HDL level of walkers increased 27 percent, compared with a 2-percent increase in controls. A recent article suggests that the effect of these changes depends on the volume, intensity, and type of exercise undertaken, a slight variation on earlier work. Apart from longer-term effects, acute exercise also causes a significant rise in HDL levels, such that it is recommended that patients avoid any strenuous exercise 24 hours prior to having a blood specimen taken. Obese individuals have been found to have higher total and LDL cholesterol and triglyceride levels and lower HDL cholesterol when compared to nonobese members of control groups. When an obese individual loses weight, a decline in triglyceride level occurs (about 40 percent); total and LDL cholesterol levels are found to decline about 10 percent, while the HDL level increases about 10 percent.
The implication for cholesterol measurements, particularly for obese individuals who repeatedly gain and lose weight, is that such fluctuations can be a source of significant variation in lipoprotein levels. In fact, NCEP/LSP recommended that an individual’s weight be stable and that he or she maintain his or her usual diet for at least 2 weeks prior to having cholesterol measured. A person’s cholesterol profile can be affected by acute, infectious, and metabolic diseases, and some types of medications have been linked with elevated levels in some patient groups. Several conditions are associated with increased cholesterol levels. Diabetes mellitus and hypothyroidism are cited as the most common of these, with total cholesterol and LDL cholesterol levels elevated in 30 percent of the patients with the latter condition. Patients with diabetes mellitus sometimes have elevated triglycerides, and higher levels of insulin are positively associated with unfavorable levels of total and LDL cholesterol, triglycerides, apo B, and blood pressure, and negatively with HDL cholesterol components. Acute myocardial infarction is associated with decreases in levels of total cholesterol, LDL, apo AI, and apo B. Indeed, lipid levels after a heart attack are affected to such a degree that it is recommended that blood specimens be obtained within 24 hours of the event; if they cannot be taken within 24 hours, then they should not be taken for 3 months, because the test will not accurately reflect the patient’s usual lipid level. Other diseases such as Tay-Sachs, rheumatoid arthritis, and infections can also alter lipid profiles. In addition, familial hypercholesterolemia and other related disorders are associated with increased blood cholesterol levels. Medication can also alter lipid levels. Diuretics, some beta blockers, and sex steroids have been cited as changing lipid levels. Oral contraceptives high in progestin can increase serum total and LDL cholesterol and decrease HDL cholesterol levels, while contraceptives with high estrogen content can cause opposite changes. Similar changes have been found in postmenopausal women taking estrogen supplements. Pregnancy is associated with changes in lipid profiles in the second and third trimesters, when total and LDL cholesterol, triglyceride, apo AI, apo AII, and apo B are significantly increased. Because of these changes, testing is not recommended until 3 months postpartum or 3 months following cessation of lactation. How a blood specimen is collected and handled may affect lipid levels. For example, blood cholesterol samples are often drawn when the patient is in a fasting state, particularly when a lipid profile is to be taken. This is because eating a typical fat-containing meal causes a patient’s lipid profile to change, an effect that lasts about 9 hours. Typically, triglyceride levels increase, as does very-low-density lipoprotein (VLDL), while LDL cholesterol falls significantly. One study comparing collection methods concluded that “the most reliable screening measurements were obtained when the analyses were performed in venous plasma samples by a qualified clinical laboratory. . . .
The most-variable measurements were obtained with the capillary samples, and these measurements seemed to be most prone to misclassification overall.” A 1993 article briefly discusses the difference between venous and capillary samples, pointing out “contradictory results” (that is, some studies reporting either higher or lower capillary results than venous results, depending on the various procedures and devices tested) and a lack of consensus in the literature about such differences. The study’s authors conclude that “capillary collection technique is critical and must be standardized to obtain reliable cholesterol results.” How the specimen is taken and prepared for analysis also can affect lipid level measurements. Here, factors such as the knowledge and experience of the laboratory technician are important. For example, the length of time a person is sitting or standing prior to having the specimen taken has been demonstrated to influence cholesterol test results. Patients should remain seated for at least 15 minutes before a venous sample is taken, and if a tourniquet is used, it should be applied for less than 1 minute before the specimen used for a lipid analysis is taken. Proper storage of samples is also important to avoid changes in the composition of samples and to ensure accurate measurement results. Use of a standard collection policy by trained laboratory technicians can help minimize variability associated with these factors. In this chapter, we discuss the study’s fourth evaluation question: What is the potential effect of uncertain measurement? This is followed by our conclusions and discussion of agency comments. Progress has been made in improving analytical accuracy in cholesterol measurement, with the development of better methods and materials in recent years. Yet, despite the attention cholesterol has received, it continues to be difficult to measure with accuracy and consistency across the broad range of devices and settings in which it is analyzed. While several studies have found that accuracy with patient samples was good, problems with matrix effects from using processed quality control materials have occurred, making it difficult to assess accuracy adequately among laboratories. In addition, the lack of information on accuracy in many laboratory settings where patients are likely to be tested, such as commercial laboratories, physicians’ offices, and mass screening locations, makes it impossible to know whether the accuracy goals established for total and HDL cholesterol are uniformly being met. Even if one could be certain that a laboratory could provide reasonably accurate and precise test results, biological and behavioral factors such as diet, exercise, or illness cause an individual’s cholesterol level to vary. It has been estimated that such factors may account for up to 65 percent of the total variation in an individual’s reported cholesterol measurement. Studies have documented that some individuals’ cholesterol level can vary dramatically from week to week while others’ remains relatively constant. Although some biological variation can be controlled for by having patients maintain their weight and diet for a modest period prior to measurement, many factors cannot be controlled. Total error from both analytical and biological variability can be considerable, as shown in tables 5.1 and 5.2, where calculations are made for hypothetical total and HDL cholesterol test results at different specified levels.
For the purposes of this analysis, which is intended to illustrate the potential range of variability around an actual or known cholesterol level, we used the current goals for total analytical error (±8.9 percent for total cholesterol according to NCEP and ±30 percent for HDL according to HCFA) and what is currently known about biological variability from a synthesis of studies (6.1 percent for total cholesterol and 7.4 percent for HDL cholesterol). Both analytical and biological variability can of course be lower or higher than these figures, depending on a combination of factors. The results in tables 5.1 and 5.2 show that a single cholesterol measurement may be highly misleading with respect to an individual’s actual cholesterol value. A total cholesterol value that is known to be 240 mg/dL, for example, may vary by as much as 16 percent, ranging from 201 to 279 mg/dL, under these error rate assumptions. Similar estimates for HDL cholesterol measurements are presented in table 5.2. The implication of these estimates is that cholesterol levels should be thought of in terms of ranges rather than absolute fixed numbers. Compensating for variation by using the average of at least two cholesterol measurements is in line with the current NCEP guidelines and recent literature on the subject. The most recent NCEP Adult Treatment Panel recommends that a second test be done when an initial measurement has found that total cholesterol exceeds 200 mg/dL and HDL is under 35 mg/dL. In terms of HDL and LDL cholesterol, which have been documented to have analytical and biological variation somewhat higher than total cholesterol, more variability can be expected. CDC officials we interviewed emphasized that considerable scientific work remains before HDL measurement is as well understood as total cholesterol. Authors of a recent study in Clinical Chemistry therefore recommend that as many as four HDL and LDL cholesterol tests be done before making treatment decisions (S. J. Smith et al., “Biological Variability in Concentrations of Serum Lipids: Sources of Variation among Results from Published Studies and Composite Predicted Values,” Clinical Chemistry, 39:6 (1993), 1021). The study offers relative-range criteria for judging agreement among repeated tests: with fewer specimens, the relative range should be less than or equal to 0.19; with four specimens, the relative range should be less than or equal to 0.21. Having accurate and precise cholesterol measurements is important, given the central role that cholesterol measurement has in classifying, evaluating, and treating patients deemed at risk of coronary heart disease. As noted in chapter 1, the average total cholesterol level for U.S. adults 20 years old and older is about 205 mg/dL, which puts them within the NCEP-defined borderline-high risk category. Moreover, 29 percent of U.S. adults (52 million people) have a cholesterol level that is classified as too high, making them candidates for dietary therapy. Of this group, an estimated 12.7 million adults, one third of whom have established coronary heart disease, might be considered candidates for drug therapy to lower their cholesterol level. Once drug therapy is initiated, it may need to be maintained for life. Although the NCEP guidelines recognize the problem of measurement variability and stress the need for multiple measurements, important consequences can be associated with measurement error. The potential exists, for example, that physicians may not account for measurement problems and may base decisions about patients on incorrect test results.
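The range arithmetic behind these estimates can be made concrete with a short calculation. The sketch below is ours, offered only as an illustration of the report's example; it applies the combined total-error figure of 16 percent symmetrically to a known value of 240 mg/dL and checks the resulting bounds against the NCEP total cholesterol cutpoints cited in this report (desirable below 200 mg/dL, borderline-high from 200 to 239 mg/dL, and high at 240 mg/dL and above). The function names are invented for this illustration.

```python
# Illustrative sketch (ours, not NCEP's or GAO's software): apply a combined
# total-error percentage to a known cholesterol value and check which NCEP
# risk categories the resulting range spans.

def measurement_range(true_value_mg_dl, total_error_pct):
    """Return the (low, high) bounds implied by a symmetric total error."""
    delta = true_value_mg_dl * total_error_pct / 100.0
    return true_value_mg_dl - delta, true_value_mg_dl + delta

def ncep_category(total_chol_mg_dl):
    """Classify a total cholesterol value using the NCEP cutpoints cited in this report."""
    if total_chol_mg_dl < 200:
        return "desirable"
    if total_chol_mg_dl < 240:
        return "borderline-high"
    return "high"

low, high = measurement_range(240, 16)
print(f"range: {low:.1f} to {high:.1f} mg/dL")      # 201.6 to 278.4; the report rounds outward to 201-279
print(f"low end falls in: {ncep_category(low)}")    # borderline-high
print(f"high end falls in: {ncep_category(high)}")  # high
```

As the output shows, a value sitting exactly at the 240 mg/dL cutpoint could plausibly be reported anywhere from the borderline-high range to well above it, which is the misclassification risk discussed next.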
In a worst-case scenario, two types of diagnostic errors could occur: false-positive or false-negative screens. A false-positive screen could result in treating individuals who in fact have a desirable total, HDL, and LDL cholesterol level. A false-negative result would incorrectly reassure an individual that his or her cholesterol level is low. The risk of misclassification would be greatest for those whose measured cholesterol levels are closest to one of the cutpoints. There is less ambiguity when values are well above or below a cutpoint. The likelihood of such errors occurring, however, is greater if physicians rely on only a single cholesterol measurement in making treatment decisions. Continuing efforts are needed to improve the accuracy and precision of lipid measurements so that medical decisions to initiate and continue treatment to lower elevated cholesterol levels can be both effective and efficient. To minimize misclassification problems, it is also important to ensure that physicians who evaluate and treat patients with elevated cholesterol levels are knowledgeable about measurement variability and the need to conduct multiple tests. Officials from HHS reviewed a draft of this report and provided written comments, reproduced in appendix I. In addition, HHS provided draft technical comments that we have incorporated in the text where appropriate. Overall, HHS officials believed that cholesterol measurement has improved substantially in recent years and that accuracy in laboratories across the country is better than what is presented in our report. Regarding general comments on the need for better standardization materials (lyophilized serum pools without matrix effects), we agree that this is a major challenge that must be addressed if measurement is to be improved. The same point was made in NCEP’s 1990 report on cholesterol measurement, indicating that the problem is long-standing rather than new. HHS did not concur with a recommendation we included in the draft report concerning an assessment of whether problems of patient “misclassification” result from measurement variability. It indicated that information on misclassification already exists and that additional work would only provide further definition of the issue rather than solving known problems, such as matrix effects on measurement accuracy. We recognize that some information on this issue does exist and also understand that further efforts are currently under way, particularly by NIH and CDC, to assess how the NCEP guidelines are being implemented in practice and to evaluate overall laboratory performance. We have deleted our draft recommendation from the final report because these ongoing agency efforts should respond to our concerns about misclassification. We encourage HHS to continue this work and provide the results to the Congress and the general public. The agency also suggested that the discussion of diet and clinical trials that we included in the draft was too brief. We have deleted this discussion from the final report and will address it in more depth in a later report we are preparing on the clinical trial base of information that supports the NCEP guidelines.
| Pursuant to a congressional request, GAO reviewed the National Heart, Lung, and Blood Institute's National Cholesterol Education Program (NCEP), focusing on the: (1) different techniques for measuring cholesterol in laboratory settings; (2) accuracy of these measurement techniques; (3) factors that influence individual cholesterol levels; and (4) potential effect of cholesterol measurement variability. GAO found that: (1) the natural daily variation in cholesterol levels and instrument measurement errors make it impossible to pinpoint individual cholesterol levels; (2) over 160 different devices with different technologies and chemical formulations are available to perform cholesterol tests; (3) standard cholesterol tests measure two cholesterol components, total cholesterol, and a related blood fat; (4) research, clinical, and hospital laboratories tend to produce reasonably accurate and precise cholesterol measurements, but little is known about cholesterol measurements in other settings such as physicians' offices and public health screenings; (5) two federal agencies have developed reference methods and quality control testing materials for manufacturers and laboratories to use in assessing their equipment's performance; (6) although cholesterol measurement methods have improved in recent years, there is a large variance in the accuracy and precision of tests performed across a broad range of devices and analytical settings; (7) there has been no overall evaluation of the different instruments and technologies laboratories use to conduct cholesterol tests; (8) biological and behavioral factors, many of which are uncontrollable, cause individual cholesterol levels to vary and may account for up to 65 percent of total variation in individual cholesterol levels; (9) the methods for collecting and handling blood specimens affect cholesterol measurements; and (10) uncertain cholesterol measurements can affect individual diagnoses and treatment decisions. |
DHS’s National Protection and Programs Directorate leads the country’s effort to protect and enhance the resilience of the nation’s physical and cyber infrastructure. The directorate includes the Office of Infrastructure Protection, which leads the coordinated national effort to reduce risk to U.S. critical infrastructure posed by acts of terrorism. Within the Office of Infrastructure Protection, ISCD leads the nation’s effort to secure high-risk chemical facilities and prevent the use of certain chemicals in a terrorist act on the homeland; ISCD also is responsible for implementing and managing the CFATS program, including its EAP. The CFATS program is intended to ensure the security of the nation’s chemical infrastructure by identifying, assessing the risk posed by, and requiring the implementation of measures to protect high-risk chemical facilities. Section 550 of the DHS Appropriations Act, 2007, required DHS to issue regulations establishing Risk-Based Performance Standards for chemical facilities that, as determined by DHS, present high levels of risk; the act also required vulnerability assessments and development and implementation of site security plans for such facilities. DHS published the CFATS interim final rule in April 2007; appendix A to the rule, published in November 2007, lists 322 chemicals of interest and the screening threshold quantities for each. According to DHS, subject to certain statutory exclusions, all facilities that manufacture chemicals of interest, as well as facilities that store or use such chemicals as part of their daily operations, may be subject to CFATS. However, only chemical facilities determined to possess a requisite quantity of chemicals of interest (i.e., the screening threshold quantity) and subsequently determined to present high levels of security risk are subject to the more substantive requirements of the CFATS regulation. The CFATS regulation outlines a specific process for how ISCD is to administer the CFATS program. A chemical facility that possesses any of the 322 chemicals of interest in quantities that meet or exceed a threshold quantity is required to use ISCD’s Chemical Security Assessment Tool, a web-based application through which owners and operators of chemical facilities provide information about the facility to ISCD. If ISCD determines that a facility is high risk, the facility must complete and submit to ISCD a standard security plan, expedited security plan, or Alternative Security Program. Tier 1 and tier 2 facilities must use the standard security plan or Alternative Security Program, while tier 3 and tier 4 facilities also have the option to use the expedited security plan. For a facility that submits a standard security plan or Alternative Security Program, ISCD reviews it for compliance with CFATS. If the plan or program is compliant, ISCD issues a letter of authorization and conducts an authorization inspection. If the facility passes the authorization inspection, ISCD issues a letter of approval and the facility implements the approved security plan or program. Subsequently, ISCD conducts compliance inspections to confirm that the facility has implemented its approved security plan or program. For tier 3 or tier 4 facilities that choose to submit the expedited security plan, ISCD reviews the expedited plan to determine if it is sufficient and, if so, issues a letter of acceptance.
If the expedited plan is determined to be facially deficient, the facility is no longer eligible to participate in the EAP and must submit a standard security plan or Alternative Security Program. For expedited facilities that receive a letter of acceptance, ISCD does not conduct an authorization inspection because the CFATS Act of 2014 does not provide for this inspection at expedited facilities. However, ISCD intends to subsequently conduct compliance inspections to confirm that the expedited facility has implemented its approved security plan. Regarding the EAP, the CFATS Act of 2014 states that, among other things, DHS is to issue guidance for EAP facilities, not later than 180 days after enactment of the act, that identifies specific security measures sufficient to meet Risk-Based Performance Standards; approve a facility’s expedited security plan if it is not facially deficient based upon a review of the expedited plan; verify a facility’s compliance with its expedited security plan through a compliance inspection; require the facility to implement additional security measures or suspend the facility’s certification if, during or after a compliance inspection, security measures are insufficient to meet Risk-Based Performance Standards based on misrepresentation, omission, or an inadequate description of the site; and conduct a full evaluation of the EAP and submit a report on the EAP to Congress not later than 18 months after the date of enactment of the act. On May 12, 2015, DHS issued EAP guidance for eligible facilities to use to prepare their expedited plans. DHS fully implemented the EAP about a month later, when facilities could begin submitting expedited security plans and certification forms to ISCD. Consistent with the act, DHS developed the guidance within 180 days after the date the act was enacted and identified specific security measures that are sufficient to meet Risk-Based Performance Standards applicable to facilities under DHS’s standard security plan process. The guidance is intended to help facilities prepare and submit their expedited security plans and certifications to ISCD, and includes an example that identifies specific (i.e., prescriptive) security measures that facilities are to have in place. Appendix I provides an example of the EAP’s prescriptive security measures and shows the measures that an EAP facility is to have in place to respond to a threat or actual theft or release of a chemical of interest. ISCD officials told us that, in developing prescriptive security measures for the EAP, they considered various sources, including lessons learned from approving prior standard security plans and Alternative Security Programs for tier 3 and tier 4 facilities and conducting inspections at these facilities; Risk-Based Performance Standards used to develop a standard security plan or Alternative Security Program; and relevant academic literature and security directives, guidelines, standards, and regulations issued by other federal agencies, such as the U.S. Army and the Department of Labor. ISCD officials told us that they developed the EAP security measures with clear, specific guidance, so that facility officials would have the information needed to successfully obtain approval of their expedited security plan upon submission. The CFATS Act of 2014 allows facilities to submit only one expedited plan to DHS.
Specifically, if ISCD determines that an expedited plan is facially deficient due to an error, the act does not allow facility officials to correct the error and resubmit the plan. In addition, ISCD officials said that prescriptive, clear, and easily understood EAP security measures are needed because the act requires DHS to approve an expedited plan that has all applicable prescribed security measures and does not provide for an authorization inspection under the EAP. Therefore, ISCD’s goal in developing required security measures for an expedited security plan was to ensure that a facility had adequate security in place until inspectors could conduct a compliance inspection at the facility approximately 1 year after approving the plan. ISCD officials also stated that, before and after implementing the EAP, they reached out to industry representatives to ensure that eligible facilities were aware of the EAP and its availability as an option to the standard security plan and Alternative Security Program. Specifically, ISCD held meetings with officials representing the Chemical Sector Coordinating Council, the Food and Agriculture Sector Coordinating Council, and the Oil and Natural Gas Subsector Coordinating Council before issuing the EAP guidance and also contacted them after doing so. ISCD also made presentations about the EAP at the Chemical Sector Security Summit in July 2015 and to other groups, including three labor unions, prior to implementing the EAP. In addition, ISCD chemical security inspectors and other staff routinely discuss the EAP when conducting CFATS-related outreach. Officials we interviewed at the three coordinating councils confirmed that DHS had contacted them about the EAP. Also, officials from 8 of the 11 industry organizations we interviewed said they have been generally pleased with DHS’s efforts to communicate with them about the CFATS program in recent years. However, officials from a Sector Coordinating Council stated that ISCD did not accept the council’s offers to assist in developing the EAP guidance and were concerned that ISCD may not accept future offers to work on CFATS issues. A senior ISCD official stated that ISCD did not accept the council’s offers to assist in developing the EAP guidance because the CFATS Act of 2014 required DHS to develop the guidance within 6 months of enactment, which did not allow time to involve all interested stakeholders in developing it. The ISCD official stated that ISCD continues to value stakeholder input, appreciates the desire of Sector Coordinating Council members and other stakeholders to provide input on CFATS materials, and plans to seek input from Sector Coordinating Councils and other stakeholders, as appropriate, on future relevant issues. ISCD officials also told us that they developed draft standard operating procedures for evaluating expedited security plans and conducting compliance inspections and that officials have used the draft procedures to evaluate expedited plans since the EAP’s implementation. According to ISCD officials, staff who review expedited security plans have received training on how to do so, and vetting an expedited plan is relatively simple and straightforward because it does not require extensive analysis. Specifically, ISCD staff review an expedited security plan to determine if facility officials have checked all required boxes for applicable security measures, adequately explained any planned security measures or material deviations, and signed the required certification.
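The review just described is essentially a checklist validation. The following sketch is a hypothetical illustration of that logic only; the data structure and field names are invented, and it does not represent ISCD's actual review tools.

```python
# Hypothetical sketch of the facial-sufficiency review described above.
# The plan structure and field names are invented for illustration.

def review_expedited_plan(plan):
    """Return (recommend_approval, deficiencies) for a submitted expedited plan."""
    deficiencies = []

    # Every applicable prescriptive security measure must be checked as in
    # place, or any planned measure or material deviation must be explained.
    for measure in plan["applicable_measures"]:
        if measure["in_place"]:
            continue
        if not measure.get("explanation", "").strip():
            deficiencies.append(f"measure {measure['id']}: planned measure "
                                "or material deviation not explained")

    # The certification must be signed by a facility official.
    if not plan.get("certification_signed"):
        deficiencies.append("certification form not signed")

    return (len(deficiencies) == 0), deficiencies

plan = {
    "applicable_measures": [
        {"id": "D.1.1", "in_place": True},
        {"id": "D.1.2", "in_place": False,
         "explanation": "to be implemented within 12 months"},
    ],
    "certification_signed": True,
}
ok, deficiencies = review_expedited_plan(plan)
print("recommend approval" if ok else f"facially deficient: {deficiencies}")
```

In this sketch, all deficiencies are collected rather than stopping at the first, mirroring a review summary that reports everything found.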
If ISCD staff concludes that all of these things have been done, they recommend that the ISCD Director approve the expedited security plan. ISCD staff prepares a summary of the review, including the recommendation, and provides it to the Director. These standard operating procedures were approved on May 25, 2017. DHS’s report to Congress on the EAP, issued on August 2, 2016, discussed all elements listed in the CFATS Act of 2014, but did not quantify costs associated with the EAP because most of DHS’s initial costs were for salary and benefits and DHS did not require its employees to track the hours they worked on the EAP. DHS also did not quantify associated costs to the regulated community, but stated that it expects that these costs were very low. In addition, DHS’s report did not include a recommended frequency of compliance inspections at facilities that use the program because, currently, there is no mandated frequency for any facility regardless of the type of security plan submitted. DHS noted that it would prioritize conducting an initial compliance inspection at an expedited facility over inspection of a similar facility that received approval of a traditional (i.e., standard) security plan or Alternative Security Program, in part, because that would be the first inspection conducted at the expedited facility. In addition, the report stated that, among other things, it was difficult to assess the effect of the EAP on DHS operations and the operations of facilities because only a single facility had participated in the EAP at the time the report was issued. Our analysis of the DHS report and follow-up discussions with ISCD officials is discussed below. Assess the number of eligible facilities that used the EAP versus the standard process to develop and submit a site security plan. DHS reported that, as of June 2, 2016, it assigned a final tier of 3 or 4 to 2,244 facilities (806 tier 3 facilities and 1,438 tier 4 facilities). Of these facilities, only one facility (a tier 4 facility) submitted an expedited security plan, while 2,194 facilities had submitted a security plan or Alternative Security Program using the standard process, and 49 facilities had yet to submit a security plan or Alternative Security Program. Assess the EAP’s impact on the backlog for site security plan approvals and authorization inspections. DHS reported that, with only a single facility electing to submit an expedited security plan, the EAP had no noticeable impact on DHS’s projected completion date for all authorization inspections and site security plan approvals. ISCD officials told us that if enough facilities use the EAP in the future, DHS would evaluate the EAP’s effect on its CFATS operations. Assess the ability of EAP facilities to submit sufficient site security plans. DHS reported that the only facility to submit an expedited security plan was able to submit a sufficient plan. Assess any impact of the EAP on the security of chemical facilities. DHS reported that it is difficult to assess the impact of the EAP on the security of chemical facilities because only one facility submitted an expedited security plan. DHS noted that the public availability of the EAP guidance would likely have a positive impact on chemical facility security because the guidance can serve as reference material for any facility looking to develop a security plan, regardless of whether that facility is regulated under CFATS. 
ISCD officials told us that if enough facilities use the EAP in the future, DHS would evaluate the EAP’s effect on the security of chemical facilities. Identify any costs and efficiencies associated with the EAP. DHS reported that it expended significant internal resources to comply with the statutory requirement to develop an EAP, but DHS did not quantify the cost associated with the EAP. According to DHS, the resources expended included costs to develop EAP processes and procedures, and develop the associated guidance and outreach materials. ISCD officials told us that most of DHS’s initial costs were for salary and benefits for federal employees working on the EAP, including policy, compliance, and legal staff who developed the EAP guidance, and information technology staff who updated the Chemical Security Assessment Tool. However, ISCD officials also told us that they were unable to quantify these costs because headquarters employees are only required to track overall hours worked each day versus time spent on individual tasks. ISCD officials stated that they have expended, and expect to continue to expend, minor funding amounts to keep the EAP operational. DHS also reported that it was unable to discern how much time and resources members of the regulated community or other stakeholders expended on activities, such as reviewing EAP proposals or considering whether to use the EAP. However, DHS stated that it expects that EAP costs to the regulated community were very low. Recommend the frequency of compliance inspections that may be required for EAP facilities. DHS discussed factors that can influence the frequency of compliance inspections, but did not quantify a recommended frequency for facilities in the EAP because, currently, there is no mandated frequency for any facility regardless of the type of security plan submitted. According to DHS, a variety of factors can influence the frequency of compliance inspections regardless of the type of site security plan the facility submits, including the facility’s risk-based tier and previous compliance history, the corporate owner’s compliance history, and the number and type of planned measures in the facility’s approved security plan. The report also stated that DHS would consider if a facility elected to submit an expedited security plan when determining the timing of the facility’s initial compliance inspection and frequency of subsequent inspections. Although DHS did not quantify a recommended frequency of compliance inspections, it noted that the election to use an expedited security plan would have the most impact on scheduling the initial compliance inspection because that would be the first inspection DHS would conduct at the facility. In addition, DHS would prioritize conducting an initial compliance inspection at an expedited facility over inspection of a similar facility that received approval of a traditional (i.e., standard) security plan or Alternative Security Program. According to DHS, as of April 2017, 2 of the 2,496 eligible facilities had used the EAP since ISCD implemented it; however, one of the two facilities was no longer in the EAP because ISCD no longer considers the facility to be high risk. ISCD had approved both facilities’ expedited security plans—one before DHS issued the aforementioned report to Congress and one after the report. 
ISCD officials stated that they have not assessed why only two facilities have used the EAP and do not intend to do so because they did not have a preconceived number of facilities that they expected to use it. They also said that the EAP is one of three options—the expedited security plan, the standard security plan, and the Alternative Security Program—that tier 3 and tier 4 facilities can use. ISCD does not encourage facilities to use the EAP or discourage facilities from using it because facility officials are in the best position to decide which approach is the best option for their facility. Officials representing the two EAP chemical facilities told us that their companies are small operations that store a single chemical of interest on site and do not have staff with extensive experience or expertise in chemical security. Officials from both facilities said they used the EAP instead of a standard site security plan or Alternative Security Program because the EAP would reduce the time and cost to prepare and submit their security plans. Officials from both facilities also stated that the EAP’s prescriptive nature helped them to quickly determine the security measures required to be in their site security plans. For example, the contractor who prepared the site security plan for one of the two EAP facilities said that the facility probably saved $2,500 to $3,500 in consulting fees by using the EAP instead of a standard security plan. According to ISCD, the first compliance inspection at the one remaining EAP facility is scheduled to start later in calendar year 2017. ISCD and industry stakeholders we interviewed identified several factors that may explain why the EAP has not been more widely used, as discussed below. Timing of the EAP’s Implementation. ISCD officials stated that the timing of the EAP’s implementation may be the primary reason that only two facilities have used it. The officials explained that, by the time ISCD had implemented the EAP, the majority of eligible facilities had already submitted standard site security plans or Alternative Security Programs to ISCD, so it was not worthwhile for the facilities to start over again to use the EAP. For example, ISCD officials told us that they had already approved standard security plans and Alternative Security Programs from about 61 percent (1,463 of approximately 2,400) of facilities that had been assigned to tier 3 or tier 4 prior to the EAP’s implementation. Also, officials from 5 of the 11 industry organizations we interviewed stated that the timing of the EAP’s implementation resulted in limited interest in using the EAP. Prescriptive Nature of the EAP. As previously discussed, the CFATS Act of 2014 required DHS to develop specific security measures for the EAP that are sufficient to meet Risk-Based Performance Standards. ISCD officials and officials from 6 of the 11 industry organizations we interviewed stated that the prescriptive security measures required in the expedited security plan likely deterred some facilities from using the EAP. According to ISCD officials, some industry officials think that certain EAP-required security measures are too strict for tier 3 and tier 4 facilities.
Officials we interviewed from 5 of the 11 industry organizations said that some, if not most, EAP-required security measures are more robust or strict than they should be for tier 3 and tier 4 facilities; however, officials from a Sector Coordinating Council and a member organization said that the EAP’s required security measures are fair or appropriate for tier 3 and tier 4 facilities. ISCD officials agreed that some EAP-required security measures are strict because the CFATS Act of 2014 requires that DHS develop specific security measures and approve expedited security plans that are determined to not be facially deficient based only on a review of the plan. For example, an industry official told us that a security measure pertaining to screening and inspection of vehicles is too strict. Specifically, the EAP guidance states that, prior to allowing vehicles access to the facility’s perimeter, a facility must screen and inspect all vehicles for firearms, explosives, or certain materials by visually inspecting the vehicle or by using a trained explosive detection dog team, under/over vehicle inspection systems, or cargo inspection systems. ISCD officials told us that this security measure is required because ISCD would not be able to evaluate the capability of a facility’s random or percentage-based screening and inspection program by doing a review of the facility’s expedited security plan; therefore, ISCD requires that EAP facilities apply this requirement to all vehicles prior to accessing a facility’s perimeter. However, ISCD officials and officials from 4 of 11 industry organizations also stated that the EAP’s prescriptive measures actually could encourage some facilities to use the EAP. For example, officials from an industry organization stated that smaller facilities often lack staff with the expertise needed to prepare a standard site security plan or Alternative Security Program and may prefer the EAP because it clearly states what a facility is required to do to meet security measures. This was consistent with the views of the officials representing the two facilities that submitted expedited security plans, as discussed earlier. Lack of an Authorization Inspection under the EAP. As previously discussed, ISCD conducts an authorization inspection at facilities using the standard process, but does not conduct this inspection at facilities using the EAP. ISCD officials stated that the lack of an authorization inspection under the EAP may discourage some facilities from using it because some facility officials have told ISCD that this inspection provides useful information about their facility’s security. However, ISCD officials also said that some facilities may prefer the lack of an authorization inspection under the EAP because this expedites the approval process for a site security plan compared with the process for a standard security plan or Alternative Security Program. Certification Form Required for the EAP. An ISCD official and an industry official we interviewed told us that the certification form that a facility official must sign under penalty of perjury and submit to ISCD with the expedited security plan may deter some facilities from using the EAP. For example, the DHS official stated that the form contains strict requirements and could result in the signing official being legally liable and subject to penalties in certain circumstances.
However, officials for the two facilities that submitted expedited security plans and certification forms to ISCD told us that they were not concerned about signing the form. Two other factors that could influence facilities’ participation in the EAP are the introduction of revised processes for (1) facilities to provide information to ISCD and (2) ISCD to determine the risk tier for each facility. ISCD officials stated that, in fall 2016, they implemented a revised Chemical Security Assessment Tool for facilities to provide information to ISCD in response to industry concerns, such as facilities being asked to answer duplicate questions. In the same time frame, ISCD implemented a revised risk-tiering methodology in response to our prior reports and stakeholder concerns about not addressing all elements of risk (threat, vulnerability, and consequence). ISCD officials said they revised the risk-tiering methodology to enhance its ability to consider the elements of risk associated with a terrorist attack. The revised Chemical Security Assessment Tool, called the Chemical Security Assessment Tool 2.0, includes a revised Top-Screen and a streamlined version of the standard site security plan. ISCD officials said that a primary reason they revised the assessment tool was to eliminate duplication and confusion associated with the original standard security plan. The streamlined security plan, in ISCD officials’ view, flows more logically, is more user-friendly, requires facility officials to write less narrative, does not have ambiguous questions, and pre-populates data from one part to another, so users do not have to re-type the same information multiple times. According to ISCD officials, industry feedback about Chemical Security Assessment Tool 2.0 has been very positive. Officials in 9 of the 11 industry organizations we interviewed told us that they have positive views about the revised assessment tool and that it is better than the original assessment tool. For example, officials from 5 of the 11 industry organizations stated that ISCD had improved the assessment tool by streamlining or eliminating duplicative questions. If the updated tool proves easier to use, it could affect future interest in using the expedited program. Regarding the revised tiering methodology, ISCD initiated a phased approach to re-tier about 27,000 facilities. ISCD officials said these facilities must re-submit Top-Screens using Chemical Security Assessment Tool 2.0, and the revised tiering methodology will be used to determine if each facility is high risk and, if so, to assign the appropriate risk tier to the facility. According to a senior ISCD official, the re-tiering efforts are resulting in shifts in the risk assessments for some facilities due to the revised tiering methodology and because many facilities have not submitted new information in 7 or 8 years; however, dramatic shifts in the risk tiers of a large number of facilities are not expected. Nevertheless, ISCD is uncertain about the effect that Chemical Security Assessment Tool 2.0 and the revised tiering methodology will have on the future use of the EAP because ISCD cannot predict the extent to which facilities may be reassigned from tier 1 or tier 2 to tier 3 or tier 4, or vice versa; may be assigned to tier 3 or tier 4 and submit an expedited security plan instead of a streamlined standard plan or Alternative Security Program, or vice versa; may be new to CFATS and assigned to tier 3 or tier 4; or may no longer be considered to be high risk.
Given that only one facility is currently covered by the EAP, and about 27,000 facilities are to ultimately re-submit Top-Screens using Chemical Security Assessment Tool 2.0 and be tiered using the revised tiering methodology, it is too early to tell what impact, if any, the revised CFATS process will have on the future use of the EAP.

We provided a draft of this report to DHS for review and comment. DHS did not provide formal comments, but did provide a technical comment, which we incorporated, as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

The following security measures are from Section D of the site security plan example for the Expedited Approval Program. For facilities that prepare an expedited security plan and submit it to the Department of Homeland Security (DHS), facility officials are to put a checkmark next to each applicable security measure that the facility has in place. For each applicable security measure that the facility does not have in place, facility officials are to explain the security measure planned to be implemented in the next 12 months. If the facility has a material deviation from a security measure, facility officials are to explain compensatory measures that provide comparable security.

Section D: Response Measures (Risk-Based Performance Standards 9, 11, 13, and 14)

D.1.1 ___ The facility has a defined emergency and security response organization in order to respond to site emergencies and security incidents.

D.1.2 ___ The facility has a crisis management plan which includes emergency response procedures, security response plans, and post-incident security plans (post-terrorist attack, security incident, natural disaster, etc.).

For Release facilities only:

D.1.2.1 ___ The facility has additional portions to its crisis management plan, which include emergency shutdown plans, evacuation plans, re-entry/recovery plans, and community notification plans to account for response to Release chemicals of interest.

___ The facility is not regulated for Release chemicals of interest.

D.1.3 ___ The facility has designated individual(s) responsible for executing each portion of the crisis management plan, and individual(s) have been trained to execute all duties.

D.1.4 ___ The facility has the appropriate resources (staff, emergency/response equipment, building space, communications equipment, process controls/safeguards, etc.) to execute all response plans. Emergency equipment includes at least one of the following:
- A radio system that is redundant and interoperable with law enforcement and emergency response agencies.
- At least one backup communications system, such as cell phones/desk phones.
- An emergency notification system (e.g., a siren or other facility-wide alarm system).
- Automated control systems or other process safeguards for all process units to rapidly place critical asset(s) in a safe and stable condition and procedures for their use in an emergency.
- Emergency safe-shutdown procedures for all process units.
D.1.5 ___ All facility personnel have been trained on all response plans, and response plans are exercised regularly, at a minimum biennially.

D.1.6 ___ The facility has an active outreach program with local first responders (Police Department and Fire Department) which includes providing response documentation to agencies, providing facility layout information to agencies, inviting agencies to facility orientation tours, notifying agencies of the facility’s chemicals of interest (regulated chemicals of interest and other chemical holdings identified on Appendix A) and security concern, and maintaining regular communication with agencies.

D.2.1 ___ The facility has a documented process for increasing security measures commensurate with the designated threat level during periods of elevated threats tied to the National Terrorism Advisory System and when notified by DHS of a specific threat.

D.2.2 ___ The facility will begin to execute security measures for elevated and specific threats within 8 hours of notification.

D.2.3 ___ The facility will execute the following measures as a result of an elevated or specific threat:
- Coordinate with Federal, state, and local law enforcement agencies.
- Increase detection efforts through either dedicated monitoring of security systems (Intrusion Detection System (IDS) or Closed Circuit Television (CCTV)), increased patrols of the perimeter and/or asset area(s), or stationing of personnel at access points and/or asset area(s).
- For Theft/Diversion and Sabotage facilities only, increase frequency of outbound screening and inspections.
- For Sabotage facilities only, increase monitoring of outbound shipments.
- For Release facilities only, increase frequency of inbound screening and inspections.

In addition to the contact named above, John Mortin, Assistant Director, and Joseph E. Dewechter, Analyst-in-Charge, managed this audit engagement. Chuck Bausell, Michele Fejfar, Tracey King, Michael Lenington, and Claire Peachey made significant contributions to this report. | Facilities that produce, use, or store hazardous chemicals could be of interest to terrorists intent on using them to inflict mass casualties in the United States. DHS established the CFATS program to, among other things, identify and assess the security risk posed by chemical facilities. DHS places high-risk facilities into one of four risk-based tiers and inspects them to ensure compliance with DHS standards. The CFATS Act of 2014 created the Expedited Approval Program as an option for the two lower-risk tier facilities (tiers 3 and 4) to reduce the burden and expedite the processing of security plans. The act further required that DHS report on its evaluation of the expedited program to Congress. The CFATS Act of 2014 also included a provision for GAO to assess the expedited program. This report discusses (1) DHS's implementation of the expedited program and its report to Congress and (2) the number of facilities that have used the program and factors affecting participation in it. GAO reviewed laws and DHS guidance, analyzed DHS's report to Congress, and interviewed DHS officials. GAO also received input from officials with three industry groups that represented the most likely candidates to use the program, and officials representing eight of their member organizations. The results of this input are not generalizable, but provide insights about the expedited program. GAO is not making recommendations in this report.
For more information, contact Chris Currie at (404) 679-1875 or [email protected]. The Department of Homeland Security (DHS) fully implemented the Chemical Facility Anti-Terrorism Standards (CFATS) Expedited Approval Program in June 2015 and reported to Congress on the program in August 2016, as required by the Protecting and Securing Chemical Facilities from Terrorist Attacks Act of 2014 (CFATS Act of 2014). DHS's expedited program guidance identifies specific security measures that eligible (i.e., tiers 3 and 4) high-risk facilities can use to develop expedited security plans, rather than developing standard (non-expedited) security plans. Standard plans provide more flexibility in securing a facility, but are also more time-consuming to process. DHS's report to Congress on the expedited program discussed all required elements. For example, DHS was required to assess the impact of the expedited program on facility security. DHS reported that it was difficult to assess the impact of the program on security because only one facility had used it at the time of the report. DHS officials stated that they would further evaluate the impact of the program on security if enough additional facilities use it in the future. As of April 2017, only 2 of the 2,496 eligible facilities opted to use the Expedited Approval Program; various factors affected participation. Officials from the two facilities told GAO they used the program because its prescriptive nature helped them quickly determine what they needed to do to implement required security measures and reduced the time and cost to prepare and submit their security plans to DHS. According to DHS and industry officials GAO interviewed, low participation to date could be due to several factors: DHS implemented the expedited program after most eligible facilities already submitted standard (non-expedited) security plans to DHS; the expedited program's security measures may be too strict and prescriptive, not providing facilities the flexibility of the standard process; and DHS conducts in-person authorization inspections to confirm that security plans address risks under the standard process, but does not conduct them under the expedited program. DHS officials noted that some facilities may prefer having this inspection because it provides them useful information. Recent changes in the CFATS program could also affect future use of the expedited program. In fall 2016, DHS updated its online tool for gathering data from facilities. Officials at DHS and 5 of the 11 industry organizations GAO contacted stated that the revised tool is more user-friendly and less burdensome than the previous one; however, it is unclear how the new tool might affect future use of the expedited program. Also, in fall 2016, DHS revised its methodology for determining the level of facility risk, and one of the two facilities that participated in the expedited program is no longer considered high risk and thus no longer participates in the program.
The FEHBP is the largest employer-sponsored health insurance program in the country, providing health insurance coverage for about 8 million federal employees, retirees, and their dependents through contracts with private insurance plans. All currently employed and retired federal employees and their dependents are eligible to enroll in FEHBP plans, and about 85 percent of eligible workers and retirees are enrolled in the program. For 2007, FEHBP offered 284 plans: 14 fee-for-service (FFS) plans, 209 health maintenance organization (HMO) plans, and 61 consumer-directed health plans (CDHP). About 75 percent of total FEHBP enrollment was concentrated in FFS plans, about 25 percent in HMO plans, and less than 1 percent in CDHPs. Total FEHBP health insurance premiums paid by the government and enrollees were about $31 billion in fiscal year 2005. As set by statute, the government pays 72 percent of the average premium across all FEHBP plans but no more than 75 percent of any particular plan’s premium. The premiums are intended to cover enrollees’ health care costs, plans’ administrative expenses, reserve accounts specified by law, and OPM’s administrative costs. Unlike some other large purchasers, FEHBP offers the same plan choices to currently employed enrollees and retirees, including Medicare-eligible retirees who opt to receive coverage through FEHBP plans rather than through the Medicare program. The plans include benefits for medical services and prescription drugs. By statute, OPM can negotiate contracts with health plans without regard to competitive bidding requirements. Plans meeting the minimum requirements specified in the statute and regulations may participate in the program, and plan contracts may be renewed automatically each year. OPM may terminate contracts if the minimum standards are not met. OPM administers a reserve account within the U.S. Treasury for each FEHBP plan, pursuant to federal regulations. Reserves are funded by a surcharge of up to 3 percent of a plan’s premium. Funds in the reserves above certain minimum balances may be used, under OPM’s guidance, to defray future premium increases, enhance plan benefits, reduce government and enrollee premium contributions, or cover unexpected shortfalls from higher-than-anticipated claims. On January 1, 2006, Medicare began offering prescription drug coverage (also known as Part D) to Medicare-eligible beneficiaries. Employers offering prescription drug coverage to Medicare-eligible retirees enrolled in their plans could, among other options, offer their retirees drug coverage that was actuarially equivalent to standard coverage under Part D and receive a tax-exempt government subsidy to encourage them to retain and enhance their prescription drug coverage. The subsidy provides payments equal to 28 percent of each qualified beneficiary’s prescription drug costs that fall within a certain threshold and is estimated to average about $670 per beneficiary per year. OPM opted not to apply for the retiree drug subsidy. The average annual growth in FEHBP premiums has slowed since 2002—declining each year from 2003 through 2007—and was generally lower than the growth for other purchasers since 2003. After a period of decreases in 1995 and 1996, FEHBP premiums began to increase in 1997, reaching a peak increase of 12.9 percent for 2002. The growth in average FEHBP premiums began slowing in 2003 and reached a low of 1.8 percent for 2007.
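The statutory premium-sharing formula described above lends itself to a short illustration. The sketch below is ours and is deliberately simplified: the actual computation involves details such as enrollment weighting of the average premium that are not modeled here, and the dollar amounts are loosely anchored to the 2006 family premium figures cited later in this statement.

```python
# Illustrative sketch of the statutory FEHBP premium split described above:
# the government pays 72 percent of the average premium across all plans,
# but never more than 75 percent of any particular plan's premium.
# Simplified: the real computation uses an enrollment-weighted average
# premium and other details omitted here.

def government_share(plan_premium, program_average_premium):
    return min(0.72 * program_average_premium, 0.75 * plan_premium)

program_average = 942.00  # roughly the 2006 average monthly family premium
for plan_premium in (800.00, 942.00, 1100.00):
    gov = government_share(plan_premium, program_average)
    print(f"plan ${plan_premium:7.2f}: government ${gov:6.2f}, "
          f"enrollee ${plan_premium - gov:6.2f}")
```

For a plan priced at the program average, the government share is about 72 percent; for plans priced well below the average, the 75-percent cap binds, so enrollees always pay at least a quarter of their own plan's premium.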
The average annual growth in FEHBP premiums was faster than that of CalPERS and surveyed employers from 1997 through 2002—8.5 percent compared with 6.5 percent and 7.1 percent, respectively. However, beginning in 2003, the average annual growth rate in FEHBP premiums was slower than that of CalPERS and surveyed employers—7.3 percent compared with 14.2 percent and 10.5 percent, respectively. (See fig. 1.) The premium growth rates for the 10 largest FEHBP plans by enrollment—accounting for about three-quarters of total FEHBP enrollment—ranged from 0 percent to 15.5 percent for 2007. Premium growth rates across the smaller FEHBP plans varied more widely. Regarding enrollee premiums—the share of total premiums paid by enrollees—the growth in average enrollee premiums generally paralleled total premium growth from 1994 through 2007. In 2006, average total monthly FEHBP premiums were $415 for individual plans and $942 for family plans. The enrollee premium contributions were $123 for individual plans and $278 for family plans. Projected increases in the cost and utilization of services and in the cost of prescription drugs accounted for most of the average annual premium growth across FEHBP plans for the period from 2000 through 2007, although projected withdrawals from reserves offset much of this growth for 2006 and 2007. Absent projected changes associated with other factors, projected increases in the cost and utilization of services and the cost of prescription drugs would have accounted for a 9 percent increase in average premiums for 2007. Projected increases in the cost and utilization of services alone would have accounted for about a 6 percent increase in premiums for 2007, down from a peak of about 10 percent for 2002. Projected increases in the cost of prescription drugs alone would have accounted for about a 3 percent increase in premiums for 2007, down from a peak of about 5 percent for 2002. Enrollee demographics—particularly the aging of the enrollee population—were projected to have less of an effect on premium growth. Projected decreases in the costs associated with certain other factors, including benefit changes that resulted in less generous coverage and enrollee choice of plans—typically the migration to lower-cost plans—generally helped offset average premium growth for 2000 through 2007 to a small extent. Projected withdrawals from reserves offset average premium growth for 2006 and 2007. Officials we interviewed from most of the plans stated that OPM monitored their plans’ reserve levels and worked closely with them to build up or draw down reserve funds gradually to avoid wide fluctuations in premium growth from year to year. Projected additions to reserves nominally contributed to average premium growth—by less than 1 percentage point—for 2000 through 2005. However, projected withdrawals from reserves offset average premium growth by about 2 percentage points for 2006 and 5 percentage points for 2007. (See fig. 2.) We also reviewed detailed data available for five large FEHBP plans on claims actually incurred from 2003 through 2005. These data showed that most of the increase in total expenditures per enrollee was explained by expenditures on prescription drugs (34 percent) and on hospital outpatient services (26 percent). Officials we interviewed from several FEHBP plans stated that the retiree drug subsidy would have had a small effect on premium growth had OPM applied for the subsidy and used it to offset premiums.
Officials we interviewed from several FEHBP plans stated that the retiree drug subsidy would have had a small effect on premium growth had OPM applied for the subsidy and used it to offset premiums. First, drug costs for Medicare beneficiaries enrolled in these plans accounted for a small proportion of total expenses for all enrollees, and the subsidy would have helped offset less than one-third of these expenses. Second, because the same plans offered to currently employed enrollees were offered to retirees, the effect of the subsidy would have been diluted when spread across all enrollees. However, officials we interviewed from two large plans with high shares of elderly enrollees stated that the subsidy would have lowered premium growth for their plans. Officials from one of these plans estimated that 2006 premium growth could have been 3.5 to 4 percentage points lower. Our analysis of the potential effect of the retiree drug subsidy on all plans in FEHBP showed that had OPM applied for the subsidy and used it to offset premium growth, the subsidy would have lowered the 2006 premium growth by 2.6 percentage points from 6.4 percent to about 4 percent. The reduction in premium growth would have been a onetime reduction for 2006. Absent the drug subsidy, FEHBP premiums in the future would likely be more sensitive to drug cost increases than would be premiums of other large plans that received the retiree drug subsidy for Medicare beneficiaries. OPM officials explained that there was no need to apply for the subsidy because the intent of the subsidy was to encourage employers to continue offering prescription drug coverage to Medicare-eligible enrollees, and FEHBP plans were already doing so. The potential effect of the subsidy on premium growth would also have been uncertain because the statute did not require employers to use the subsidy to mitigate premium growth. Officials we interviewed from most of the FEHBP plans with higher-than-average premium growth in 2006 cited increases in the actual cost and utilization of services and high shares of elderly enrollees and early retirees as key drivers of premium growth. Our analysis of financial data provided by six of these plans showed that the average increase in total expenditures per enrollee from 2003 through 2005 was about 40 percent—compared with the average of 25 percent for five large FEHBP plans that represented about two-thirds of total FEHBP enrollment. From 2001 through 2005, the average age of enrollees across all eight plans with higher-than-average premium growth increased by 2.7 years—compared with an average increase of 0.5 years across all FEHBP plans. Officials we interviewed from most of the FEHBP plans with lower-than-average premium growth in 2006 cited adjustments for previously overestimated projections of cost growth and benefit changes that resulted in less generous coverage for prescription drugs as factors that limited premium growth. Our analysis of financial data provided by two plans showed that per-enrollee expenditures for prescription drugs increased by 3 percent for one plan and 13 percent for the other from 2003 through 2005—compared with 30 percent for the average of the five large FEHBP plans. Also, from 2001 through 2005, the average age of enrollees across all six of these plans decreased by 0.5 years—compared with an average increase of 0.5 years across all FEHBP plans. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the subcommittee may have. For future contacts regarding this testimony, please contact John E. Dicken at (202) 512-7119 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Randy Dirosa, Assistant Director; Iola D'Souza; and Timothy Walker made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Average health insurance premiums for plans participating in the Federal Employees Health Benefits Program (FEHBP) have risen each year since 1997. These growing premiums result in higher costs to the federal government and plan enrollees. The Office of Personnel Management (OPM) oversees FEHBP, negotiating benefits and premiums and administering reserve accounts that may be used to cover plans' unanticipated spending increases. GAO was asked to discuss its December 22, 2006 report, entitled Federal Employees Health Benefits Program: Premium Growth Has Recently Slowed, and Varies Among Participating Plans (GAO-07-141). In this report, GAO reviewed (1) FEHBP premium trends compared with those of other purchasers, (2) factors contributing to average premium growth across all FEHBP plans, and (3) factors contributing to differing trends among selected FEHBP plans. GAO reviewed data provided by OPM relating to FEHBP premiums and factors contributing to premium growth. For comparison purposes, GAO examined premium data from the California Public Employees' Retirement System (CalPERS) and surveys of other public and private employers. GAO also interviewed officials from OPM and eight FEHBP plans with premium growth that was higher than average and six FEHBP plans with premium growth that was lower than average. Growth in FEHBP premiums recently slowed, from a peak of 12.9 percent for 2002 to 1.8 percent for 2007. Starting in 2003, FEHBP premium growth was generally slower than for other purchasers. Premium growth rates for the 10 largest FEHBP plans by enrollment—accounting for about three-quarters of total enrollment—ranged from 0 percent to 15.5 percent for 2007. Projected increases in the cost and utilization of health care services and in the cost of prescription drugs accounted for most of the average annual FEHBP premium growth for 2000 through 2007. Absent other factors, these increases would have raised 2007 average premiums by 9 percent. Other projected factors, including benefit changes resulting in less generous coverage and enrollee migration to lower-cost plans, slightly offset average premium growth. In 2006 and 2007, projected withdrawals from reserves helped offset average premium growth—by 2 percentage points for 2006 and 5 percentage points for 2007. To explain the factors associated with premium growth, officials GAO interviewed from most of the FEHBP plans with higher-than-average premium growth cited increases in the cost and utilization of services as well as a high share of elderly enrollees and early retirees. Officials GAO interviewed from most plans with lower-than-average premium growth cited adjustments made for previously overestimated projections of cost growth, and some officials cited benefit changes that resulted in less generous coverage for prescription drugs.
The plans with lower-than-average premium growth also experienced a decline of 0.5 years in the average age of their enrollees compared with an increase of 0.5 years in the average age of all FEHBP enrollees.
The media industry has its own terminology, and this report defines key terms where they are first used. Typically, the general public views television programming through broadcast or subscription video service. Broadcast television provides free over-the-air programming to the public through local television stations. By contrast, consumers pay fees for subscription video service to video providers, including cable operators, satellite providers, or telecommunications companies. Programming for broadcast and subscription video service differs, as illustrated in figure 1. Broadcast television consists mainly of four major broadcast networks (ABC, CBS, Fox, and NBC) and several smaller networks, such as the CW Television Network, MyNetworkTV, and ION Television. Each of the four major broadcasters owns and operates some local television stations; other stations can be affiliated with one of the major broadcasters or, as is the case with public television, unaffiliated with the major broadcasters. The four major broadcasters provide scripted and nonscripted programming to the local television stations that is produced either by the major broadcasters' affiliated production companies or by independent producers. The development process of scripted programs (i.e., drama and comedy series) for prime time programming involves steps that allow major broadcasters to periodically assess the program as it develops, as described in figure 2. In contrast, the development process for nonscripted programs, such as reality programs and game shows, does not involve most of the steps shown in figure 2. Scripts and pilots do not need to be developed for nonscripted programs, making them less expensive to produce than scripted programs. For subscription video service, video providers obtain a variety of programming from both broadcasters (which can include major networks and local stations) and cable networks. Video providers must negotiate with broadcasters and cable networks to air and distribute their programming. Negotiations include the price, terms, and conditions for distribution on the video providers' systems. Video providers have the discretion to select which cable networks will be available and, subject to negotiation, how they will be packaged and marketed to subscribers. According to a recent FCC report, more than 500 cable networks exist, including national cable networks (such as CNN, Discovery Channel, ESPN, and Fox News) as well as regional cable networks (such as the California Channel, Comcast SportsNet Chicago, and the YES Network). Cable networks can provide niche programming—that is, programming that targets specific demographics. For instance, Lifetime Network offers programming that specifically targets women, while MTV Network targets programming for the 18-to-34 age demographic. The general public receives radio programming through commercial and public radio stations. Over the last 5 years, the number of full-power radio stations has increased from 13,590 in 2005 to over 14,600 in 2009, with the vast majority of these stations being commercial (78 percent, or 11,430 stations) and the remainder being public (22 percent, or 3,198 stations). Following passage of the Telecommunications Act of 1996, concentration in radio station ownership increased significantly because of the act's relaxation of national and local multiple radio ownership limits. For example, in 1996, the two largest radio station owners held fewer than 65 radio stations each.
By contrast, as of 2009, Clear Channel Communications Inc. owned over 800 radio stations (down from 1,135 in 2007), and the second-largest group owner, Cumulus Broadcasting LLC, owned about 300 radio stations (see table 2). In 2009, the top 10 radio station owners owned 20 percent of all commercial radio stations. In addition, each radio station has a primary programming format designation that describes the programming content on that station. For example, in 2009, the primary format of radio station KQSD in Lowry, South Dakota, was Classical, its secondary format was News, and its tertiary format was Jazz. As such, the station primarily plays Classical music, but it also provides some news and plays some Jazz. FCC awards licenses to television and radio stations to use the airwaves expressly on the condition that licensees serve the public interest and are responsive to the needs of their local communities. Toward this end, FCC has long identified localism, competition, and diversity as its three core goals of media policy. Within this framework, FCC has considered the public interest best served by promoting free expression of diverse views and has promoted program diversity by limiting the number of broadcast outlets any one entity may own. As such, individual radio and television stations generally have discretion to select programming and to determine how best to serve the local community audience. Since the mid-1990s, FCC has amended or repealed a number of rules and regulations affecting the media industry. In 1995, FCC repealed the Financial Interest and Syndication Rules (Fin-Syn rules) so that a major broadcaster can own programming that it airs during prime time hours, as well as own syndication rights to programs purchased from independent producers. Following the repeal of the Fin-Syn rules, each of the four major broadcasters merged with, or acquired an ownership interest in, at least one major production studio. For instance, the Walt Disney Company acquired ABC and developed ABC Television Studio; CBS became affiliated with the studio Paramount Television; and NBC merged with Universal Pictures. In addition, News Corporation—which launched the Fox Broadcasting Network in 1986—owns several production studios, including 20th Century Fox. FCC is required to review media ownership rules every 4 years and determine whether those rules are necessary in the public interest. Although FCC regulates television primarily through ownership rules and station licensing, some of its other rules also affect aspects of television programming. Some of the key rules that affect programming and carriage were adopted in 1992 and are summarized below. Retransmission consent and must carry rules. Under these rules, every 3 years local commercial television stations (including those owned and operated by the major broadcasters) must decide whether to negotiate individual retransmission consent agreements with each cable operator in their designated market areas for compensation in exchange for the cable operator's right to carry the broadcast signal. In lieu of negotiation, stations may elect to require each cable operator in their designated market areas to carry their signals (i.e., must carry), without receiving compensation for such carriage. Program carriage rule.
This rule prevents a video provider from requiring a financial interest in programming or coercing a programmer (i.e., cable network) to grant exclusive rights as a condition for carriage, or from discriminating against an independent cable network in a way that unreasonably restrains the ability of the network to compete. Commercial leased access rule. Under this rule, cable operators are required to set aside a certain number of channels, depending on the size of the cable system, that can be leased out to independent cable networks for access on their distribution systems. Congress has required FCC to (1) determine the maximum reasonable rates that a cable operator may establish for commercial use of the designated channels; (2) establish reasonable terms and conditions for such use, including those for billing and collections; and (3) establish procedures for the expedited resolution of disputes concerning rates or carriage. Major broadcasters and their affiliated studios have produced the majority of broadcast prime time programming in each of the selected years that we analyzed. In particular, major broadcaster-affiliated studios produced from 76 to 84 percent of broadcast prime time programming hours, with the remaining hours coming from independent producers. As shown in figure 3, in most of the years that we reviewed, the share of major broadcaster-produced prime time programs did not change significantly. However, in 2008, prime time programming from independent producers increased slightly compared with such programming in 2005. For the fall 2009 broadcast prime time schedule, the top five program producers as measured in prime time program hours were studios affiliated with ABC, CBS, Fox, NBC, and Warner Bros. These producers provided approximately 76 programs, or about 82 percent of the fall prime time schedule. We identified 11 prime time programs that fell into the independent producer category for the fall 2009 prime time schedule. Of those, Sony Pictures Television Studio produced 3 programs, and 8 other independent producers each supplied a program. Although most of the programs produced during the years we reviewed were affiliated with major broadcasters, a previous FCC-commissioned study indicated that the number and affiliation of prime time programming producers has changed significantly since the repeal of the Fin-Syn rules in 1995. The study found that in 1995, the top five program producers provided about 54 percent of prime time programming, with three producers affiliated with a major broadcaster. Since basic cable networks are also a source of television programming, we analyzed the ownership of those networks as an indicator of which entities control the television programming on the networks. On the basis of our analysis of ownership interests over the last decade, we found that a number of companies have ownership interests in a basic cable network (cable network), but a much smaller group of companies have ownership interests in 5 or more such networks. From 1998 to 2008, 94 companies on average have owned an interest in at least 1 cable network. The number of companies has declined somewhat over time, however, from a high of 106 companies in 1998 to a low of 81 companies in 2008.
Cable network owners include owners of major broadcasters, such as News Corporation, which owns Fox, and Walt Disney Company, which owns ABC; cable operators, such as Comcast and Cablevision; owners of major publications and television stations, such as Tribune Company and Hearst Corporation; and other media companies, such as Liberty Media Corporation and Scripps Networks Interactive. On the basis of our analysis of all the companies with cable network ownership interests from 1998 to 2008, we found a range of 11 to 13 companies that owned an interest in 5 or more networks in at least 1 year. Of these companies, we found a range of 5 to 7 companies that owned at least 12 cable networks over the decade. As shown in figure 4, the number of basic cable networks owned by these top 5 companies has not changed significantly over the last 11 years. Viacom and Walt Disney Company had ownership interests in the most cable networks over the last decade, with each owning more than 20 networks in each year. None of the top five owners has increased the number of cable networks owned since 2001. In 2008, these top five companies owned about half of basic cable networks. We analyzed ownership of the 20 most widely distributed basic cable networks, as measured by the number of subscribers for each year from 1998 to 2008 (referred to as top 20 cable networks). On the basis of our analysis, we found major broadcasters and companies affiliated with both major broadcasters and cable operators combined owned 50 percent or more of the top 20 networks. As shown in figure 5, the number of major broadcaster-owned top 20 cable networks ranged from 6 in 1998 to a high of 12 in 2004 before declining to 8 cable networks in 2008. The number of top 20 cable networks owned by companies affiliated with both major broadcasters and cable operators remained relatively steady during the decade at 3 to 4. Cable operators without a broadcast company affiliation owned 5 of the top 20 cable networks in 1998, but this number declined over time and was zero in 2007 and 2008. In 2008, the last year of our analysis of ownership of the top 20 cable networks, we found 8 cable networks that were affiliated with major broadcasters. For example, 2 top 20 cable networks, ABC Family Channel and Disney Channel, are owned by Disney, a company that also owns the ABC broadcast network. Four networks in the top 20 were affiliated with both major broadcasters and cable operators. For example, in 2008, CNN, TBS, and TNT were owned by Time Warner, a company affiliated with cable operator Time Warner Cable, broadcaster CW Television Network, and television production studio Warner Bros. In addition, 8 networks in the top 20 fell in the “other” category for 2008, because they did not appear to have a direct affiliation with a major broadcaster, cable operator, or satellite provider. Some of the networks in this category, including the Food Network and HGTV network, which are owned by Scripps Networks, could be identified as independent networks. Other cable networks identified as independent networks in other studies, such as the Hallmark Channel and the NFL Network, did not fall into the top 20 cable networks by subscribership in 2008 or in previous years, so they were not included in our analysis. Combining ownership in both prime time broadcast programming and widely distributed basic cable networks, the major broadcasters have had an interest in a significant share of television programming over the last decade. 
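The four ownership categories used in this analysis can be expressed as a simple classification over owner affiliations. The sketch below illustrates the logic with a small, hypothetical set of network-to-owner mappings; the actual analysis drew on SNL Kagan ownership data for 1998 through 2008.

    # Minimal sketch of the four-way ownership classification described above.
    # The owner sets and network-owner pairs are illustrative placeholders.
    BROADCASTER_AFFILIATED = {"Walt Disney Company", "News Corporation", "Time Warner"}
    CABLE_OPERATOR_AFFILIATED = {"Comcast", "Cablevision", "Time Warner"}

    def classify(owner: str) -> str:
        broadcaster = owner in BROADCASTER_AFFILIATED
        cable = owner in CABLE_OPERATOR_AFFILIATED
        if broadcaster and cable:
            return "broadcaster and cable operator"
        if broadcaster:
            return "major broadcaster"
        if cable:
            return "cable operator"
        return "other"

    networks = {
        "Disney Channel": "Walt Disney Company",
        "CNN": "Time Warner",
        "Food Network": "Scripps Networks",
    }
    for network, owner in networks.items():
        print(f"{network}: {classify(owner)}")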
Independent producers have been a source for a smaller share of prime time broadcast programming. Cable operators without a major broadcaster affiliation are not a source of prime time broadcast network programming, and over the last decade their interest in the top 20 most widely distributed basic cable networks has decreased. However, they make programming decisions for the cable networks they own and determine which cable networks will be carried on their cable distribution systems. FCC annually reports on cable network programming variety and ownership as part of its video competition report, but the report does not assess the extent to which the sources of programming affect variety in television and selection choices for the public. Industry stakeholders we interviewed stated that the high cost of developing, producing, and distributing television programs is a significant factor that affects the availability of independent programming in broadcast television. According to television broadcast executives and representatives of independent producers, developing and producing broadcast television programs is costly and financially risky. For example, one report estimated that major broadcasters spent about $120 million for the 1997-1998 season to develop 49 drama pilots and used 14 in their schedules, of which 1 program returned for a second season. Moreover, according to television broadcast executives, once programming is developed, the costs to produce a scripted drama or comedy program range from about $21 million to $48 million for 21 program episodes per season, with no guarantee that a program will continue to be produced for another season. Producers need to sell their program ideas to major broadcasters and secure financing to cover the costs of developing and producing scripted television programs. Because of their large size and access to capital, major broadcaster-affiliated studios and other large unaffiliated studios often have the ability to finance development and production costs. However, representatives of independent producers stressed that it is difficult for them to obtain financing for development and production costs, and oftentimes they must secure financing through the major broadcaster-affiliated studios. The independent producers said that because major broadcasters have the ability to finance production costs and make programming decisions, seven or eight companies end up controlling a significant portion of the program content on television. When selecting programming for prime time, television broadcast executives told us that they strive to air programming that will achieve high ratings. Advertisers will generally pay more for programs that achieve higher ratings, and since major broadcasters rely on advertising revenue, it is in their financial interest to select programs that will attract the large audiences that drive advertising revenue. Television broadcast executives and an academic expert we contacted stated that they also consider quality for prime time programming, and not necessarily the source of programming (i.e., whether the program was produced by an independent producer or an affiliated production studio). They said quality programming will attract the largest share of viewers, which, in turn, drives advertising revenue.
Further, they stated that since advertisers spend less overall during times of economic downturn and have multiple choices for their advertising dollars (such as on cable television and the Internet), it is all the more essential to have quality programming to attract the advertisers. While television broadcast executives said that it is the quality, not the source, of programming that influences the selection of prime time programming, major broadcasters are, nevertheless, financially invested in the affiliate-produced programs and stand to gain additional profits if the affiliated programming makes it to syndication. Consequently, some stakeholders said broadcasters might choose their own programming over that of independent producers. In particular, according to an academic expert and representatives of independent producers, if both major broadcaster-affiliated studios and an independent producer offer similar genre and programming content to a major broadcaster, the major broadcaster will select the program from its affiliated studio over an independent producer because of these financial interests. As we previously noted, major broadcaster-affiliated studios (5 companies) produced 82 percent of prime time programming in the fall 2009 prime time schedule. While independent producers most likely would be unable to produce and distribute programming without some financial arrangements with major broadcasters, they said working under the major broadcasters’ control could cause them to lose creative control of the program’s content, with the writing of the program being directed by the studio bearing the financial risk of production. For example, an independent producer cited the replacement of a writer for CBS’s The Education of Max Bickford, a drama on the major broadcaster’s 2001 prime time schedule, when creative differences arose with the major broadcaster that owned the program. For carriage on cable television, stakeholders cited (1) economic factors, (2) finite capacity, and (3) federal law as affecting carriage of new independent networks. Economic factors. Representatives of independent networks and some video providers said economic factors affect carriage of new independent networks and their programming. According to video providers, it is difficult to determine the cost and value of new independent networks and how many subscribers will be gained based on concepts and business plans of unproven independent networks. Representatives of independent networks we contacted and a study we reviewed indicated that a new network usually faces considerable uncertainty as to whether it will be distributed by a sufficient number of video providers to make its operations viable. Similarly, an academic study indicates that for new networks, there is a high cost to sustaining operations while attracting a sufficient number of video providers and their subscribers. For instance, one report stated that cable network Fox News Network had invested over $150 million by the time it launched in 1996, but it was expected to lose up to $400 million in the next 5 years. Representatives of independent networks told us that it is difficult to obtain financing for a new cable network because commercial banks want a network to secure carriage with a major cable company, such as Comcast, before extending financing to it. 
By contrast, cable networks developed by cable operators, major broadcasters, or other media companies generally have readier access to financing for their development than do new independent networks. As our analysis indicated, major broadcasters and their affiliated companies owned at least half of the most widely distributed cable networks. Basic cable networks that are affiliated with cable operators, major broadcasters, or other media companies can negotiate carriage of an affiliated cable network as part of an agreement for carriage of an established affiliated network. For example, the Walt Disney Company owns ESPN, SoapNet, and ABC Family cable networks, along with ABC. According to representatives of small cable operators, during the course of negotiating for carriage for ESPN, they must also carry ESPN's spin-off cable networks, including ESPN2 and ESPNEWS. In another example, a new cable network—Wedding Central—that is affiliated with cable operator Cablevision was launched in August 2009 on its distribution system. Finite capacity. Stakeholders also cited finite capacity in the cable system infrastructure of some video providers as a technical issue that affects selection and availability of independent programming. Representatives of video providers we contacted commented that although their overall capacity to carry television programs has expanded with advanced technology, it remains finite. Because cable operators and telecommunications companies offer a wide array of services over their broadband networks, they must determine how to allocate their systems' capacity among these multiple services. Representatives of cable operators and television broadcast executives told us that adding another cable network—independently produced or otherwise—when more than 75 already exist in basic cable might not be considered the most efficient use of cable operators' resources and capacity. For example, given the demand for high-speed Internet services, cable operators told us they want to ensure they are using the finite capacity of their systems efficiently to be able to meet that demand. Despite the constraints on capacity in the cable system infrastructure, representatives of video providers and television broadcast executives we spoke with noted that alternative distribution platforms, such as online video streams, have provided more outlets and opportunities for independent programming. For instance, in 2007, two independent producers produced a television drama called Quarterlife, which was aired on the social network Web site MySpace.com. On the other hand, television broadcast executives and representatives of independent producers we contacted commented that although the Internet provides the opportunity for distribution of independent programming, it does not translate to success with regard to attracting the number of viewers that television offers. Federal law. Stakeholders cited, and studies have reported, that FCC rules and regulations implementing certain federal statutes can also influence programming decisions. Retransmission issues. As we previously mentioned, representatives of some video providers stated that the business practice of bundling networks—meaning that certain networks are sold as a package with broadcast networks rather than being sold individually—may occur during negotiations between broadcasters (which can include major networks and local stations) and video providers for retransmission rights.
Such bundling influences video providers’ carriage decisions and limits their ability to select independent programming. In 2004, we reported that because the terms of retransmission agreements often include the carriage of major broadcaster-owned cable networks, cable operators sometimes carry cable networks they otherwise might not have carried. Representatives of some video providers told us recently that this practice also fills their systems’ capacity, leaving less capacity for independent cable networks and making it difficult for independent cable networks to gain carriage. Television broadcast executives, on the other hand, commented that negotiations in lieu of invoking the retransmission rule may be necessary for them to be fully compensated for their content. As part of its annual report on the status of competition in the delivery of video programming, FCC is currently seeking data and analysis on implementation of the retransmission consent rules. FCC also has a separate proceeding specifically looking at revisions to the retransmission consent rules and whether it would be appropriate to preclude the practice of programmers tying desired programming with undesired programming, such as tying carriage of a major broadcaster-owned cable network to retransmission conditions for a broadcast signal. The comment period for the notice closed in December 2007, and FCC officials are currently reviewing comments. Program carriage rule. Representatives of independent cable networks and public interest groups stated that although the program carriage rule is needed to promote independent programming, FCC criteria for determining discrimination on the basis of affiliation are unclear. They told us more precise standards for proving discriminatory or exclusionary conduct by cable operators as well as the establishment of a time frame for FCC to determine whether the complaining independent cable networks have sufficient evidence to proceed to a hearing would make the rule more effective. According to independent cable network representatives, some independent cable networks have waited over a year before FCC determined whether it would conduct a hearing. Because the independent cable network is not being carried by the defendant cable operator in the interim, some independent cable networks can go out of business before a decision is made. Representatives of cable operators, on the other hand, stated that the rule is not necessary because a cable operator’s decision to reject a network could be based on the program quality and similarity of content and not on the ownership of a network. Leased access rule. In the case of the leased access rule, a public interest group official indicated that this rule has not achieved what it was intended to do because the prices for leased access were set too high. Representatives of cable operators explained that the rule forces cable operators to carry programming even if they believe the channel does not bring much value to the subscribers. Representatives of cable operators cited home shopping channels as an example of programming that relies on leased access to gain carriage. The rule also affects the cable operators’ ability to carry other programming because the set-aside channels consume capacity that could be used for other programming. Cable operators also noted that the rule does not apply to satellite providers and their systems. 
In selecting radio station formats and music playlists, stakeholders we interviewed stated that (1) advertisement revenue, (2) cost of programming, and (3) market competition are key economic factors that influence programming decisions in commercial radio. Advertisement revenue. Commercial radio stations are primarily funded by advertisement revenue obtained from selling radio time to companies seeking to reach specific demographic segments. Radio station owners and experts told us that when making decisions about format and playlist selection, program directors will consider the number of listeners that programming will likely attract, and, in turn, the advertisement revenue they may earn. The rates that a station obtains for advertising time depend on the station’s ability to attract listeners within the advertisement companies’ target demographic segment, the length of the advertisement spot, and the size of the market, with larger markets typically receiving higher rates than smaller markets. Radio stations compete for listeners and advertising revenue with other stations within their respective local markets. Consequently, radio stations continuously examine their programming content to try to attract an audience that is highly desirable to advertisers. In particular, a radio station’s format enables it to target specific segments of listeners sharing demographics that appeal to advertisers. According to a radio industry expert, if the advertising market is not interested in reaching the specific target audience of a music format, the station will not be able to survive economically because it will not be able to gain enough ad revenue. Moreover, radio station owners with stations in different markets but of the same format can be more effective at attracting revenue from advertisers who want to reach a similar demographic in multiple markets. Cost of programming. Another economic factor that influences programming decisions is the cost to produce radio content. For example, radio station owners and experts told us that increased costs and decreased advertisement revenue over the past decade have led to an increase in the use of voice tracking and syndicated programming. According to radio station owners and experts, voice tracking is less costly than producing shows for individual markets, and to save programming costs, some stations choose to import programming from another market during peak listener times rather than hire their own radio personalities. In addition, radio industry experts pointed out that historically, stations in small markets have generally relied on nationally syndicated programming to bring in marketable talent that will allow them to compete with other stations in the market. Some stakeholders have expressed concern that voice tracking and syndicated programming are replacing local programming and therefore the needs and interests of the local community are not being reflected by the voice-tracked or syndicated programming. However, representatives of radio station owners have stated that there is no evidence that voice tracking or syndicated programming diminishes localism. For example, one station owner pointed out that the value of programming is determined by how strongly it resonates with listeners, regardless of where it originates. Market competition. Marketplace factors, such as the extent of competition in a given market, also affect programming decisions. 
For example, radio station owners stated that when radio station program directors are trying to determine a station’s format, they will consider what formats are currently available in the local market and what formats are missing. If there are already stations programmed with a popular format in a given market, a radio station will likely look to competitively differentiate itself by selecting a format targeted toward a demographic that is not currently being served. In doing so, a station may also better compete for audiences and advertising revenues with other media. Experts and representatives of independent producers told us that radio station formats have become more specific in recent years in an attempt to enable stations to target a specific demographic and attract advertisers, and as a result, radio station formats have changed over time. According to radio station owners, the number of radio station formats has increased. Representatives of radio station owners conducted a study examining radio station formats, and found that from 2001 to 2005, the number of radio station formats increased by 7.5 percent. Station owners have characterized this increase in the number of formats as an increase in variety in radio programming. However, some experts and representatives of independent producers have noted that formats with different names often have similar playlists, diminishing real variety among those formats. For example, one expert noted that it is very difficult to discern differences in playlists between radio formats such as Rock and Light Rock. Stakeholders stated that in both commercial and public radio, programming decisions such as selection of format and music playlists are based on the interests of listeners in a given market. Radio station owners in both commercial and public radio reported that program directors will conduct research related to the demographics and preferences of the listeners in their markets to ensure they are meeting the needs of their community. In commercial radio, understanding the interests of listeners in a given market is important for the station to attract a large audience and, as previously noted, attract advertising revenue. According to radio station owners, program directors are expected to be familiar with music interests in their markets and make programming decisions that will be successful in reaching an audience within their market. A stakeholder also noted that even among similarly formatted radio stations, the playlist will vary to meet the needs of the local market. For example, the type of country music that is popular in Tucson, Arizona, can be very different from popular country music in New York City. According to our analysis, in 2009, the 10 most common formats across all national radio stations included Country, News, Christian, Adult Contemporary, Oldies, Sports, Christian Contemporary, Variety, Classic Rock, and Talk, as shown in figure 6. We found that within selected individual markets, the top radio formats differ from the top radio formats nationally, indicating that programming decisions are locally based on the preferences and interests of listeners within a given market. For example, the most popular radio station formats in New York City (the largest Arbitron market) include 5 formats not reflected in the top 10 national radio formats (Alternative, Spanish, Contemporary Hit Radio, Ethnic, and Adult Album Alternative). 
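A format ranking of the kind described above amounts to a frequency count of stations' primary format designations, computed nationally or within a single market. The sketch below shows the tally with made-up format lists; the actual analysis used 2009 BIAfn format data.

    # Sketch of the primary-format tally used to rank the most common radio
    # formats nationally and within a market. Format lists are hypothetical.
    from collections import Counter

    national = ["Country", "News", "Country", "Talk", "News", "Country", "Sports"]
    new_york = ["Spanish", "Ethnic", "News", "Spanish", "Alternative", "News"]

    for label, formats in (("national", national), ("New York", new_york)):
        print(f"top formats, {label}: {Counter(formats).most_common(3)}")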
In addition, we found 19 percent of all stations in the New York market were designated as Ethnic and Spanish formats compared with 7 percent nationwide, suggesting that programming decisions among radio stations in this market reflect the demographics and interests in the market. By comparison, in Chicago, Illinois (the third-largest Arbitron market), we found that 11 percent of stations in this market were designated as Ethnic and Spanish formats. Furthermore, formats that were among the most popular in Chicago but not in New York included Christian, Talk, and Rock (see fig. 7). As is the case in commercial radio, representatives of public radio reported that programming decisions are locally based on the preferences and interests of listeners within a given market; however, they said their community service orientation also influences programming decisions. Representatives of public radio explained that local public stations select their own formats and determine their own audience strategies based on their understanding of local community needs, and their role in serving those needs. They also said the cost of programming is a final consideration for public radio stations after quality- and mission-related factors are considered. In addition, representatives of public radio noted that public stations generally play music from artists that are signed to small, independent labels. Independent labels generally seek out a station if the station’s format includes music similar to that of the labels, and will then establish relationships with such stations. On the basis of our review of 2009 format data for commercial and public radio stations, we found that the top 10 formats in public radio differ from the top 10 formats in commercial radio (see fig. 8). Only two formats (News and Spanish) were among the top 10 formats in both commercial and public radio. Stakeholders that we interviewed generally agreed that since 1996, the number of stations owned by a single radio station owner has increased; however, viewpoints varied about the extent to which consolidation has affected programming decisions. Experts and representatives of independent producers we contacted stated that the elimination of the radio ownership limits in 1996 resulted in an increase in the number of stations owned by a single station owner nationally and in local markets. Independent producers have reported that the radio station holdings of the 10 largest radio station owners have increased significantly. On the basis of our analysis, we found that the share of commercial stations owned by the top 10 station owners did increase, from 4 percent in 1996 to 20 percent in 2009. However, throughout that period, the top 10 radio station owners did not own more than 21 percent of all commercial stations, as shown in figure 9. In addition, we analyzed data for the top 10 national radio station owners in 2009 and found that for most owners (7 out of the 10 owners), stations’ formats were differentiated within individual markets. For example, Clear Channel—the largest radio station owner—owns multiple radio stations in 148 Arbitron markets. We found that in most of those markets (72 percent), Clear Channel programmed its stations with different formats, while in 28 percent of those markets some stations were programmed with the same format. 
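The within-market comparison for commonly owned stations reduces to grouping stations by owner and market and flagging any market in which two or more of an owner's stations share the same primary format. The following is a minimal sketch of that check using hypothetical station records; the actual analysis applied the same logic to BIAfn station-level data for the top 10 owners.

    # Sketch of the within-market format overlap check described above.
    # Records are (owner, market, call sign, primary format); all hypothetical.
    from collections import defaultdict

    stations = [
        ("Owner X", "Market 1", "WAAA", "Country"),
        ("Owner X", "Market 1", "WBBB", "News"),
        ("Owner X", "Market 2", "WCCC", "Country"),
        ("Owner X", "Market 2", "WDDD", "Country"),  # same-format overlap
    ]

    formats_by_market = defaultdict(list)
    for owner, market, _call_sign, fmt in stations:
        formats_by_market[(owner, market)].append(fmt)

    overlapping = 0
    for (owner, market), formats in sorted(formats_by_market.items()):
        overlap = len(formats) != len(set(formats))
        overlapping += overlap
        print(f"{owner}, {market}: {'overlap' if overlap else 'differentiated'}")
    print(f"share of markets with overlap: {overlapping / len(formats_by_market):.0%}")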
As illustrated in table 3, among the station owners that we reviewed, those with the highest percentage of overlap among radio stations in the same market included American Family Association (78 percent), Cox Radio (56 percent), and Educational Media (56 percent). We also found that 75 percent of the markets where format overlap did exist included large markets with 30 or more radio stations. Radio station owners and representatives of independent producers offered different perspectives on how consolidation in the radio industry has affected programming decisions nationally and in individual markets. On one side, radio station owners and experts told us that to remain financially viable, stations have had to eliminate duplicative operating and overhead expenses and establish a business model where one program director is responsible for programming decisions for multiple stations. Some station owners added that program directors overseeing programming decisions for stations in multiple markets make decisions based on the interests of listeners within the individual markets. Further, radio station owners and experts have reported that common ownership of multiple stations in a single market benefits the audience in that market, as the station owner will choose to diversify formats among its stations to attract a large share of the listening audience in the market. Another viewpoint expressed by representatives of independent producers and experts is that the increased consolidation has changed the stations' decision-making structure, with large companies using centralized methods that homogenize programming decisions across markets. According to this view, as jobs are consolidated when one entity owns multiple stations, one program director may make similar programming decisions across multiple stations in different markets. The independent producers said that as a result, playlists of radio stations owned by the same owner will overlap. Studies conducted by representatives of independent producers and academic experts examined playlists of radio stations owned by the same owner across all markets and found overlap in playlists of stations with the same format. For example, a December 2006 study published by the Future of Music Coalition found examples of overlap among playlists of individual stations owned by the same company in different markets—such as an overlap for the playlists of two country stations located in different markets (WQRB-FM in Eau Claire, Wisconsin, and WRWD-FM in Poughkeepsie, New York). However, the study did not examine overlap and differences among playlists of owners' radio stations in the same market. A January 2006 study conducted by an academic expert also examined playlist data for each owner's radio stations and found that the playlists of radio stations in different markets overlapped, but that the playlists of radio stations in the same market were different. We provided a draft of this report to FCC for official review and comment. FCC provided technical comments that we incorporated where appropriate. FCC's written comments appear in appendix III. We will send copies of this report to the Chairman of the Federal Communications Commission and appropriate congressional committees. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To obtain information on the extent to which sources of programming in television have changed over the last decade, we analyzed available data on two major sources of television programming—companies producing prime time broadcast television programs and companies with cable channel ownership interests during the last decade. We focused on programs broadcast during prime time because that block of time generally attracts the most television viewers and, in turn, generates the most advertising revenue for networks. To determine which companies produced prime time broadcast television programming, we used a previous Federal Communications Commission (FCC) study and the International Television & Video Almanac to classify prime time programs into two categories: (1) programs produced by major broadcasters, and (2) programs produced by independent production companies not affiliated with a major broadcaster (independent producers). We analyzed the fall prime time schedules in 2002, 2005, 2008, and 2009 and classified them into the two categories. We selected these years because annual data that tracked production information in the two categories were limited; FCC's previous study contained data in the two categories for 2002. We then conducted our analysis for every third year (2005 and 2008) using the Almanac's production company information for each television program in that year's debut fall broadcast prime time schedule and classified the programs into the two categories. We also analyzed the Almanac for the 2009 fall prime time schedule to provide the most current data available. Additionally, since basic cable networks are also a source of television programming, we analyzed the ownership of those networks as an indicator of which entities control the television programming on the networks. To determine cable network ownership over the last decade, we used data from SNL Kagan, which show companies having an ownership interest in each of the basic cable networks from 1998 to 2008. We analyzed these data to determine the types and the number of companies that have had an ownership interest in basic cable networks and the companies that owned the largest number of networks during this period. To analyze cable network ownership for the most widely distributed networks, we used the 20 basic cable networks with the most subscribers (the top 20 networks) from 1998 to 2008 and classified the networks into one of four categories: (1) networks owned by major broadcasters, (2) networks owned by video providers, (3) networks owned by both major broadcasters and video providers, and (4) networks owned by other types of companies. We also examined the top 20 networks in 2008 for any independent cable networks; that is, any network that did not have an affiliation with a major broadcaster or video provider, or an affiliation with a major holding company with media interests.
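The two-category classification of prime time programs described above leads directly to a share calculation over program hours. The sketch below illustrates the computation with placeholder schedule entries; the actual analysis classified each program in the Almanac's fall schedules for the selected years.

    # Sketch of the prime time share computation described above: sum hours by
    # producer category and express each category as a share of total prime
    # time hours. Schedule entries are hypothetical placeholders.
    schedule = [
        {"program": "Drama A", "hours": 22, "category": "broadcaster-affiliated"},
        {"program": "Drama B", "hours": 22, "category": "broadcaster-affiliated"},
        {"program": "Comedy C", "hours": 11, "category": "independent"},
    ]

    total_hours = sum(entry["hours"] for entry in schedule)
    shares = {}
    for entry in schedule:
        shares[entry["category"]] = shares.get(entry["category"], 0) + entry["hours"]

    for category, hours in shares.items():
        print(f"{category}: {hours / total_hours:.0%} of prime time hours")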
To determine the factors and conditions that stakeholders identified as affecting the availability of independent programming in television and factors that influence radio programming decisions, we interviewed or obtained written comments from a variety of experts and industry stakeholders, including academics, industry representatives, media companies, and public interest groups (as shown in table 4) to obtain their views on the factors that affect the availability of independent programming in television and radio. We selected the experts and stakeholders based on relevant published literature, including FCC filings and reports, stakeholders' recognition and affiliation with a segment of the media industry (i.e., cable operators, satellite providers, broadcasters, radio station owners, independent radio advocacy groups, and so forth), and other stakeholders' recommendations. In our selection of experts and stakeholders, we intended to obtain balanced and diverse views; we did not weight experts' and stakeholders' views but grouped similar stakeholders that represent a segment of the media industry. We conducted semistructured interviews and analyzed the responses to determine patterns and the extent to which the experts and stakeholders agreed on the key factors affecting independent programming and radio programming decisions. We also spoke with FCC officials and reviewed the relevant laws, regulations, literature, comments filed by stakeholders in various FCC proceedings, FCC studies, and FCC-sponsored research on television and radio programming. In addition, for radio, we examined radio station formats, which indicate the genre and types of programming a station might play, such as Adult Contemporary, Country, News, Sports, and Talk. We obtained historical data on the distribution of radio stations by their primary formats nationwide and in local markets from 1999 to 2003 and format data from the Broadcast Investment Analyst Financial Network's (BIAfn) Media Access Pro Database, containing station-level data for commercial and public radio stations in the United States from 2004 to 2009. Although the BIAfn format data provide a general overview of the genre of programming aired on a given radio station, they do not identify specific programming content that is played on the station. We did not look at independently produced programming on radio because national playlist data identifying record label affiliation are not available. We analyzed the data to determine programming variety and distribution of radio stations by their format nationwide and in local markets in 2009. To highlight programming variety in local markets, we selected two radio station markets, New York and Chicago, and analyzed the format data of radio stations in those markets and compared them with national radio station format data in 2009. We selected New York and Chicago because these two markets are similar in size (New York is the largest market, and Chicago is the third-largest market) but have different demographic populations. In addition, each market contains both commercial and public stations and both FM and AM stations, and each market has multiple radio station owners. To highlight similarities and differences in programming variety among commercial and public stations, we examined 2009 format data for commercial and public radio stations and identified the top 10 most popular formats (based on the number of stations with the particular formats available) for each group nationwide.
Finally, to examine programming variety for each owner's radio stations and consolidation in the radio industry, we selected the top 10 radio station owners—that is, owners who own the most radio stations nationwide—and reviewed format data of stations owned by the top 10 owners. To identify the top 10 radio station owners in 1996-1998, 2000-2002, 2007, and 2009, we used data from FCC reports and the BIAfn database. The top 10 radio station ownership data were not available for 2003-2006 and 2008. Collectively, in 2009, the top 10 owners owned a total of 2,262 commercial radio stations, or 20 percent of all U.S. commercial radio stations. In addition, the top 10 owners owned stations that reach a 44 percent share of total Arbitron listeners in the United States and collect 52 percent of the radio industry's revenue. For each station owner, we then examined similarities and differences in formats among commonly owned radio stations in the same market. We also reviewed studies on radio programming for information on radio station playlists and the extent to which playlists for commonly owned radio stations overlap in the same market. To assess the reliability of the basic cable network data obtained from SNL Kagan and the radio data obtained from BIAfn used in our analysis, we (1) obtained information from the system owners on their data reliability procedures, (2) reviewed systems documentation, (3) reviewed data to identify obvious errors in accuracy and completeness, and (4) compared the data with information we obtained from other sources, including FCC studies. After reviewing the data sources, we determined that the data were sufficiently reliable for the purposes for which we have used them in this report. We conducted our work from May 2009 to March 2010 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product.

Twentieth Century Fox Film Corporation
American Broadcasting Companies Inc.
Arbitron Inc.
AT&T Intellectual Property II L.P.
CBS Broadcasting Inc.
Clear Channel Communications Inc.
Comcast Sports Management Services LLC
Consumers Union of United States Inc.
Cox Communications Inc.
Cox Radio Inc.
Directv Inc.
Edison Media Research Inc.
Entertainment Communications Inc.
ESPN Inc.
Hallmark Licensing Inc.
Home Box Office Inc.
Scripps Networks Inc.
Independent Film & Television Alliance Corporation
ION Media Networks Inc.
Lifetime Entertainment Services LLC
Viacom International Inc.
MynetworkTV Inc.
National Public Radio, Inc.
NBC Universal, Inc.
News Holdings Pty Ltd.
Saga Communications Inc.
Showtime Network Inc.
Disney Enterprises Inc.
Superstation Inc.
Television Food Network, G.P., et al.
Time Warner Inc.

In addition to the contact above, Sally Moino, Assistant Director; Amy Abramowitz; Brad Dubbs; Alana Finley; Bert Japikse; Delwen Jones; Jennifer Kim; Maria Mercado; and Andrew Stavisky made key contributions to this report.

The media industry plays a vital role in informing and entertaining the public.
Media ownership and the availability of diverse programming have been a long-standing concern of Congress. Despite the numerous programming choices in television and radio available to the public, some studies have reported that independently produced programming—that is, programming not affiliated with broadcast networks or cable operators—has decreased through the years. This requested report discusses (1) the extent to which the sources of television programming have changed over the last decade, (2) the factors industry stakeholders identified as affecting the availability of independent television programming, and (3) the factors industry stakeholders identified as influencing programming decisions in radio. To address these issues, GAO analyzed data from the Federal Communications Commission (FCC) and industry on sources of broadcast television programming in prime time (weeknights generally from 8:00 p.m. to 11:00 p.m.) and on companies owning cable networks, as well as radio format data to determine programming variety. GAO also reviewed legal, agency, and industry documents and interviewed industry stakeholders, public interest groups, and others. GAO provided FCC with a draft of this report for comment. In response, FCC provided technical comments that GAO incorporated where appropriate. The sources of broadcast and basic cable television programming have changed little in recent years. As a source of programming for prime time television, major broadcasters (ABC, CBS, Fox, and NBC) and their affiliated studios produced the majority of programming in each of the selected years that GAO analyzed. In particular, GAO found that major broadcasters produced about 76 to 84 percent of prime time programming hours. The remaining programming came from independent producers, which are not affiliated with the major broadcasters. Since basic cable networks are also a source of television programming, GAO analyzed the ownership of those networks as an indicator of which entities control television programming. On the basis of GAO's analysis of ownership of the 20 most widely distributed basic cable networks, major broadcasters and companies affiliated with both major broadcasters and cable operators have owned half or more of the top 20 cable networks in each year reviewed. When ownership of both prime time programming and basic cable networks is combined, the major broadcasters have controlled a significant share of television programming over the last decade. Stakeholders primarily cited economic factors as influencing the availability of independent television programming. In this regard, producers GAO contacted stated that developing and producing broadcast television programs is costly and financially risky. While funds need to be secured early in the development and production process to finance these costs, independent producers stressed that it is difficult to obtain financing for production costs. For cable television (viewed through a subscription video service), representatives of independent cable networks said a new network faces considerable uncertainty as to whether it will be distributed by a sufficient number of video providers (such as Comcast and DirecTV) to make its operations viable. By contrast, cable networks developed by cable operators or major broadcasters are able to negotiate distribution of the network with video providers as part of an agreement for distribution of an established affiliated network.
For radio, stakeholders cited economic factors, local community interests, and consolidation in the radio industry as influences on programming decisions. Among both commercial and public radio stations, stakeholders said that programming decisions are based on listeners' interests in a given market. GAO found that within two of the three largest local markets nationwide, many of the most common local radio formats differ from the most common radio formats nationally, indicating that programming decisions are affected by local community interests. Over the last 10 years there has been consolidation in the radio industry; however, stakeholders' opinions varied about the extent to which consolidation has affected programming decisions. While some studies show that consolidation has led to homogenized radio playlists in different markets nationwide, GAO's analysis shows diverse formats and preferences are reflected within individual local markets. |
Since FPS was created in 1971 as part of GSA, it has been responsible for providing law enforcement and related security services to all federal facilities held or leased by GSA. Specifically, FPS is responsible for, among other things, (1) hiring security guard contractors and overseeing contract guards deployed at federal facilities, (2) controlling access to federal facilities, (3) responding to incidents, (4) enforcing property rules and regulations, and (5) conducting criminal investigations and facility security assessments (FSA). To accomplish this facility protection mission and other responsibilities, as of October 2014, FPS had about 1,200 full-time employees located in its headquarters and 11 regional offices around the country. FPS also had about 13,500 contract security guards deployed at approximately 5,650 of the almost 9,000 federal facilities it protects. To fund its operations, FPS charges fees for its security services to federal tenant agencies in GSA-controlled facilities. For fiscal year 2014, FPS expected to receive $1.3 billion in fees. In the 1980s, some federal departments and agencies raised concerns that GSA was not providing quality building services, including the physical security provided by FPS, in a timely manner. In response, GSA's Administrator decided to establish a delegation of authority program that would primarily decentralize building services such as security and lease management. A 1985 Executive Order also directed GSA to delegate its building operations authority to tenant agencies when it was feasible and economical. To make this determination, GSA required agencies to maintain program and financial data, which GSA reviewed to determine whether to grant a delegation. When FPS transferred from GSA to the Department of Homeland Security (DHS) in 2002, this delegation of authority program also transferred. Under the program, FPS is responsible for reviewing delegations for law enforcement and security services and determining, based on cost and capabilities analyses, whether it is in the best interest of the government to authorize another department or agency to protect a federal facility instead of FPS. FPS also is responsible for ensuring that these delegated facilities are protected in a manner consistent with the Interagency Security Committee's (ISC) standards. A law enforcement delegation of authority authorizes an agency to enforce federal laws and regulations aimed at protecting the agency's federal facilities identified in the delegation and the employees and public who work in and visit those facilities; to conduct investigations related to offenses against the property and persons on the property; and to arrest and detain persons suspected of federal crimes. A delegation of authority for security services typically authorizes an agency to manage its own contract guard program at the specified federal facilities, including awarding and administering contracts and ensuring that guards are properly trained and certified to protect those facilities. An agency may also receive a delegation of authority for both law enforcement and contract guard services. Delegations of authority are generally granted for about 2 to 5 years, but the expiration dates for some existing delegations are not specified, or the delegation indicates that it will continue until terminated by FPS, according to FPS officials. In response to congressional direction, in November 2012, FPS issued its Interim Plan, which outlines its current process for reviewing delegations of authority.
This process, which is managed primarily by FPS headquarters staff (one full-time employee and three part-time employees) in coordination with its 11 regional offices, includes four phases. During the first phase, which began in 2010 and is still ongoing, FPS has focused on identifying delegations of authority that were primarily granted when FPS was part of GSA, because FPS at that time did not have a centralized recordkeeping system. As part of this identification process, FPS contacted its 11 regions and GSA to determine if they had copies of delegations. In addition, in some instances, FPS obtained information about an existing delegation from agencies that were granted such authority. FPS uploaded the information it collected from these delegations into an electronic database. During the second phase, the Interim Plan calls for FPS to conduct cost and capabilities analyses to determine whether to renew or rescind an existing delegation or grant a new one. To perform the cost analysis, FPS developed a cost estimation model, which establishes a standardized process for assessing the financial impact of each delegation of authority. As part of this cost analysis, FPS compares its and the delegated agency's costs of providing law enforcement or security services. For example, to estimate the current resources expended by the delegated agency and to determine the cost that FPS would be expected to incur if the delegation were rescinded, FPS reviews data on the amount it would spend and the amount the agency currently spends on various cost elements, such as salaries and benefits; guards' training and certification; law enforcement equipment (e.g., computers, uniforms, and mobile radios); and mega-center (dispatch center) services. In addition, information about the FSA; countermeasures (e.g., contract security guards and K-9 officers); training; services; and equipment (e.g., ammunition, cell phones, and office supplies) is also required to be entered into the cost estimation model. To conduct a capabilities analysis, FPS determines whether services—such as acquisition of guard services, training, criminal investigations, guard oversight, and a mega-center—are in place at the delegated facility; how those services are provided and resourced; and whether FPS can provide those services on a reimbursable basis and, if so, how much it would cost. According to the Interim Plan, after completing the cost and capabilities analyses, FPS recommends to DHS's Under Secretary for the National Protection and Programs Directorate (NPPD) whether a delegation should be granted, renewed, or rescinded. The Under Secretary then makes the final decision and notifies the agency requesting a delegation of authority. For delegations that are rescinded, FPS's Interim Plan requires an orderly transition of law enforcement or guard services so that there is no lapse in the protection of the facility. For delegations that are granted or renewed, FPS retains responsibility for overseeing the delegations and will conduct periodic inspections to ensure that the delegated facilities are protected in a manner consistent with its contract requirements and federal physical security standards. In September 2014, FPS drafted a directive that establishes its policy and procedures and assigns responsibilities for law enforcement and contract security guard delegations of authority.
Among other things, the draft directive provides further detail on the roles and responsibilities of FPS headquarters and regional staff in reviewing delegations of authority and on how FPS plans to verify that existing delegations are active, meaning that they have not expired and that the delegated facilities are still occupied. The draft directive also requires any agency requesting a delegation to complete a self-assessment of its security services and provide FPS with a copy of the most recent facility security assessment. As of January 2015, FPS had not set a timeframe for finalizing and implementing the draft directive. FPS's delegations of authority program does not fully meet applicable federal standards we identified for effective program management. FPS lacks reliable data, as called for by the federal Standards for Internal Control in the Federal Government, for accurately identifying the total number of delegations it is responsible for managing. In addition, FPS's model for estimating the costs associated with a delegation does not fully align with the relevant leading practices outlined in GAO's Cost Guide. Without fully meeting these standards and leading practices, FPS cannot ensure that its decisions to grant, renew, or rescind delegations of authority are based on sound data and that security resources are allocated efficiently and in a manner that leads to effective protection of federal facilities. FPS lacks reliable data for identifying the total number of delegations of authority it has granted. Specifically, FPS has not established a reliable baseline for the number of delegations of authority that have been granted since the 1980s and remain active and, thus, does not know how many it needs to review and oversee to ensure that law enforcement and security services are provided at these federal facilities. The federal Standards for Internal Control state that federal agencies should have relevant, reliable, and timely information for decision-making and external-reporting purposes. As previously discussed, in its Interim Plan, FPS reported that it had granted over 300 delegations to approximately 30 federal departments and agencies. During the course of our engagement, FPS began verifying these data in accordance with criteria it outlined in its September 2014 draft directive. According to the draft directive, FPS should exclude from the list of 300 delegations of authority identified in the Interim Plan those delegations that had expired or where the delegated agency no longer occupies the facility. FPS officials also told us that rescinded delegations of authority should be excluded. Based on its verification process, FPS officials stated that only 62 of the 300 delegations of authority identified in the Interim Plan were active delegations as of October 2014. However, we reviewed the 62 delegations of authority FPS verified and, based on FPS's criteria for excluding delegations, found that 12 were improperly included. Although FPS's verification process was to exclude expired delegations, we found that 11 of the 62 delegations of authority it identified as active had expired, including 3 that had expired almost 20 years earlier, when the delegated agencies were still responsible for protecting their own facilities. These 11 delegations of authority were granted to 6 departments and agencies (the Departments of Commerce, Health and Human Services, Defense, State, and Treasury, and the Social Security Administration) to protect 81 facilities.
Although rescinded delegations are to be excluded, we found that FPS's validated data included a delegation that was granted to the Nuclear Regulatory Commission (NRC) but was rescinded in October 2013. That delegation also should have been excluded from FPS's validated data because it related to four facilities that NRC officials explained they had not occupied in about 20 years. Our analysis demonstrates that while FPS continues to gather information on all existing delegations of authority, it has not established effective internal controls, such as procedures to ensure that the data on its delegations are reliable. FPS officials stated that FPS lacks reliable data on its delegations of authority due, in part, to poor recordkeeping for existing delegations. FPS officials also said that they have worked with GSA and FPS regional offices to identify documentation of existing delegations of authority but acknowledged that this approach may not have resulted in an accurate accounting of existing delegations of authority. Without reliable data on existing delegations of authority, FPS will face challenges effectively managing its delegations of authority program. In addition, the lack of reliable delegation data makes it difficult for FPS to ensure that delegated facilities are protected in a manner consistent with federal physical security standards and to provide its stakeholders with accurate and timely information for decision-making and external-reporting purposes. The cost estimation model that FPS is using to analyze the costs of providing law enforcement or security services does not fully align with the leading practices identified in GAO's Cost Guide. These leading practices are the basis for developing high-quality, reliable cost estimates and help ensure that cost estimates are comprehensive, well-documented, accurate, and credible. For example, following these practices should result in cost estimates that can, among other things, be replicated and updated. According to the Cost Guide, these leading practices can guide government managers as they assess the credibility of a cost estimate for decision-making purposes for a range of programs. We have previously reported that while the Cost Guide focuses on developing cost estimates for government acquisition programs, the leading practices are generally applicable to cost estimation in a variety of circumstances, including assessing an agency's cost estimating model. Accordingly, we applied the Cost Guide's leading practices to FPS's cost estimation model. Given that FPS's Interim Plan discusses the cost estimates developed with its cost model as one of the major criteria FPS uses to determine whether a delegation of authority should be granted, renewed, or rescinded, and given the importance of that decision for providing efficient and effective law enforcement and security services at federal facilities, we believe that ensuring the reliability of the cost model's estimates is paramount. We found that FPS's cost estimation model partially aligned with practices for producing comprehensive estimates and minimally aligned with those for producing well-documented and accurate estimates. Furthermore, the model does not align with practices for producing credible cost estimates. Table 1 shows our overall assessment of FPS's cost estimation model compared with the four characteristics. Appendix II provides greater detail on our comparison of FPS's model with the leading practices identified in GAO's Cost Guide.
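To make the comparison concrete, the sketch below shows one way a cost comparison and a simple one-at-a-time sensitivity check (a leading practice the model lacks) could be structured. It is a minimal illustration of the general technique, not FPS's actual model; the cost elements and dollar figures are hypothetical.

```python
# Hypothetical annual cost elements for one delegation, in dollars.
# Neither the elements nor the figures are FPS's actual data.
fps_costs = {"salaries and benefits": 2_900_000, "guard training": 600_000,
             "equipment": 400_000, "dispatch services": 800_000}
agency_costs = {"salaries and benefits": 2_500_000, "guard training": 300_000,
                "equipment": 350_000, "dispatch services": 450_000}

def total(costs):
    return sum(costs.values())

# Core comparison: the difference between what FPS would spend and what
# the delegated agency currently spends.
delta = total(fps_costs) - total(agency_costs)
print(f"FPS would cost ${delta:,} more per year than the delegated agency")

# One-at-a-time sensitivity check: vary each FPS cost element by +/-20
# percent while holding the others constant, to see which element most
# affects the comparison.
for element, base in fps_costs.items():
    for factor in (0.8, 1.2):
        varied = dict(fps_costs, **{element: base * factor})
        diff = total(varied) - total(agency_costs)
        print(f"{element} x{factor}: ${diff:,.0f}")
```

Running a loop like this immediately shows which cost element drives the grant-or-rescind comparison, which is the information a sensitivity analysis is meant to provide.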
A model for developing cost estimates is considered comprehensive if, among other things, it accounts for all possible costs over an appropriate period of time and is based on documentation that defines the program and is technically reasonable, as shown in table 1. FPS's model partially aligns with these leading practices for developing comprehensive cost estimates. For example, FPS's model examined the costs associated with a delegation of authority over a 5-year period, which we found to be sufficient for the purposes of FPS's making a decision on a delegation. In addition, an FPS official told us that the technical inputs for estimating security costs in the model are based on an FSA. However, FPS's Interim Plan does not require that an FSA be conducted prior to or as part of the delegation review process. We found that FPS also did not conduct, or require the agency to obtain, an FSA for the six requests for new or renewed delegations we analyzed involving the Departments of Commerce, the Interior, and State; the Federal Trade Commission (FTC); and NRC before determining whether those departments and agencies should be authorized to protect their facilities. During the course of this engagement, FPS included such a requirement in its draft directive, but FPS officials did not know when the draft directive would be finalized. As a result, FPS's cost estimation model may not have a solid technical basis for estimating security costs, a limitation that can compromise the quality of the cost estimate and affect FPS's ability to make sound decisions on whether to grant, renew, or rescind a delegation. Appendix II provides greater detail on our comparison of FPS's model with the leading practices of a comprehensive cost estimate identified in GAO's Cost Guide. A model produces a well-documented cost estimate when, among other things, it (1) includes documentation on the source data, (2) clearly details the model's calculations and results so that the results can be replicated, and (3) provides explanations for choosing a particular methodology, as shown in table 1. FPS's model minimally aligns with these leading practices for producing well-documented cost estimates. For example, FPS provided documentation on some of the sources of data that are programmed into the model, such as the sources for cost data on K-9 services and vehicles. The model also provides some steps that allow an estimate to be replicated, such as including mathematically logical formulas for its calculations. However, FPS's model did not include documentation on the sources of other cost data, such as those related to training programs or career development, how it assessed data reliability, or how the data were normalized. In addition, the model's documentation did not describe the methodology FPS uses to develop a cost estimate, including a description of the methods or the costs used in its summary of the estimate. Without clear documentation of the data and methodology used by a model, it is difficult for a cost analyst to replicate the results and ensure that FPS's model and process are producing reliable cost estimates based on quality data and methods. Appendix II provides greater detail on our comparison of FPS's model with the leading practices of a well-documented cost estimate identified in GAO's Cost Guide.
A cost estimation model should, among other things, include an uncertainty analysis (a way to assess variability in an estimate to reflect unknown information that could affect cost), be updated regularly to reflect changes to the current status, and be based on a historical record of costs and actual cost data, as shown in table 1. FPS's model minimally aligns with these leading practices for producing accurate cost estimates. For example, the model's calculations were based on formulas that allowed any changes, such as those related to the security requirements or the security costs of the agency requesting the delegation, to be quickly updated. However, FPS's model and process do not include an uncertainty analysis to determine where a cost estimate falls within the range of possible costs. Without assessing the level of confidence associated with an estimate, an agency may not have adequate contingency funding available if the actual costs exceed the estimate. In addition, the model does not document any use of historical costs. Historical data can provide insight into actual costs, such as the security costs associated with protecting similar facilities. Without including these elements of the leading practices for accuracy, the model may produce cost estimates with biased results, impeding management's ability to make sound decisions when reviewing a delegation. Appendix II provides greater detail on our comparison of FPS's model with the leading practices of an accurate cost estimate identified in GAO's Cost Guide. A credible model, among other things, provides a process for cross-checking its results with independent cost estimates, quantifies the levels of risk and uncertainty, and includes a sensitivity analysis—that is, it examines the effect of changing one assumption related to each project activity while holding all other variables constant in order to identify which variable most affects the cost estimate, as shown in table 1. FPS's model does not align with these leading practices for producing credible cost estimates. For example, the model does not include an analysis to quantify the potential risks and identify the uncertainty around key assumptions, an omission that can undermine the credibility of an estimate. In addition, the model did not include a sensitivity analysis that identifies a range of possible costs based on varying major assumptions. FPS officials stated that the model identifies key cost drivers and examines the effect of changes to these key costs, but this analysis was not included in the model, and FPS did not provide any supporting documentation showing that the analysis is part of the process. Without conducting analyses of the sensitivity, risk, and uncertainty associated with an estimate and validating the methods for producing the cost estimate, FPS may not understand the limitations associated with the cost estimate and could make a delegation of authority recommendation without understanding the credibility of the cost estimate. Appendix II provides greater detail on our comparison of FPS's model with the leading practices of a credible cost estimate identified in GAO's Cost Guide. An FPS official told us that the cost estimation model was not necessarily in line with GAO's cost estimation leading practices because the agency did not think a more rigorous model was warranted given the size and scope of the delegation program.
However, Office of Management and Budget officials told us that FPS faced difficulties when comparing its security costs with those of an agency requesting a delegation, and in discussions with FPS officials they pointed out that FPS needs to establish a transparent process when working with an agency to estimate these costs. As such, a reliable cost model is instrumental to establishing sound cost information for making decisions on delegations of authority. As previously discussed, the leading practices in the Cost Guide are applicable to a range of programs, such as FPS's assessment of delegations of authority, but the extent to which the leading practices apply may vary, depending on the scope and complexity of an individual delegation. For example, conducting a sensitivity analysis may involve varying the key security requirements, such as recommended countermeasures like the number of contract guards protecting a facility, to determine how the changes affect the overall cost estimate. We recognize that applying all of these cost estimating leading practices to FPS's cost estimating model would take time and financial resources. However, applying these leading practices would enable FPS to better identify and address issues with developing cost estimates and would provide its management, and that of the agency requesting a delegation, with reliable cost information on the financial impact of granting, renewing, or rescinding a delegation of authority. We analyzed the six requests for new or renewed delegations of authority FPS reviewed from June 2012 through May 2014 and found that FPS did not fully follow its Interim Plan when it reviewed five of the requests. According to FPS's Interim Plan, FPS should conduct cost and capabilities analyses before making a decision to grant, renew, or rescind a delegation of authority. However, as shown in table 2, FPS conducted these required analyses for only the delegation involving the Social Security Administration (SSA) and did not conduct them for the other five delegations involving NRC, Commerce's National Institute of Standards and Technology (NIST), Interior, State, and FTC. Without conducting these analyses, FPS does not have a sound basis to determine whether cost or security considerations support its delegation of authority recommendations. In addition, FPS faces limitations in ensuring that its contract requirements and ISC's physical security standards are being met at delegated facilities. FPS conducted cost and capabilities analyses in reviewing SSA's request to renew a delegation of authority for contract guard services at a level II and a level IV facility in Durham, North Carolina. According to FPS's cost analysis, in fiscal year 2013, it would have cost SSA about $3.6 million and FPS about $4.7 million to provide the contract guard services at these facilities. According to FPS officials, FPS would need an additional $1.1 million to train its contract guards to operate SSA's technically complex security systems. FPS also completed a capabilities analysis, which showed that FPS could provide more of the required security services than SSA. According to SSA officials, the agency did not agree with FPS's capabilities assessment because SSA did not believe that FPS had sufficient resources to meet SSA's security needs. In January 2014, the Acting Under Secretary for NPPD renewed this delegation for 3 years based on FPS's analyses and recommendation. FPS did not fully follow its Interim Plan when it reviewed NRC's 2012 request to have its delegation renewed.
Specifically, FPS conducted the required cost analysis but did not conduct the required capabilities analysis. FPS's cost analysis showed that in fiscal year 2013 it would have cost NRC $6.5 million and FPS about $8 million to provide the contract guard services at those facilities. According to FPS officials, FPS would need an additional $1.5 million to hire, train, and certify contract guards. Conducting the required capabilities analysis could have provided information on FPS's capabilities versus NRC's in overseeing a security guard contract, according to FPS's Interim Plan. Such an analysis is to include ensuring that guards have the required training and certifications and conducting inspections of guards' duty stations. During the review process, NRC officials raised questions about FPS's ability to oversee its contract guards, in part because of our previous reports on challenges FPS faces in overseeing contract security guards at other federal facilities. Nonetheless, in 2013, based on FPS's recommendation, the Secretary of DHS rescinded this delegation, stating that it was in the best interest of the government, but provided no additional justification. Since then, among other things, FPS has been responsible for awarding the guard contract and overseeing the guards deployed at NRC facilities in Rockville and Bethesda, Maryland. In addition, FPS did not ensure that there was no lapse in the protection of NRC's facilities, as required by its Interim Plan. FPS and NRC officials told us that, since the contract was awarded in 2013, the guard contractor has not been fully meeting the terms of the contract. For example, as of February 13, 2015, 41 of the approximately 100 guards (41 percent) deployed to NRC facilities did not have the required L (equivalent to secret) or Q (equivalent to top secret) security clearances, according to NRC officials. In addition, according to FPS and NRC officials, the guard contractor had over 3,000 hours of open (unfilled) posts at NRC's facilities, in part due to challenges the contractor faced with hiring and retaining guards. Based on these open posts, an NRC official estimated that the agency was due a refund of about $100,000. To address the open-post issue, the guard contractor deployed guard supervisors to these posts. According to FPS officials, this type of deployment prevents the supervisors from completing their other responsibilities, including conducting post inspections to ensure that guards are at their respective posts. Moreover, FPS officials told us that although the contractor deducted the costs associated with the open posts, NRC is not getting the level of security services for which it is paying, and this has negatively affected NRC. For example, if there were a potential threat at any of the open posts, there would be no guard to counteract the threat. In January 2015, after completing the contractor's performance assessment report, FPS's Contracting Officer decided that although the contractor's overall performance has been less than satisfactory, the problem with open posts has not yet risen to a level that warrants allowing the contract to expire or terminating the contract. However, FPS's Contracting Officer is not recommending the contractor for similar contract guard services in the future. FPS's Acquisition Division Director concurred with this recommendation.
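As a rough check on the refund figure reported above, the billing rate implied by the two reported numbers can be worked out directly; the roughly $33-per-hour figure below is our back-calculation, not a rate quoted by FPS or NRC.

```python
open_post_hours = 3_000      # reported hours of open (unfilled) posts
estimated_refund = 100_000   # NRC official's refund estimate, in dollars

# Implied average billing rate per guard-hour.
implied_rate = estimated_refund / open_post_hours
print(f"Implied rate: about ${implied_rate:.0f} per guard-hour")

# Conversely, a hypothetical rate of $33 per hour nearly reproduces the
# reported estimate: 3,000 hours x $33/hour = $99,000, or roughly $100,000.
print(f"Check: ${open_post_hours * 33:,}")
```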
Regarding the other four requests for new or renewed delegations of authority we reviewed, based on FPS's recommendations, the Secretary of DHS and the Under Secretary of NPPD renewed the delegations of authority for the Department of Commerce's NIST facilities for 5 years, the Department of the Interior's Hoover Dam for 2 years, and the State Department's Enterprise Service Operations Center facility for 2 years, and granted FTC a new contract guard delegation for 3 years. However, FPS did not conduct cost or capabilities analyses prior to making these recommendations, as required by the Interim Plan. FPS officials explained that FPS did not conduct these analyses in part because it was unable to obtain comparable cost data or because limited staffing prevented it from completing the analyses before the delegations expired. FPS officials also told us that the program is evolving and that FPS has yet to establish management controls to ensure that the analyses are conducted. Officials from Commerce, the Interior, State, and FTC expressed some concerns to us about the quality of FPS's security services, the amount of time it takes FPS to review a delegation of authority, and the lack of transparency associated with FPS's review process. Nonetheless, they told us that they agreed with FPS's decision to renew or grant their delegations because they believed FPS faces resource and capability challenges. However, FPS remains responsible for ensuring that these facilities are protected in a manner that is consistent with ISC's physical security standards. FPS's Interim Plan identifies its 11 regional offices as stakeholders in its delegation review process. However, in some instances, the FPS regional offices where the delegated facilities are located were not involved in the agency's delegation review process. For example, officials from three of the four regions we interviewed were not aware of FPS's Interim Plan or its decisions to renew the delegations to Interior and State, grant FTC a delegation, and rescind NRC's delegation. FPS officials stated that the delegations program was being managed from FPS headquarters. Moreover, officials in one FPS region said that omitting the regions from the delegation review process could result in a region's not meeting the requirements specified in a delegation, for example, overseeing the delegation to ensure that the delegated agency is meeting ISC standards. FPS headquarters officials explained that this program is evolving and that ongoing efforts, such as its draft delegation directive (which was developed subsequent to the six delegations we analyzed), clarify FPS regions' roles and responsibilities related to the delegation review process and oversight of delegations. However, as of January 2015, FPS officials had not provided a timeframe for finalizing the draft directive. Given that federal facilities remain targets of potential terrorist attacks or other acts of violence, it is important that FPS manage its delegations of authority program effectively. However, FPS has not effectively managed its delegations of authority program. For example, FPS does not have reliable data to identify the number of delegations of authority it is responsible for reviewing and overseeing.
Developing and implementing procedures to improve the accuracy of its delegation of authority data would enable FPS to ensure that delegated facilities are protected in a manner consistent with federal physical security standards and would provide its stakeholders with accurate and timely information for decision-making. FPS has developed a process for reviewing delegations that includes cost and capabilities analyses. However, FPS could enhance its ability to produce reliable cost estimates by aligning its cost estimation model with leading practices to ensure that its estimates are comprehensive, well-documented, accurate, and credible. Such an approach would give FPS a solid technical basis for making its delegation of authority recommendations to DHS management. Cost and capabilities analyses play a major role in helping FPS determine whether to grant another agency the authority to protect federal facilities, but for five of the six delegations we examined, FPS did not conduct these analyses before making a recommendation to DHS's management. It is important that FPS ensure that these analyses are consistently done. Without these analyses, FPS and DHS management face limitations in making informed decisions about how best to protect delegated federal facilities from potential terrorist attacks or other acts of violence, protection that is FPS's responsibility. Finally, given that FPS is still in the process of finalizing its draft directive, it has an opportunity to ensure that its delegations of authority program fully aligns with federal standards for effective program management. To improve the management of FPS's delegations of authority program, we recommend that the Secretary of Homeland Security direct the Director of FPS to take the following three actions: develop and implement procedures to improve the accuracy of its delegation of authority data; update FPS's cost estimation model to align with leading practices to ensure that it produces comprehensive, well-documented, accurate, and credible cost estimates; and establish management controls to ensure that FPS's headquarters and regional office staff conduct the required cost and capabilities analyses before FPS grants, renews, or rescinds a delegation of authority to a federal agency. We provided a copy of a draft of this report to DHS for review and comment. DHS provided written comments, reprinted in appendix III, agreeing with the report's recommendations. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Director of the Federal Protective Service, the Administrator of the General Services Administration, the Director of the Office of Management and Budget, and other interested parties. The report will also be available on the GAO website at no charge at http://www.gao.gov. If you or your staff have any questions about this report, please contact Mark Goldstein at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our report examines (1) the extent to which FPS's delegations of authority program meets select federal standards and leading practices for effective program management and (2) whether FPS has followed its 2012 Interim Plan in reviewing select delegations of authority.
To determine the extent to which FPS's delegations of authority program meets select federal standards for effective program management, we analyzed FPS's 2012 Interim Plan and 2014 draft delegations of authority directive—which outline the processes FPS is currently using to identify delegations of authority granted when FPS was part of GSA and how FPS is supposed to review delegations of authority to determine if they should be granted, renewed, or rescinded—against leading practices identified in applicable federal standards. We analyzed FPS's efforts to ensure the reliability of its delegations of authority data against internal controls specified in the federal Standards for Internal Control in the Federal Government, which provide reasonable assurance that an agency is operating efficiently and effectively. We also reviewed FPS's delegations of authority data as of October 30, 2014, to determine the federal departments and agencies with delegated authority, the type of delegation received (e.g., law enforcement or contract guard), the number of facilities specified in the delegation, and the status of FPS's review. We assessed the reliability of FPS's data by comparing the data with source documents provided by FPS and interviewing FPS officials about the controls in place to ensure the reliability of FPS's delegation data, and we found the data to be unreliable, as discussed in this report. In addition, we assessed FPS's cost estimation model against the leading practices identified in GAO's Cost Guide (GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, GAO-09-3SP (Washington, D.C.: March 2009)), which are associated with four characteristics of reliable cost estimates: comprehensive, well-documented, accurate, and credible. The extent to which the characteristics are met is determined by the extent to which the underlying leading practices for each characteristic are incorporated. The Cost Guide identifies 20 leading practices for developing a cost estimate that include underlying tasks associated with each of the four characteristics of reliable cost estimates. GAO developed the Cost Guide to assist government agencies as they develop, manage, and evaluate the costs of capital projects. Although FPS does not directly implement or oversee implementation of capital projects at federal facilities, the agency develops cost estimates as part of its delegation of authority review process (through its cost estimation model) and needs reliable cost estimates to inform DHS's decisions about whether to grant, renew, or rescind a delegation. As a result, most of the leading practices are applicable to the assessment of FPS's cost estimation model. However, we found that three leading practices and one of the underlying tasks associated with the leading practices were not applicable, in part because we were assessing a cost model rather than a cost estimate for an acquisition. Specifically, since we did not evaluate a cost estimate, we did not assess (1) the consistency of the cost estimate with the technical baseline data, (2) any mistakes in the cost estimate, or (3) whether the estimating technique was used appropriately in the cost estimate. In addition, we did not assess earned-value-management reporting, as it was not applicable to FPS's delegation assessment process. For one leading practice, including all lifecycle costs, we adjusted the time period to reflect a shorter period that was sufficient for FPS's decision-making needs for a delegation of authority. We also interviewed officials from FPS and the Office of Management and Budget about FPS's process for reviewing delegations of authority.
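The roll-up from individual leading practices to characteristic-level ratings can be sketched as follows. The practice-to-characteristic mapping and the scores shown are hypothetical stand-ins chosen only to illustrate the mechanics, not GAO's actual scoring of FPS's model:

```python
# Hypothetical scores for individual leading practices (0 = not met,
# 1 = minimally met, 2 = partially met, 3 = substantially met,
# 4 = fully met), grouped under the Cost Guide's four characteristics.
scores = {
    "comprehensive":   {"all lifecycle costs": 3, "technical baseline": 1},
    "well-documented": {"source data documented": 2, "replicable steps": 0},
    "accurate":        {"uncertainty analysis": 0, "historical data": 2},
    "credible":        {"sensitivity analysis": 0, "cross-checking": 0},
}

LABELS = ["does not meet", "minimally meets", "partially meets",
          "substantially meets", "fully meets"]

# Average the practice scores under each characteristic and map the
# average to an overall rating for that characteristic.
for characteristic, practices in scores.items():
    avg = sum(practices.values()) / len(practices)
    print(f"{characteristic}: {LABELS[round(avg)]}")
```

The point of the roll-up is that a characteristic is only as strong as the underlying practices incorporated into it, which is how the assessment framework described above is applied.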
To determine whether FPS followed its Interim Plan in reviewing select delegations, we conducted case studies of the six requests for new or renewed delegations FPS reviewed from June 2012 through May 2014. These delegations involved the Department of the Interior's Hoover Dam, the Department of State's Enterprise Service Operations Center, the Department of Commerce's National Institute of Standards and Technology, the Federal Trade Commission, the Nuclear Regulatory Commission, and the Social Security Administration. For each of our six case studies, to the extent available, we reviewed the delegation of authority and the cost and capabilities analyses, and we interviewed officials from FPS's headquarters and 4 of its 11 regions. We selected these regions because the delegated facilities are located in them. We also interviewed officials from the delegated departments and agencies to obtain information on FPS's review of their delegations and how FPS's recommendations may have affected the protection of their facilities. Our case studies are not generalizable but provide insights into FPS's ability to follow its 2012 Interim Plan in reviewing delegations of authority. We conducted this performance audit from January 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We assessed FPS's cost estimation model using the GAO Cost Guide's framework of the four characteristics—comprehensive, well-documented, accurate, and credible—associated with high-quality, reliable cost estimates. Specifically, we assessed FPS's cost model based on most of the leading practices associated with these four characteristics. Table 3 provides greater detail on our comparison of the model with the leading practices that are aligned with the four cost estimating characteristics. In addition to the contact named above, Tammy Conquest, Assistant Director; Karen Richey, Assistant Director; Jennifer DuBord; Sharon Dyer; Geoff Hamilton; Delwen Jones; Abishek Krupanand; Steve Martinez; and Kelly Rubin made key contributions to this report. | FPS's primary mission is to protect the almost 9,000 federal facilities that are held or leased by the General Services Administration. FPS also manages the Department of Homeland Security's (DHS) delegations of authority (delegations) program, which involves, among other things, reviewing requests by agencies to protect their own facilities instead of FPS and making recommendations to DHS about whether to grant, renew, or rescind such delegations. In response to direction in the conference report accompanying the Consolidated Appropriations Act, 2012, FPS prepared its Interim Plan that outlines FPS's process for reviewing existing and newly requested delegations. GAO was asked to review FPS's management of this program. This report covers (1) the extent to which FPS's delegations program meets select federal standards and (2) whether FPS has followed its Interim Plan in reviewing delegations. GAO reviewed FPS's 2012 Interim Plan and data on delegations; compared FPS's Interim Plan to federal standards; and analyzed the six requests for new or renewed delegations FPS reviewed from June 2012 through May 2014.
The Federal Protective Service's (FPS) delegations of authority program does not fully meet applicable federal standards GAO identified for effective program management. FPS lacks reliable data, as called for by federal Standards for Internal Control, to accurately identify all the delegations FPS is responsible for managing and overseeing to ensure the protection of federal facilities. Specifically, of the 62 delegations of authority that FPS officials said were verified as active, GAO found that 12 had either expired or been rescinded. Standards for Internal Control state that federal agencies should have relevant, reliable, and timely information for decision-making and external-reporting purposes. FPS officials stated that poor recordkeeping contributed to the data's unreliability, but FPS has not established procedures to ensure data reliability. Without reliable data on delegations of authority, FPS will face challenges effectively managing this program. FPS's model for estimating the costs associated with a delegation—set forth in its 2012 Interim Plan—does not fully align with the relevant leading practices outlined in GAO's Cost Estimating and Assessment Guide. These leading practices help ensure reliable cost estimates that are comprehensive, well-documented, accurate, and credible. GAO found that FPS's cost estimation model partially aligned with practices for producing comprehensive estimates and minimally aligned with those for producing well-documented and accurate estimates. Furthermore, the model does not align with practices for producing credible cost estimates because, among other things, it does not include a sensitivity analysis, which identifies a range of possible costs based on varying assumptions. Without fully aligning the cost model with leading practices, FPS faces limitations developing reliable cost estimates that support its delegations of authority recommendations. For five of the six agency requests for new or renewed delegations of authority that GAO analyzed, FPS did not conduct the required cost and security-capabilities analyses before making its recommendation to grant, renew, or rescind the delegation. The Interim Plan calls for these analyses to form the basis of FPS's recommendations. Specifically, FPS conducted the required analyses for only the delegation involving the Social Security Administration and did not conduct these analyses for the other five delegations involving facilities of the Departments of Commerce, Interior, and State; the Nuclear Regulatory Commission; and the Federal Trade Commission. According to FPS officials, they were not always able to obtain, from the agency requesting a delegation, comparable cost data to complete the cost model. FPS officials also acknowledged that FPS has yet to establish management controls to ensure that required analyses are conducted. Without these analyses, FPS does not have a sound basis to determine whether cost or security considerations support its delegations of authority recommendations. GAO recommends that the Secretary of DHS direct FPS to (1) improve the accuracy of its delegation data, (2) update its cost estimation model to align with leading practices, and (3) establish management controls to ensure that its staff conducts the required cost and capability analyses. DHS concurred with the recommendations. |
On April 17, 1995, the President signed the District of Columbia Financial Responsibility and Management Assistance Act of 1995, P.L. 104-8, which established the Authority to repair the District's failing financial condition and to improve the efficiency and effectiveness of its various agencies. The Act also permits the Authority to contract for goods and services for its own mission, contract for goods and services on behalf of District agencies, and review and approve contracts processed by District agencies. In addition, on August 5, 1997, the President signed into law the National Capital Revitalization and Self-Government Improvement Act, Title XI of P.L. 105-33. Under the Act, the Authority was directed to develop management reform plans for nine agencies and four citywide functions. The Act also required the Authority to award the management reform consultant contracts within 30 days from the date of its enactment, unless the Authority notified Congress, in which case the Authority could take 60 days. The Authority is an independent entity within the government of the District and is statutorily exempt from adhering to the District's procurement regulations. In addition, because the Authority is not an agency of the federal government, it does not have to comply with federal procurement statutes or regulations, such as the Federal Acquisition Regulation. In March 1996, the Authority promulgated its own procurement regulations, which are intended to permit the procurement of property and services efficiently and at either the least cost to or the best value for the Authority. The Authority's contracting authority is statutorily vested in its Executive Director, who is also the designated Contracting Officer. According to the Authority's regulations, the Executive Director may at any time waive any provision of the regulations, with the exception of the provision regarding the avoidance of conflicts or impropriety and the appearance of conflict or impropriety. The Authority's regulations prescribe some basic procurement principles, including the avoidance of conflicts or impropriety and the appearance of conflict or impropriety; a preference for competition among potential sources to ensure fair and reasonable prices and best value for the Authority; use of sole source contracting only when it makes good business sense or promotes the Authority's mission, is justified in writing, and, if the contract exceeds $100,000 on an annual basis, is approved by the Authority's Chair; identification of potential sources to achieve the benefits of competition; publication of the Authority's requirements to make potential qualified sources aware of them; preparation of statements of work that include a thorough description of the required services, a delivery schedule, and standards for measuring the contractor's performance; and monitoring of contractor performance and certification of satisfactory performance prior to payment of contractor invoices. In addition, the Authority's regulations prescribe procedures for simplified and formal contracting. According to the regulations, the Executive Director shall determine the type of procurement action that is appropriate for the use of simplified contracting procedures. The regulations state that simplified contracting procedures must be used when the value of the procurement is not expected to exceed $100,000 and/or when the nature of the goods or services to be provided is appropriate for these procedures.
Under simplified contracting, the regulations prescribe procedures for obtaining competition, preparing written solicitations, evaluating proposals, and awarding contracts. For example, the Executive Director is responsible for making the final determination for contract selection based on the written recommendation of the technical evaluation team. The Authority's regulations state that formal contracting procedures are mandatory for contract actions that may result in the Authority's expenditure of $500,000 or more on an annual basis and may be used for competitive contract actions estimated at less than $500,000. Under formal contracting, the regulations prescribe procedures for preparing written solicitations, evaluating proposals, and awarding contracts. For example, the Executive Director's decision for contract selection is required to be supportable, documented, and based on the evaluation factors. In addition, under the formal contracting procedures, the Executive Director may conduct negotiations with qualified offerors. The regulations also require that negotiation sessions be fully documented whenever they occur. The Executive Director is also required to perform a cost/price analysis when a single offer is received in response to a competitive solicitation or when the contract will not have a fixed price. The regulations further state that when fair and adequate price competition is obtained, a comparison among the proposed prices and with the Authority's estimate is generally adequate to verify that the prices offered are reasonable. Other than some requirements on the preparation and use of statements of work, the Authority's regulations do not prescribe specific requirements governing contract actions between $100,000 and $500,000, nor do they set forth specific requirements governing contract modifications or contract options.
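The dollar thresholds just described can be summarized as a small decision rule. The sketch below is our simplification of the thresholds as reported: it keys only on the estimated annual value and ignores the regulations' separate allowance for using simplified procedures when the nature of the goods or services makes them appropriate.

```python
def applicable_procedures(annual_value: float) -> str:
    """Return which contracting procedures the Authority's regulations
    prescribe for a contract action of the given estimated annual value.
    A sketch of the reported thresholds, not the Authority's own tool."""
    if annual_value <= 100_000:
        return "simplified contracting required"
    if annual_value >= 500_000:
        return "formal contracting mandatory"
    # The regulations are silent for this middle range, which is the gap
    # noted above (formal procedures may be used for competitive actions
    # under $500,000, but neither set of procedures is required).
    return "unspecified: neither simplified nor formal procedures required"

for value in (75_000, 105_000, 750_000):
    print(f"${value:,}: {applicable_procedures(value)}")
```

A value such as $105,000 falls in the unspecified middle range, the same ambiguity discussed later in connection with one of the contracts we reviewed.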
The Authority also promulgated regulations in November 1995 for reviewing and approving contracts submitted by the District government. These regulations describe in detail the proposed contracts that are required to be submitted to the Authority for review and approval. Examples include sole source contracts, contracts for services exceeding $25,000, and any contract proposed as an emergency procurement. The regulations further state that no contract that is required to be submitted to the Authority shall be awarded unless the Authority has approved the proposed contract or unless the Authority specifically declined to exercise its power to review and approve the contract prior to award. The Authority subsequently adopted resolutions, most recently on February 26, 1998, amending the regulations by modifying the definition of contracts required to be submitted for review and approval. According to the Authority's procurement regulations, the Executive Director may from time to time delegate specific contracting and procurement responsibility and authority to various members of the Authority's staff. The Authority's regulations also require that when authority is delegated to a staff member to serve as a contracting officer, the delegation be in writing. Prior to reorganizing in December 1997, the Authority's contracting staff consisted of a Director of Procurement and a full-time complement of five staff persons, including a Procurement Analyst and two Contract Specialists. In early 1998, the Authority changed the scope and magnitude of its procurement operations by reducing the number of procurements done to support its own mission and reducing the number of District contracts to be reviewed and approved. As of April 1999, there were two full-time staff involved in the award and administration of Authority contracts: a Senior Procurement Specialist and an independent contractor who served as the Contract Specialist. The Authority's Executive Director, Deputy General Counsel, and Chief Financial Officer also assisted these staff members. In addition, the District's CPO awarded and administered several contracts on the Authority's behalf. In January 1998, the Authority hired a CMO to assist the Authority in carrying out its management reform responsibilities. The CMO reported to the Chairperson of the Authority and was responsible for overseeing the management reform efforts for nine District agencies and four citywide functions, including procurement. In February 1999, the CMO resigned from her position. The Authority reports that, from its inception in April 1995 through September 30, 1998, it awarded 141 contracts for almost $81 million. These contracts include procurements done by the Authority to accomplish its own mission and procurements done by the Authority on behalf of the District. We reviewed a total of 12 contracts and their associated contract actions that were awarded in fiscal years 1996 through 1998. Ten of the 12 contracts were awarded by the Authority, and the other 2 were awarded by the District's CPO. As stated previously, although we reviewed a total of 10 contracts awarded by the Authority, we assessed compliance with the Authority's regulations for 9 of those contracts because 1 contract (Thompson, Cobb, Bazilio and Associates, contract number FY96/FRA#2) was awarded before the Authority's regulations were adopted in March 1996. As you specifically requested, we focused on the contracts that were awarded for the Authority's former CMO and to Thompson, Cobb, Bazilio and Associates. According to the Authority, 17 of its 141 contracts were awarded on behalf of its former CMO; we judgmentally selected six of those contracts to obtain a mix of contracts for management reform and executive recruitment services. We selected the other six contracts because you specifically requested that we examine them. They include the four contracts awarded to Thompson, Cobb, Bazilio and Associates by the Authority and the two contracts awarded to Smart Management Services by the District's CPO. Appendix I provides additional information on the contracts we reviewed, and appendix II contains additional information on the award and administration of the contract awarded prior to the adoption of the Authority's regulations. We reviewed the contract files to determine whether the Authority and the District's CPO followed applicable procurement regulations when they awarded the contracts we assessed. For example, we reviewed the contract files to determine whether (1) competition was sought, (2) the basis for contract selection was documented, (3) sole source contracts had written justification, (4) contractors' performance was monitored, and (5) the Authority received the required deliverables before payment of invoices.
To supplement our contract file review, we judgmentally selected three of the eight contractors retained by the Authority and the District's CPO, choosing a mix of contractors that provided management reform or executive recruitment services, and visited them at their offices to obtain information on the Authority's procurement process. For the contract that was awarded prior to the adoption of the Authority's regulations, we reviewed the information in the contract file to determine what information was available to document key contract award and administration decisions, including the basis for contract selection and whether the file contained evidence that the Authority received the services it paid for.

As stated previously, the regulations provide that the Executive Director shall determine whether a particular request for procurement is appropriate for simplified contracting. However, we found no documentation in the contract files that this was done. Consequently, it was not apparent which method of contracting the Authority used to award Boulware a $105,000 contract because the regulations do not specify which procedures, simplified or formal, apply to contracts that are between $100,000 and $500,000.

In addition, we reviewed the Authority's and District's procurement regulations and procedures and the Authority's review and approval regulations governing the submission of District contracts, and we interviewed Authority and District officials involved in contract award and contract administration. We also reviewed several reports of studies done by other entities on the Authority's and District's procurement processes. However, as agreed with your offices, we did not review the Authority's processes or controls for ensuring that its review and approval regulations governing District contracts were being followed. Although our findings apply only to the contracts we reviewed, other reviews of the Authority's and District's procurement processes have reported similar findings and conclusions. For example, DSIC reviewed over 100 Authority contracts that totaled $47.2 million and were awarded between August 1995 and September 1998.

We conducted our review in Washington, D.C.; Houston, TX; and Chicago, IL, from September 1998 to July 1999 in accordance with generally accepted government auditing standards. We obtained comments on a draft of this report from the Authority and the District's CPO. These comments are summarized in the agency comments section of this report and are discussed in the report where appropriate. Appendix III contains the Authority's written comments and our specific responses to those comments.

Although the Authority's procurement regulations set forth some basic requirements for contract award, we found that the Authority did not always comply with its procurement regulations or follow sound contracting principles for the nine contracts that we assessed. As stated previously, the Authority's former Executive Director was able to waive almost any provision of the regulations; however, he stated that a waiver was not granted for any of the contracts awarded by the Authority. In its comments on a draft of this report, the Authority said that our statement that "according to the former Executive Director, the provisions in the procurement regulations have never been waived" is not quite accurate. The Authority commented that its former Executive Director said that its regulations had never been waived in writing.
The former Executive Director did not make this distinction when we met with him. While the Authority's regulations do not state whether a waiver has to be in writing, we disagree with the Authority's position that once a contract is executed by its Executive Director and approved by the Chair, any irregularities with respect to its award have been waived. The failure to follow the Authority's regulatory requirements could occur at several stages in the contracting process, and the Executive Director may not necessarily be aware of what regulatory requirements his contracting staff may have failed to follow. If the execution of a contract by the Executive Director constitutes a waiver of any Authority contracting requirement, regardless of whether the Executive Director knew of a contracting deficiency, there would be essentially no accountability for the actions of the Authority or its employees. Such a process would, in effect, render the regulations useless.

The contract files we reviewed indicated that the Authority sought competition for seven of the nine contracts we assessed. However, the contract files contained little or no evidence that the Authority (1) documented its basis for contract selection for the three contracts where it is specifically required by the regulations; (2) prepared written justification for one sole source contract award or a series of "modifications" to another contract that, in effect, was a sole source award; or (3) documented its contract negotiations, as required by the regulations, for the two contracts where the Authority stated that negotiations had occurred. After we completed our review of the 10 contract files, we notified the Authority of missing documents and requested that they be provided. On May 21, 1999, the Authority's former Executive Director provided us with a letter to explain how he made his contract selection decisions but did not provide any additional documentation.

Of the nine contracts we assessed, we found that the Authority did not document its basis for contract selection, as specifically required by its regulations, for the three contracts that were awarded to Managing Total Performance, Management Partners, and the Urban Center. The Authority's regulations require the Executive Director's decision for contract selection to be supportable, documented, reasonable, and based on the technical evaluation report for contracts that total $500,000 or more. For example, there was no evidence in the contract file documenting the Authority's basis for awarding Managing Total Performance a $796,600 contract for phase I management reform work or adding $10.6 million in modifications to this contract. The contract file also contained information indicating that the Authority received several other proposals but no documentation explaining why Managing Total Performance was selected or why the other proposals were not.

In addition, under simplified contracting, when written proposals are received, the evaluation panel is required to document the basis for its initial recommendation for contract selection, including a brief description of why the recommended proposal offers the best value of all proposals received. The evaluation panel's basis for its initial recommendation for contract selection was not documented in the contract files for the four contracts where simplified contracting procedures applied.
For example, the Authority awarded the Gaebler Group a $54,000 contract, which was later modified to $94,500, to establish a management task force to provide management and technical assistance to its former CMO. The Authority's contract file contained six proposals in response to the solicitation, but there was no evidence in the contract file documenting the Authority's basis for selecting the Gaebler Group. There also was no evidence in the contract file that the other five firms were not qualified or were less qualified to provide the required services. In addition, the Authority's technical evaluation panel and its former CMO both initially recommended another contractor.

The other cases involve the three contracts, totaling over $153,000, that the Authority awarded to Thompson, Cobb, Bazilio and Associates to audit its financial statements and the enrollment in the District's public schools. However, the Authority did not document its basis for selecting this particular contractor for any of the three contracts. The absence of a clearly documented selection process left no written record to review the basis for contract selection for the contracts we assessed or to determine whether the awards were made at the lowest cost or best value and whether offerors were treated fairly.

In response to our request for the basis for contract selection for the contracts we reviewed, the Authority's former Executive Director said that, with respect to the former CMO contracts, the proposals submitted were evaluated by the selection committee. However, the final decisions concerning contract awards to vendors, the acceptability of individuals proposed as members of the team, and the tasks to which teams and individuals were assigned were made by the former CMO. In addition, the former Executive Director specifically acknowledged that the Gaebler Group was not the recommendation of the selection committee and said that the former CMO determined that she needed additional management assistance and believed that the Gaebler Group could perform the tasks within the time constraints. There was nothing in the contract file to explain the former CMO's position.

The former Executive Director also said that he determined that it was in the Authority's best interest to approve the $10.6 million in modifications to the Managing Total Performance contract, even though the total price of the modifications was greater than the original contract price, because the Authority and District agencies had already fallen behind in implementing management reform. Finally, the former Executive Director said that he awarded the three contracts to Thompson, Cobb, Bazilio and Associates based on recommendations from the Authority's contracting staff and his personal knowledge of and experience with the firm.

In commenting on a draft of this report, the Authority said that the basis for contract selection for the contracts awarded to the Gaebler Group, Management Partners, the Urban Center, and Managing Total Performance is contained in memorandums dated March 18, 1998, and September 4, 1997. However, these documents do not contain the Executive Director's basis for contractor selection. In addition, as previously discussed, the contract files contained no explanation of the difference between the evaluation panel's recommendation and the selection of the Gaebler Group.
The Authority commented that the Executive Director's signature on the contract as the contracting officer constitutes documentation of the basis for contract selection. The Authority also believes that the award of a contract in accordance with the recommendation of the selection team is an adoption of that recommendation and is thus the basis for contract selection. We agree that, in cases of simplified contracting where the Executive Director accepts the panel's recommendation, the Executive Director's signature on the contract constitutes documentation of the basis for contract selection, as the Authority asserted. However, according to the Authority's regulations, specifically chapter 5, section F.1., the Executive Director is required to prepare a memorandum detailing the procurement and the rationale for the contract selection for contracts over $500,000. Therefore, under these formal contracting procedures, the Executive Director's signature on the contract would not satisfy this regulatory requirement.

We found that the Authority did not comply with its regulations when it awarded one sole source contract and executed a series of "modifications" to another contract that became, in effect, a sole source award. The Authority's regulations require that all sole source contracts be accompanied by a written justification and, if the contract exceeds $100,000 on an annual basis, be approved by the Authority's Chair. However, we found that the Deputy Management Officer for the Authority's former CMO entered into a verbal agreement on a noncompetitive basis without written justification or the Authority Chair's approval. The contractor, Boulware, was to provide executive recruitment services for six senior-level management positions that were already included in the scope of work for another contract. Authority officials said that the verbal agreement was an unauthorized procurement but later ratified the agreement and awarded Boulware a $105,000 sole source contract.

According to the written justification, which was prepared 3 months after the verbal agreement, the Authority's basis for the sole source award was twofold. First, the original contractor was not performing in accordance with the terms of the contract; however, we found nothing in the original contractor's file to substantiate this assertion. Second, as stated in the justification, the selected firm was the only firm with the requisite knowledge and skills to perform the required services; however, this assertion was also not substantiated by any documents in the contract files. To the contrary, documentation in the Boulware contract file suggests that Boulware's original proposal to perform similar services was initially rejected by the Authority because, according to Authority officials, it contained the highest hourly rate among the five proposals received in response to another solicitation. In addition, there was nothing in the contract files to indicate that the other firms were not qualified or were less qualified to perform the required services. It should also be noted that our review of the Authority's justification for the noncompetitive award to Boulware determined that the contract files contained conflicting information.
The Authority's April 24, 1998, justification for awarding a sole source contract to Boulware to provide search and recruitment services for six positions stated that the current contractor working for the Authority in the area of executive recruitment, PAR Group, had been unable to deliver candidates within the desired time frame, which affected the CMO's office and other District agencies. The justification further stated that, as a result of PAR Group's poor performance, it was necessary to enter into a contract with a firm that had a track record for performance in the area of executive recruitment. However, the PAR Group contract files also contained another Authority justification, dated the same day—April 24, 1998—for the noncompetitive procurement of a proposed modification to expand the PAR Group's search and recruitment activities to include six additional positions. That justification stated that the PAR Group was doing an excellent job in a cost-effective and timely manner. Further, the justification said that, under these circumstances, it was considered unlikely that another contractor, unfamiliar with the proposed work, would perform the required tasks as cost effectively or in as timely a manner as the PAR Group had done. According to the Authority, the conflicting dates on the memorandums were the result of a typographical error.

In reference to the two sole source justifications for the PAR Group and Boulware, the Authority commented that a comparison of the two justifications is initially confusing and said that the date of the Boulware sole source justification is incorrect and is a typographical error. We agree that the two sole source justifications are confusing and brought this to the Authority's attention on several occasions during our review. However, the Authority did not provide us with a definitive response until we received its written comments on the draft. We revised our report to reflect the Authority's comments. Notwithstanding the Authority's explanation of the dates, our point is that the sole source justification for Boulware was based in part on the Authority's statement that the PAR Group was performing poorly. However, nothing in the PAR Group contract file showed that the PAR Group was performing poorly, as asserted in the sole source justification. Further, there is nothing in the contract file to support the former Executive Director's assertion that the Authority's Board had imposed very tight 30-day schedules for filling certain positions. Additionally, the PAR Group contract did not contain any evidence of the cited 30-day schedule for filling the positions.

In another case, the Authority did not substantiate the award of sole source contracts to Managing Total Performance. On September 4, 1997, the Authority awarded a $796,600 management reform contract to Managing Total Performance with a base term of 90 days. This contract also provided for an option and further provided that, if the option were exercised, the option term of the contract would run from December 1, 1997, through December 1, 1998. Authority officials confirmed that this option was not exercised by December 4, 1997, when the contract expired.
When a contract has expired, the contractual relationship that existed is terminated, and the issuance of a modification after the expiration date would, in effect, be the award of a new sole source contract. However, the Authority did not treat this award as a new sole source contract or justify it in writing, and there was no evidence in the contract file of approval by the Authority's Chair, as required by the Authority's regulations. Further, according to Authority officials, the District's CPO, who signed the modification that purported to exercise the option, was authorized to prepare the proposed modifications for the Authority. However, Authority officials said that they did not intend for the District's CPO to execute modifications without the Authority's approval because the contract was an Authority contract.

In explaining this situation to us on June 17, 1999, Authority officials said that the Managing Total Performance contract was similar to several other management reform contracts awarded by the Authority. These contracts, they said, were intended to have two phases—development of proposed reforms and implementation of proposals accepted by the Authority; however, events did not turn out entirely as planned. They said that phase I resulted in many more proposals than could be funded. Consequently, the Authority had to analyze them and decide which ones to approve. At the same time, Authority officials said that they were under a lot of pressure from Congress and others to move more quickly toward producing results. Accordingly, they asked the District's CPO to perform the administrative tasks necessary to modify the contracts to proceed with the implementation phase. However, while these actions were under way, Authority officials said, the Managing Total Performance contract expired. Finally, Authority officials said that they did not realize that the District had not performed or documented a cost/price analysis or negotiations for modifications 1 through 14 of the Managing Total Performance contract.

In written comments on a draft of this report, the Authority questioned our conclusion that it failed to "substantiate the award of sole source contracts to Managing Total Performance." As recognized by the Authority, this conclusion was based on our view that the initial Managing Total Performance contract, awarded on September 4, 1997, with an option clause, had expired before the option was exercised. We concluded that, since the contract had expired, the issuance of a modification exercising the option was, in effect, the award of a new sole source contract that should have been justified in writing and approved by the Authority's Chair. The Authority stated that it does not interpret the Managing Total Performance contract as having expired. It further stated that, for a variety of reasons, the Authority and Managing Total Performance "understood and agreed" that the contract would remain in effect beyond the stated term in order to allow for the future exercise of options for implementation work. The Authority further stated that it, not GAO, is "the most appropriate interpreter" of what its contracts provide and noted, as we did in the report, that the Authority is exempt from District and federal procurement law. We do not agree with the Authority's position. The Authority suggests, without actually stating so, that the Authority and Managing Total Performance had an oral agreement to extend the contract beyond its stated term.
However, we found no evidence or documentation in the contract file to suggest when the Authority and Managing Total Performance might have reached this agreement to extend the contract or to show that such an understanding and agreement existed. The letter from the Executive Director to the District's CPO, dated months after the contract had expired, authorizing him to process modifications for the Managing Total Performance contract, and the subsequently issued modifications contain no reference to a prior extension of the contract by oral agreement. In essence, the Authority has asked us to accept that the contract had been extended, not based on any additional documentation, but rather on its current explanations of its past intentions. The Comptroller General decision we refer to in the report is cited for the proposition that, as a matter of general contract law, not federal or District procurement law, the attempt to exercise an option on an expired contract can only be viewed as the execution of a new contract. When a contract expires, an unexercised option provision that was part of the contract expires as well.

The Authority's view—that, despite the lack of evidence in the contract file, we should not question its statement that the contract was extended—highlights the problems caused by the Authority's failure to document key contract actions. If these actions are not documented, there is no way for the Authority, or any organization reviewing its actions, to know whether it followed its own regulations and the provisions of its own contracts. Also, the lack of adequate documentation makes it difficult to hold the Authority or its employees accountable for their actions.

Of the nine contracts we assessed, the Authority's former Executive Director said the Authority conducted contract negotiations for two of the three contracts awarded under the formal contracting procedures, which require that negotiations be documented whenever they occur. However, there was no documentation of negotiations in the contract files for the contracts awarded to Management Partners and the Urban Center for $513,000 and $562,800, respectively. Based on the Authority's regulations, these two contracts should have been awarded using the formal contracting procedures because they were for $500,000 or more. In another case, the Authority executed 14 modifications totaling $10.6 million to an expired contract with Managing Total Performance, which, in effect, constituted a sole source award. Since the Authority erroneously viewed these actions as modifications to an existing contract, the contract files contained no documentation of negotiations, cost/price analysis, or other steps that may have been taken to determine best value or least cost or that would be required for the award of a new contract. While the Authority's regulations state that the Authority is to provide goods and services at the least cost or representing the best value for the Authority, the regulations do not specify how to accomplish these objectives when the Authority executes contract modifications. In addition, although the regulations do not require negotiations, or documentation of negotiations whenever they occur, for contracts under $100,000, the former Executive Director said that the Authority conducted negotiations with qualified offerors for four of the remaining six contracts we assessed.
However, evidence in the contract files indicated that negotiations occurred for only one of the four contracts for which the Authority said it conducted negotiations. This was a contract with Boulware for which a contract approval form stated that the Authority's Chief Financial Officer negotiated down Boulware's proposed rates and contract terms to the extent possible. However, the contract files did not contain a record of the negotiation process, and the contractor told us that negotiations did not take place and that the Authority's Chief Financial Officer dictated the price. In its comments, the Authority said that it believes that the dictation of a maximum price is included in the definition of negotiations. While we agree, our purpose was to describe the nature of the negotiation and to point out that the documentation in the contract file did not describe the nature of the negotiation that took place or the Authority's rationale for arriving at the dictated price. Nonetheless, we recognize that the contractor could have said that the price was too low and then attempted to negotiate or simply declined the contract.

While the Authority's regulations do not require independent cost estimates for all of its contracts, the regulations do authorize the Authority to develop its own cost/price estimate to help assess the reasonableness of contractor proposals. For example, the regulations state that, when fair and adequate price competition is obtained, a comparison among proposed prices and to the Authority's estimates is generally adequate to verify that the prices offered are reasonable. For two of the three contracts we assessed where the former Executive Director said price comparisons were performed, the contract files did not contain documentation of these comparisons to show how the Authority determined cost/price reasonableness.

DSIC also reported that contract negotiations were generally not documented for several of the contracts it reviewed and that cost/price analyses were frequently not documented. DSIC also found little evidence that the Authority prepared or used independent cost estimates for several contracts and pointed out that the numbers of hours proposed by some offerors within the competitive range differed by as much as 50 percent. According to DSIC, the absence of an independent cost estimate makes it difficult to reconcile differences of such magnitude. DSIC recommended that the Authority develop independent cost estimates of the hours needed to perform required services to use as a basis for evaluating technical proposals and costs.

In his May 21, 1999, letter, the Authority's former Executive Director said that the Authority's staff obtained and evaluated cost and pricing information and that, after negotiations by the staff, he determined that the prices were fair and reasonable for 9 of the 10 contracts we reviewed. However, he did not provide any additional documentation or other evidence of actual negotiations or cost/price evaluations. In commenting on a draft of this report, the Authority said that contract negotiations, a cost/price analysis, or an independent cost estimate are not mandatory for all of the contracts we assessed. Although we did not say that these were mandatory for all the contracts we assessed, we further clarified our report in this regard. However, our point continues to be that we did not see any documentation of the negotiations the Authority said occurred.
We believe that contract negotiations, cost/price analyses, and independent cost estimates are important tools for ensuring best value and fair and reasonable prices and thus represent good contracting practices. The Authority also commented that the provision for cost/price analysis in its regulations does not require that the cost/price analysis be documented in the contract file. In addition, the Authority said that most of its contracts reviewed by GAO were competitive and that documentation for the cost/price analysis is contained in the cost proposals submitted by offerors. We agree that the Authority's regulations do not specifically require that the cost/price analysis be documented in the contract file, and our report does not state that it is a requirement. We also agree with the comment regarding competitive contracts; however, we question the Authority's assertion that an offeror's price proposal constitutes a cost/price analysis by the Authority.

The District's CPO did not comply with the Authority's or the District's procurement regulations when he entered into an emergency sole source contract totaling $153,800 and when he awarded a subsequent contract for $893,000 as an emergency sole source contract without justifying the emergency procurement or obtaining approval from the Authority. Both of these contracts were awarded to Smart Management Services to provide management reform services to the Authority's former CMO.

Concerning the first contract, according to the District's CPO, in February 1998, he received an oral procurement request from the Authority's former CMO to obtain consulting services to assist her with reconciling the District's fiscal year 1998 budget and management reform anomalies. According to the District's CPO, the former CMO provided him with the names of five or six firms that she considered qualified to perform the tasks and said that she needed a firm that could start work immediately. The District's CPO said that he phoned the firms on the list and that only one firm—Smart Management Services—was available to start work immediately. However, he did not maintain a record of his telephone conversations with the firms. He said that a list of the firms was not retained because the initial contract was processed as a sole source procurement.

Shortly thereafter, the District's CPO entered into an emergency sole source contract totaling $153,800 with Smart Management Services without either justifying how the procurement met the terms of an emergency procurement, as the District's procurement regulations require, or obtaining the Authority's approval. The Authority's review and approval regulations for District contracts require that all sole source contracts and modifications issued under the direction of the District's CPO be submitted to the Authority for review and approval prior to award. In addition, the District's CPO did not comply with District procurement regulations when he modified the purchase order agreement three times to increase the scope of services and costs. The District's procurement regulations state that contracts done on an emergency basis are not to be modified to expand the scope or extend the time of the procurement unless a limited number of additional services are needed to satisfy an ongoing emergency requirement.
The contract file for the second contract, which was also awarded to Smart Management Services 4 months after the first emergency sole source contract, did not contain evidence that the District's CPO justified how the procurement met the terms of an emergency procurement, as the regulations require, or submitted the $893,000 emergency sole source contract to the Authority for review and approval. The contract required Smart Management Services to provide consultant services to the Authority's former CMO for a 1-year period. The Authority's review and approval regulations for District contracts specifically require that sole source contracts and contracts for consultant services issued by or under the direction of the CPO be submitted to the Authority for review prior to award.

The District's regulations define an emergency procurement as one responding to a situation, such as a flood, epidemic, riot, or other reason set forth in a proclamation by the Mayor, that creates an immediate threat to the public health, welfare, or safety of its citizens. Moreover, under the District's procurement regulations, an emergency procurement is limited to not more than 120 days, and the contracting officer is required to initiate a separate nonemergency procurement if a long-term requirement for services is anticipated.

In comments on a draft of this report, the District's CPO said that the draft report incorrectly links the term "emergency" to the regulatory context of fire, flood, or endangerment to public health when no such context was cited or intended, and he said that the justification was clearly stated in writing. We disagree and believe that the District's procurement regulations, which were used by the District's CPO as the basis for justifying the emergency sole source contract, are specific on what constitutes an "emergency" procurement, as stated in our report. In addition, our draft report states that the written justification did not explain how the procurement met the terms of an emergency procurement as required by the District's regulations.

According to the District's CPO, he was advised by his General Counsel, after consulting with the Authority's Deputy General Counsel and Chief Financial Officer, that the contract did not have to be submitted to the Authority for approval because the contract, which obligated approximately $330,000 during fiscal year 1998, was less than the $500,000 threshold specified in the Authority's February 26, 1998, resolution, which requires District contracts in excess of $500,000 to be submitted for review and approval. We believe that, based on Section 4.1.E of the Authority's review and approval regulations governing District contracts, the District's CPO was required to submit this contract to the Authority for its review and approval. Section 4.1.E states that all proposed sole source contracts awarded by the CPO must be submitted to the Authority prior to award.

In commenting on our finding that the Smart Management Services contract for $893,000 should have been submitted to the Authority for review and approval, the District's CPO commented that the value of the contract was less than the $500,000 approval threshold prevailing at the time and therefore did not require Authority review and approval. This is not consistent with our understanding of the regulations or the value of the contract.
As our report states, our basis for concluding that the District's CPO was required to submit this sole source contract to the Authority for review and approval is Section 4.1.E of the Authority's review and approval regulations for District contracts.

In August 1998, the Authority terminated this contract because it believed that the contract contained several deficiencies. In particular, the Authority stated that the contract was awarded on a sole source basis and that, under federal statutes and Authority resolutions, the Authority should have approved it. The Authority also said that it appears that the principal consultant who performed the main task under the contract was designated as a Deputy Management Officer reporting directly to the CMO and spent most of her time in a staff function. Thus, the Authority concluded that the compensation terms for the principal consultant and the two additional senior consultants were in excess of the levels that could be paid and justified for even the most senior positions in the District government.

This same contract was also the subject of an investigation by the District's Office of Inspector General at the request of the Authority. The Inspector General issued a report on the results of the investigation and concluded that the District's CPO was required to submit the $893,000 emergency sole source contract to the Authority for approval, but failed to do so, and also improperly awarded the contract as an emergency procurement. With regard to the submission of the contract to the Authority for review and approval, the Inspector General considered the Authority's February 26, 1998, resolution to be clear on what types of contracts are required to be submitted to the Authority for review and approval, and we agree. Further, the Inspector General said that the District's procurement regulations have their own very strict definition of an emergency. For example, an emergency includes such conditions as a flood, epidemic, riot, or other reason set forth in a proclamation by the Mayor. As such, the Inspector General concluded that the CPO acted outside the scope of the District's procurement regulations when he awarded the $893,000 contract as an emergency sole source contract because the situation did not constitute an emergency as prescribed in the regulations. However, because the Authority subsequently terminated the contract in August 1998, the Inspector General did not recommend any further action and deferred the issue to the Mayor for final disposition.

Contract administration is an integral part of the procurement process that helps ensure that the government gets what it pays for. It involves those activities performed after a contract has been awarded to determine how well the contractor performed in meeting the requirements of the contract. The Authority's procurement regulations do not contain detailed provisions on contract administration. The regulations state that the Authority plans to monitor contractor performance and certify satisfactory performance prior to payment of any contractor invoice. We saw little or no evidence of how the Authority monitored or certified satisfactory contractor performance for the nine contracts we assessed. According to Authority officials, they relied on the contractors' work statements to monitor the contractors' performance.
However, we found that the statements of work for these nine contracts generally did not contain thorough descriptions of the required services, expected results, and standards for measuring the contractor's performance and effectiveness, as required by the Authority's procurement regulations. For example, we found that three separate firms had contracts with the same statements of work, which required them to "develop and execute strategies for implementing existing management reform and improvement projects and work with, and within agencies to develop an overall operational improvement strategy." Additionally, the work statements for these three contracts did not have standards for measuring the contractor's performance, as required by the Authority's regulations. The development of statements of work is important because they provide a basis for monitoring the contractor's performance to ensure that the contractor has performed satisfactorily and delivered the required goods and services before payment of invoices.

Equally important, for the nine Authority contracts we assessed, the Authority contracted and paid for goods and services totaling $13 million; yet, for three of the nine contracts, there was no evidence that the Authority received the required deliverables. The Authority's contract files contained evidence indicating that it received the required deliverables for four contracts. For two of the five contracts whose files contained no such evidence, we relied on documentation maintained by two of the three contractors we visited to determine whether the contractors provided the required deliverables; those two contractors provided us with copies of their required deliverables, which indicated that they had met the terms of their contracts. Although the contract files for the remaining three contracts contained invoices, there was no evidence that the invoices were always reviewed and approved, and the files did not contain statements that the contractors' performance was satisfactory, making it difficult to determine whether the deliverables were received for these three contracts.

In commenting on a draft of our report, the Authority said that our finding that it lacked a system for contract administration is incorrect and that it has a definitive system that is understood by its staff. While the Authority acknowledges, as our report states, that its procurement regulations contain few provisions concerning contract administration, it did not provide any evidence to support its statement that it has a definitive contract administration system that is understood by its procurement staff. The Authority further states that, under its system, staff are expected to keep the Executive Director and contracting staff informed of any changes, significant problems, and the general status of contract work. This system was not documented in the contract files. To the contrary, we found that, with respect to the Boulware contract, Authority staff entered into a verbal agreement without the Authority's knowledge.

In reference to our statement that we found little evidence of how the Authority monitored or certified satisfactory contractor performance for the nine contracts we assessed, the Authority commented that it has always interpreted the requirement for certification of satisfactory performance in its regulations to mean approval by cognizant Authority personnel of contractor invoices submitted for payment.
The Authority also said that all contractor invoices must be reviewed and approved by the cognizant staff member. We agree that contractors' invoices should be reviewed and approved by appropriate Authority staff prior to payment. However, we also believe that the signing of an invoice authorizing payment does not constitute certification of satisfactory performance as described by the Authority's regulations. In addition, we noted that there were several invoices stamped "paid" with no apparent signature authorizing payment. For example, the Authority payment records provided to us for the Urban Center contract included copies of five checks and invoices paid to the contractor totaling $514,325. For two of the five payment records, where the invoices totaled $140,350, there was no indication on the invoices that they had been reviewed or approved. Two other checks, totaling $250,075, had no invoices to support the amount of or purpose for the payment. We noted that the file contained a document stating that the contract was terminated due to "possible fraudulent invoices." This document was dated after the payment dates of the checks and invoices cited above. In another example, the contract file contained a payment record of an invoice for the Gaebler Group in the amount of $18,073. We noted, however, that the invoice contained in the contract file was not annotated to show that the Authority reviewed the invoice, and there was no signature approving it for payment.

The Authority also disagreed with our finding that the statements of work for the nine contracts we assessed did not contain thorough descriptions of the required services, expected results, and standards for measuring the contractor's performance and effectiveness. As our report clearly states, the Authority's regulations require that statements of work contain thorough descriptions of the required services, expected results, and standards for measuring the contractor's performance and effectiveness. The statements of work for the nine contracts we assessed did not contain such information. The Authority further commented that, with regard to the former CMO contracts, performance-type statements of work were not feasible and that the management task force contracts, in essence, provided a group of personnel with municipal management experience to act as the newly appointed staff of the former CMO. We believe that the situation the Authority described is similar to a personnel situation and do not believe that performance expectations would have been unreasonable.

The Authority commented that, contrary to the statement in the draft report that invoices in the contract files for the Gaebler Group, Management Partners, and the Urban Center were not always reviewed and approved, no invoice was ever paid without approval. We do not state that invoices were paid without approval. We state that there was no evidence in the contract files that the invoices provided by the Authority were always reviewed and approved. We did, as previously pointed out, find instances of invoices stamped "paid" without annotation of approval or written certification of satisfactory contractor performance. Finally, in reference to our statement that there was no evidence in the contract files that the Authority received the required deliverables for three of the nine contracts we assessed, the Authority commented that it has never been the Authority's practice to require that copies of deliverables and invoices be kept in the contract files.
We do not state that copies of deliverables should be maintained in the contract files. However, we believe that a document in the file certifying that the contractor met the terms of the contract and provided the required deliverables is a good procurement practice. The Authority further stated that most of its contracts provide that payment be made after satisfactory delivery of specified deliverables and that it has never made such a payment without receipt of satisfactory work. Our report does not state that the Authority made payments without receipt of satisfactory work. We state that, based on our review of the contract files, there was no evidence in three of the nine contract files we assessed that the Authority received the required deliverables and that we were able to find evidence indicating that the Authority received the deliverables for the other six contracts.

We found no documentation in the contract files that the District's CPO monitored the contractors' performance or received required deliverables for the two contracts that he awarded. The District's procurement regulations state that it is the responsibility of the contracting officer to ensure that the contractor performs in accordance with the terms of the contract before payment of any contractor invoice. In addition, as stated previously, the Authority transferred the Managing Total Performance contract to the District's CPO for administration. District officials told us that they have an individual who is responsible for monitoring the contractor's performance to ensure that the terms of the contract are met before payment of invoices. However, there was no evidence in the contract file to substantiate this assertion. In commenting on a draft of this report, the District's CPO said that the draft report incorrectly states that contract administration was the responsibility of the District's Office of Contracting and Procurement. As our report states, according to the District's procurement regulations, it is the responsibility of the contracting officer to ensure that the contractor performs in accordance with the terms of the contract before payment of any contractor invoice.

Several factors appear to have contributed to the Authority's contracting problems. The Authority's former Executive Director attributes the contracting problems to the short period in which the Authority had to carry out its "massive and formidable" tasks. We do not believe that the existence of statutory time frames should exempt the Authority from fully complying with its procurement regulations. In its January 1999 report, DSIC, which reviewed over 100 Authority contracts awarded between August 1995 and September 1998, said that the Authority generally followed its streamlined procurement regulations. However, DSIC also identified some of the same problems we did. DSIC attributes the Authority's contracting problems, in part, to the Authority's emphasis on achieving its programmatic mission in a short time period and its lack of procurement expertise. DSIC also identified such problems as the lack of independent cost estimates, the lack of documentation of the analysis underlying the Authority's declarations of fair and reasonable prices for modifications and sole source contracts, inadequate training for contracting staff, and the lack of documentation in the contract files.
In its written comments on a draft of this report, the Authority points out that DSIC, the consultant firm retained by the Authority, concluded in its report that the Authority generally followed its procurement regulations. We acknowledge this in our report. However, we believe that it is equally important to point out that, although DSIC's report contained many examples of the problems it found with the Authority's procurement practices, the report did not explain the basis for the statement that the Authority generally followed its streamlined procurement regulations for all the contracts reviewed. The report was unclear as to whether this conclusion applied to all of the 109 contracts or some portion of them. In addition, DSIC officials were not able to provide any documentation to support this statement.

DSIC made several recommendations to the Authority to address the problems it identified and said in its January 1999 report that the Authority had begun to act on them. In a January 13, 1999, letter to DSIC, the Authority stated that it would begin developing cost estimates of the hours needed to perform required services; had assigned a procurement specialist to maintain its contract files and use a standardized contract file folder and checklist to maintain accountability; had established an informal 3-week minimum response time for all its solicitations to encourage competition resulting in lower costs; and would continue to make resources available to incorporate education and training for all staff involved in its contracting activities. We believe that, if effectively implemented, the actions the Authority says it has taken and plans to take should help correct some of the problems that both DSIC and we identified.

In addition, we believe other factors that were not addressed by DSIC's recommendations may have contributed to the failure of Authority staff to follow the procurement regulations. First, while the Authority's Executive Director delegated contracting responsibilities to various members of the Authority's staff, he had not fully defined areas of responsibility and accountability among the contracting staff. For example, while the Authority's former Executive Director signed the contracts as the Contracting Officer, it was not always apparent who was responsible for ensuring that key contract award and administration decisions were documented and maintained in the contract files. In its comments on a draft of our report, the Authority disagreed with our statement that its Executive Director had not fully defined the areas of responsibility and accountability among the contracting staff and that it was not always apparent who was responsible for ensuring that key contract award and administration decisions were documented and maintained in the contract files. The Authority said that members of its professional staff have always been fully aware of their contracting responsibilities. Our report points out that there was no documentation in the contract files to show who was responsible for contract administration, and the Authority did not provide any additional information with its written comments. Second, the Authority had not provided its contracting staff with guidance on how to implement its procurement regulations to ensure compliance.
For example, the Authority's regulations state that they are intended to permit the Authority to award contracts based on least cost or best value, and they require that statements of work contain performance standards, that contractors' performance be monitored, and that certification be provided that the contractor performed satisfactorily. However, the Authority had not issued guidance to its contracting staff on how these requirements are to be implemented to comply with the procurement regulations. Equally important, the Authority had not provided its contracting staff with guidance for awarding and administering those procurement actions not specifically covered by its regulations, such as contracts between $100,000 and $500,000, or for executing contract modifications or contract options. In its comments on a draft of this report, the Authority disagreed with our statement that it had not provided its contracting staff with guidance on how to implement its procurement regulations. Our report states that we found no written guidance on how the Authority's staff was to implement its procurement regulations. In addition, when we asked the Authority for supporting documentation, none was provided. DSIC also found this to be a problem and recommended that the Authority improve its procurement process by providing standardized procedures on how to implement its procurement regulations.

Finally, the lack of specific requirements in the Authority's procurement regulations for all of its contracting activities appeared to have contributed to the problems that we found with the Authority's procurement practices. For example, the regulations do not specify the procedures that should be followed for awarding contracts between $100,000 and $500,000 or for executing contract modifications and contract options. In addition, there was no evidence in the contract files we reviewed that the Executive Director determined the type of procurement method—that is, simplified or formal—that should be applied to the contracting situations described above. Regarding the two contracts awarded by the District's CPO without the Authority's approval, we did not determine whether the Authority had an adequate mechanism for ensuring that such contracts are submitted to the Authority for review and approval prior to award.

The Authority was established essentially to repair the District's failing financial condition and to improve the effectiveness of its various entities. We recognize that, as the Authority has pointed out, it was a newly established organization and was expected to accomplish the majority of its tasks in a relatively short period of time, and thus it had to award many contracts quickly. However, we believe that it was also important for the Authority to lead by example by better adhering to its own regulations, ensuring accountability and integrity, and not following the same types of practices that it was established to correct in the District. We also recognize that any new organization is bound to experience start-up difficulties and take some time to operate effectively. However, the majority of the Authority's contract actions that we reviewed were awarded almost 3 years after the Authority was established. We believe that this was sufficient time to expect an effective procurement operation that follows its own requirements and provides assurance that the objectives of those requirements are met.
The actions that the Authority says it has taken or plans to take based on DSIC's report, if effectively implemented, should help correct some of the problems both DSIC and we identified. However, we do not believe that these actions are likely to fully resolve the problems we found. They do not fully address our findings that the Authority did not fully define the roles and responsibilities of its procurement staff or provide guidance to its staff on how to (1) determine best value; (2) develop performance standards for work statements; (3) monitor contractors' performance and certify satisfactory performance; (4) document its basis for contractor selection and justify sole source awards in writing; and (5) award and administer those procurement actions not specifically covered by its regulations, such as contracts between $100,000 and $500,000, contract modifications, and contract options.

Perhaps even more importantly, we do not believe that the Executive Director's positions on waiving the Authority's regulations, certifying satisfactory performance, and extending and modifying an expired contract reflect sound contracting principles. We believe that, in accordance with good procurement practices, any waivers by the Executive Director of the Authority's contracting regulations should be justified and made in writing; the basis for contract award should be documented, particularly when the selected source differs from the source recommended by the technical evaluation panel; contract files should contain a written certification, signed by an appropriate official, stating that the contractor's performance was or was not satisfactory; and all contract extensions should be in writing, with expired contracts neither modified nor extended.

We did not determine whether the Authority had processes or controls to ensure that its review and approval regulations governing the submission of District contracts were being followed. However, it was apparent that the two contracts we reviewed that were awarded by the District's CPO were awarded without being reviewed and approved by the Authority as required by the Authority's regulations governing District contracts.
To improve its contracting operations, we recommend that the Chair of the Authority

- require the Executive Director to (1) approve and justify all waivers of Authority contracting regulations in writing, (2) extend contracts only in writing and not extend or modify expired contracts, and (3) include in contract files a written certification, signed by an appropriate official, stating that the contractor's performance was or was not satisfactory;

- direct the Executive Director to (1) fully define the roles and responsibilities of the Authority's procurement staff; (2) prepare a written plan for contracting that includes methods for ensuring compliance with the procurement regulations; and (3) provide guidance to the procurement staff on such areas as determining best value, developing performance standards for work statements, monitoring and certifying contractors' performance, preparing written justifications for sole source awards, documenting the basis for contract selection, awarding contracts between $100,000 and $500,000, and executing contract modifications and contract options;

- hold the Executive Director and other procurement staff accountable for ensuring that they follow the Authority's procurement regulations; and

- require the Executive Director to assess whether the Authority's processes and controls for the review and approval of District contracts prior to award are effective and, if not, make appropriate changes.

On July 21, 1999, the Authority's Executive Director provided written comments on a draft of this report; these comments are reprinted in appendix III. Although the Authority said it would seriously consider our proposed recommendations and recognized that its procurement practices have not been perfect, it expressed concern about and disagreement with portions of the draft that pertained to the contracts it awarded. The Authority did not provide any additional documentation with its written comments.

The Authority said that the 10 contracts we reviewed were not a representative sample and that 5 in particular were not typical of Authority contracts in general. Our report does not suggest that the contracts we reviewed were selected randomly. To the contrary, our report describes in detail how we selected the contracts we reviewed and discusses the circumstances surrounding the award of the five contracts awarded on behalf of the CMO that the Authority says are not representative of how it carries out its contracting function. Our report states that DSIC, the Authority's contractor, did identify some of the same problems we did, but we do not state that these problems are representative of all Authority contracts.

The Authority also commented that the draft report assumed that its regulations applied to all 10 of the contracts we reviewed. Our report does not state that the Authority's regulations for formal contracting apply to all nine of the Authority's contracts we assessed for compliance. However, we agree that our report was not as clear as it could have been in this regard, and we clarified it to the extent we could, given that the Authority had not specified what requirements applied to contracts between $100,000 and $500,000 or to contract modifications or options.
Finally, the Authority disagreed with several of our interpretations and applications of its regulations and believes that its procurement regulations, and how the Authority interprets or implements them, are generally adequate and appropriate in light of its situation. We continue to believe that our interpretation and application of Authority regulations are generally appropriate and that the manner in which the Authority has applied its regulations and has conducted its contracting activities in some instances is not consistent with sound contracting principles or practices. In particular, we believe that the Authority's views regarding waivers of its regulations, certification of satisfactory contractor performance, and the extension and modification of expired contracts may prevent the Authority from meeting its contracting objectives and do not provide adequate internal controls to prevent abuses from occurring. Another problem is the lack of clarity as to what requirements apply to contracts between $100,000 and $500,000. The Authority's comments on issues where it disagrees with us, and our assessment of those comments, are discussed as appropriate in the body of the report. We also made specific technical changes to clarify our report based on suggestions by the Authority. Finally, we have made additional recommendations to the Authority to address our concerns in certain areas. On July 12, 1999, the District's CPO provided comments on a draft of this report. He disagreed with our findings with respect to the contracts he awarded. We believe that our findings are well documented and are correct. His specific comments and our responses are discussed in the appropriate sections of our report. We are sending copies of this report to Senator Kay Bailey Hutchison, Senator Richard J. Durbin, Representative James P. Moran, and Representative Eleanor Holmes Norton in their capacities as Chair or Ranking Minority Member of Senate and House Subcommittees. We are also sending copies to the Honorable Anthony A. Williams, Mayor, District of Columbia; Ms. Alice Rivlin, Chair, District of Columbia Financial Responsibility and Management Assistance Authority; and other interested parties. Copies will also be made available to others upon request. GAO contacts and staff acknowledgments are listed in appendix IV. If you have any questions, please call me or Tammy R. Conquest at (202) 512-8387. [Table: contracts GAO reviewed, showing each contract's purpose (e.g., establish a management task force; audit FY96 financial statements (internal controls)) and contract type (e.g., labor-hour rate and firm fixed price consultant services).] With the exception of the 2 Smart Management Services contracts, which were awarded by the District's CPO, the Authority awarded the other 10 contracts. This contract expired on December 4, 1997, because the Authority did not exercise its option. In addition, between July 8, 1998, and September 10, 1998, the District CPO, on behalf of the Authority, modified the expired contract 14 times, thus, in effect, awarding new sole source contracts. The modifications ranged in price from $39,460 to $5,250,000. As previously stated, 1 of the 10 contracts we reviewed was awarded before the Authority's regulations were adopted in March 1996. However, the Authority did not provide us with any information on what regulations, if any, it used to award this contract. This contract was awarded to Thompson, Cobb, Bazilio and Associates on October 18, 1995, for $23,392. 
Thompson, Cobb, Bazilio and Associates was contracted to audit the Authority's financial statements. The contract provided for a base period and the option to renew the contract for two additional years. During our review of the Thompson, Cobb, Bazilio and Associates contract, we found that the basis for contractor selection was not documented in the contract file, nor was there a copy of the request for proposal. In response to our request for documentation on its basis for selecting Thompson, Cobb, Bazilio and Associates, the Authority stated that this contract was an open market solicitation, meaning that only those firms that requested it were mailed a copy of the solicitation. In response to our follow-up request, the Authority stated that Thompson, Cobb, Bazilio and Associates submitted the only proposal received in response to the advertised solicitation. The Authority also said that the Executive Director's decision to award this contract was based on the firm's technical and cost proposals, the recommendations of members of his staff who handled the procurement, and his personal knowledge and experience with the firm. The Authority also exercised three options over the 2-year term of this contract. Although the base contract required the Authority to negotiate the terms and conditions of these options, the contract file did not contain documentation that the Authority negotiated the terms and conditions for two of the options exercised. In addition, the file did not show that the Authority prepared a contract modification to exercise these two options. A confirmation letter from the contractor was the only evidence in the contract file that the Authority exercised the first two contract options. However, for option 3, the contract file contained a follow-on contract signed by the Authority that included the terms and conditions for this option as required by the base contract. Although we did not find evidence that the Authority monitored or certified that the contractor performed satisfactorily, we noted that the contract file contained evidence that the Authority received the required deliverables for the base contract and the three options. In addition to those named above, Geraldine Beard, Alan Belkin, John Brosnan, William Chatlos, Bruce Goddard, and Seth Taylor also made key contributions to this report. 
Pursuant to a congressional request, GAO reviewed the procurement practices of the District of Columbia Financial Responsibility and Management Assistance Authority, focusing on whether: (1) applicable procurement regulations and procedures were followed in awarding and administering selected contracts awarded on behalf of the Authority's former Chief Management Officer and to Thompson, Cobb, Bazilio and Associates; and (2) the Authority and the District received the goods and services that they contracted and paid for in the contracts that GAO reviewed. GAO noted that: (1) the Authority did not always comply with its procurement regulations and procedures or follow sound contracting principles when it awarded and administered the 9 contracts that GAO assessed; (2) the Authority's contract files for these contracts were incomplete; (3) the files did not generally contain documentation of the key contract award and administration decisions as required by the Authority's procurement regulations; (4) as a result of the incomplete contract files, the Authority could not demonstrate that its objectives of: (a) acquiring goods and services at the lowest price or best value; and (b) treating offerors fairly were achieved for several of the contracts GAO reviewed; (5) the Authority's procurement regulations provide for a preference for competitively awarded contracts and require written justification and approval of sole source contracts; (6) the Authority's contract files contained evidence that it sought competition for 7 of the 9 contracts assessed; (7) however, contrary to regulations, the Authority did not: (a) document its basis for contract selection for 3 contracts; (b) include written justification for 1 sole source contract award or a series of modifications to another contract that, in effect, was a sole source award; or (c) comply with other requirements in several cases; (8) none of the contract files for the 9 contracts assessed contained certification or any other evidence that the contractor performed satisfactorily prior to payment of invoices; (9) for the 2 emergency sole source contracts awarded by the District government, the District's Chief Procurement Officer did not comply with the Authority's contract review and approval regulations governing District contracts or procurement regulations; (10) several factors appeared to contribute to the Authority's failure to comply with its procurement regulations; and (11) according to the Authority's former Executive Director, the magnitude of the tasks and the short timeframe in which the Authority had to complete them contributed to the Authority's procurements not being as "tidy" as the Authority would have liked. 
Oversight of federally insured state-chartered banks is provided by state bank regulators and either the Federal Reserve System—for banks that are members of the Federal Reserve—or the Federal Deposit Insurance Corporation (FDIC)—for other state-chartered banks. National bank oversight is provided by the Office of the Comptroller of the Currency (OCC). As the deposit insurer, FDIC has back-up oversight authority for all FDIC-insured banks. This authority allows FDIC to examine potentially troubled institutions and take enforcement actions, even when it is not the institution's primary regulator. In addition to its authority over state-chartered member banks, the Federal Reserve oversees all BHCs. In accordance with a variety of federal laws and regulations, banks routinely provide federal bank regulators with reports containing information about their deposit and lending activities. These reports include the following: a quarterly financial report (call report), which is submitted to the bank's primary federal regulator; an annual independent audit report (for banks with $500 million or more in assets), which is submitted to FDIC and relevant federal and state bank regulators; an annual summary of deposits report for each branch, which is submitted to FDIC; a statement of amounts required to be held as reserves, which is submitted to the Federal Reserve; and an annual report on home mortgage lending (for banks that originate, purchase, or receive applications for home purchase and home improvement loans and that have assets greater than $28 million in 1997), which is submitted to the bank's primary federal regulator. In addition, as of January 1997, revisions to the Community Reinvestment Act (CRA) interagency regulations require banks that have assets of $250 million or more, or banks that are affiliates of a BHC with assets of $1 billion or more, to report some new data to their regulators. These banks are required to annually report, by geographic location, the aggregate number and aggregate amount of small business and small farm loans originated or purchased, and the aggregate number and aggregate amount of community development loans originated or purchased. BHCs are also required to submit to the Federal Reserve quarterly financial reports (Y-9 reports) on the consolidated activities of their bank and nonbank subsidiaries. Federal bank regulators, along with other agencies, typically use the lending and deposit information gathered in these reports and special purpose reviews to carry out their oversight responsibilities. Congress obtains information through a variety of means, including directly from bank regulators and from the legislative support agencies, including us, the Congressional Research Service (CRS), and the Congressional Budget Office (CBO). These support agencies, in turn, use the information gathered in these banking reports, along with other sources, to do various analyses for Congress. Parties other than federal regulators, such as industry analysts and community organizations, may also use call reports and Y-9 reports (both of which are publicly available) to produce state, regional, and national summaries of the types and overall dollar amounts of loans and deposits held by banks and BHCs. These parties also frequently use home mortgage-related lending reports to assess the availability of credit to various groups within a geographical area, such as a state. 
Because a state was historically the largest area within which a bank could expand, information collected at the bank level has been used by such parties to approximate bank loan and deposit activity within a state. To determine what information regulators collect from banks, we reviewed the laws and regulations pertaining to the requirements for banks and BHCs to report data on bank activities (focusing on loans and deposits). These laws and regulations consisted primarily of those authorizing federal bank regulators to conduct examinations, collect financial statement data, collect bank deposit information, and encourage banks to provide credit to the communities in which they operate. In addition, we obtained regulators' and others' views about whether interstate branching would pose new or different needs for information. We concentrated our review on information that is currently collected from banks and BHCs. We did not conduct an independent analysis to identify all of the information that regulators and Congress may need to execute their regulatory and oversight responsibilities. To obtain views on the effect that Riegle-Neal is likely to have on the usefulness of reported loan and deposit data, we held discussions with staff members at the Board of Governors of the Federal Reserve System, the Federal Reserve Bank of Dallas, and headquarters and field offices of FDIC and OCC. We also spoke with staff members at CBO and CRS in their roles as users of data for congressional oversight. In addition, we interviewed representatives of several community organizations (the National Community Reinvestment Coalition, the Center for Community Change, and the Association of Community Organizations for Reform Now). We did not attempt to identify all users of reported loan and deposit data. To determine whether there would likely be a material loss of information important to regulatory and congressional oversight of banks, we reviewed call, Y-9, Summary of Deposits, Home Mortgage Disclosure Act (HMDA) loan application register, and required reserve reports collected by the regulators pursuant to laws and regulations. We also reviewed the loan and deposit data the regulators make available to us as the investigative arm of Congress. We then reviewed in greater detail the loan and deposit information that regulators summarized by state, region, and nationwide. We conducted our work between March 1994 and November 1996 in Dallas and Washington, D.C. We provided a draft of this report to the heads of the Federal Reserve, FDIC, and OCC for their review and comment. We also provided the community organizations and other parties we contacted with the opportunity to comment on portions of the draft report that we attributed to them. The comments we received are discussed and evaluated on pages 12 to 14, and the written comments are reprinted in appendixes I and II. Our work was done in accordance with generally accepted government auditing standards. Regulators collect a variety of information about bank loan and deposit activities through reports filed by banks and BHCs. These reporting requirements were not affected by Riegle-Neal. In table 1, we briefly describe the loan reports and the information collected from them. Call reports and Y-9 reports are the primary sources of data that banks and BHCs provide to regulators. Both reports contain a summary of the entity's loan portfolio categorized by type of loan (e.g., real estate or consumer). 
The HMDA loan application register and the new CRA report on small business and small farm lending are designed to collect data, by geographic location, on specific categories of bank loans to assist the regulators in enforcing the federal fair lending laws. Unlike data from call reports and Y-9 reports, data from these reports are collected to assess a bank's compliance with federal fair lending laws and to assess the bank's performance in meeting the credit needs of its local community. In addition, the HMDA data are submitted only by banks engaged in originating home mortgage loans; banks that merely purchase loans are not required to submit HMDA data. In table 2, we describe the reports that banks and BHCs use to provide regulators with information about their deposits. Call reports again provide the greatest detail about a bank's total deposits because they summarize those deposits by type (e.g., demand deposits). The Summary of Deposits report provides the most comprehensive information on bank deposits by location, but it only provides information on total bank deposits based on the branch in which the account is located. Additionally, these data are collected only yearly. The Required Reserve report provides more limited information on bank deposits. Although regulators and other interested parties have used call report data to produce state, regional, and national summaries of the types and overall dollar amounts of loans and deposits held by banks, the data reported have always had limitations in their ability to provide information about the geographical location of banking activity. In measuring loan activity, limitations have existed because the data used to compile call reports do not explicitly identify the geographic location of the borrower or the project being funded. As a result, questions exist as to how appropriate it has ever been to assume that the loans held by a bank were made (1) by a banking entity located in the same state in which the bank reporting the loan was chartered or (2) to a party living or doing business in the state where the bank reporting the loan was chartered. These limitations could become more apparent and more widespread once Riegle-Neal is implemented, since the activities reported by banks with interstate operations will clearly include activities in a number of states. According to regulatory officials, loan data reported in the call reports and Y-9 reports do not represent total bank lending in a particular state or region for the following reasons: A significant percentage of a bank's mortgage loans are sold in secondary markets through such entities as the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac). Banks sometimes transfer all, or a portion, of a loan to an affiliated bank or sell loans to unaffiliated banks. Banks make such transfers to diversify portfolios and to ensure compliance with legal lending limits. In addition, some BHCs have their bank subsidiaries transfer all loans of a certain type to one bank to better serve customers and reduce operating expenses. Banks that serve a multistate market (e.g., the metropolitan area of Washington, D.C.) may directly lend to out-of-state customers. Therefore, if a study were trying to determine the amount of loans made by banks to borrowers in a state or region, call report data alone, at least as currently collected and reported, could not answer the question. 
Researchers interested in studying the geographic distribution of loans noted such limitations before Riegle-Neal was considered. For such studies, data from the HMDA loan application register, and presumably the data to be collected on small business and small farm loans, may be more useful since they provide specific geographic information on borrowers. However, similar geographic information on a bank's entire loan portfolio is not available from these sources. Call report data on deposits do not identify the location of a bank's depositors, just as the loan data do not identify the location of a bank's borrowers. The Required Reserve report serves a specific bank oversight function, as previously described, and is not suited to providing detailed information about the types of deposits or the location of depositors. The most detailed information on the location of depositors is provided by the Summary of Deposits report. This report is the only one that identifies a bank's deposits by branch. However, regulators pointed out that even the Summary of Deposits report contains inherent limitations regarding the origin of a bank's deposits. For example, banks may purchase deposits in the national market; in this case, the reporting branch need not reflect either the depositor's home or business location. Therefore, while a state-by-state analysis of a bank's Summary of Deposits report identifies where deposits exist, it does not necessarily identify the location of the depositor and, thus, the location from which the funds come. In addition, unlike call report data, which are collected quarterly, the data in the Summary of Deposits report are collected yearly. To the extent that interstate branching becomes prevalent, the usefulness of information reported to bank regulators, which is currently used to compile banking data on a state-by-state basis, would become even more limited. If BHCs consolidate their operations by merging multistate banking operations or if banks expand across state lines by opening or acquiring branches, call report information will increasingly encompass the loans and deposits of more than one state. Therefore, although the data collected will not change, the geographical information content of the data is likely to become less useful because the data are collected at the bank level rather than the branch level. While the usefulness of data collected at the bank level to provide information for state-by-state measures of banking activity—including monitoring the industry's geographic concentrations—may be affected by Riegle-Neal, it is unlikely to have a material effect on federal regulation or oversight for three reasons. First, as previously mentioned, the data reported on call reports have always had limitations from the standpoint of imparting geographic information about bank loans. Second, deposit data should continue to be provided at the branch level and, with the limitations noted, should provide some measure of state-by-state banking activity. Third, the most useful and detailed information about bank activities is attained through examinations. Regulators with primary supervisory responsibility still have this tool available, although those who rely solely on off-site information will not. Regulators use the information described in the previously mentioned reports to perform various off-site analyses of banks and BHCs, including (1) financial statements and financial trends, (2) fair lending practices, and (3) market concentrations of deposits. 
Additionally, bank regulators use call reports and Y-9 reports to assist them in planning, scoping, and conducting safety and soundness examinations or inspections, respectively. Data from these reports provide regulators with financial information about the institutions' activities and reported financial conditions. Analyses of the data provide insights about the institutions over time and in comparison with other institutions. To a lesser degree, regulators use the annual independent audit reports in planning safety and soundness examinations for those institutions required to have annual independent audits. Bank regulators are responsible for assessing compliance with various fair lending and consumer protection laws, including the CRA, and they rely, in part, on annual home mortgage-related lending reports to plan, scope, and conduct their compliance and CRA examinations. Likewise, the new small business and small farm loan report is likely to be used in those examinations. The other deposit and reserve reports are not routinely used by regulators in discharging their examination responsibilities, although the related information may be made available to them upon request. These reports are used primarily by FDIC and the Federal Reserve in monitoring institutions' deposit and reserve activities to assess insurance premiums and to determine that banks are maintaining the proper amount of reserves, respectively. When considering banks' applications for mergers and acquisitions, bank regulators and the Department of Justice also use the various reports—particularly the Summary of Deposits report and the home mortgage loan report—to assess any antitrust or fair lending implications. With respect to their antitrust review, bank regulators and Justice officials typically look to see if the new banking entity could create an undue concentration of loan or deposit activities in a particular market, which could impede fair and open competition among institutions. Regulatory staff told us that, although the data collected in the various reports are essential to effective off-site monitoring, regulatory actions are rarely, if ever, premised solely upon this information. Off-site information is to be supplemented by on-site examinations or visitations. For example, call reports, which are the most comprehensive and frequently used sources of publicly available information, typically provide regulators with indicators about an institution's activities and condition. However, the call reports must be supplemented with more detailed and explicit information about the institution's deposits, lending, and other investment activities. Similarly, the annual home mortgage loan report is used by the bank regulators as an initial indicator of a bank's performance under the fair lending and CRA laws and regulations, but assessments of the bank's lending practices involve detailed analyses and generally are supplemented by on-site examinations. Regulators recognize that call reports, as well as the other reports, can only provide indicators of an institution's activities and must be supplemented through examinations. While bank supervisors use call report data primarily for planning their on-site examinations, FDIC staff members told us that they use these data in exercising FDIC's back-up oversight authority. 
In the past, FDIC staff members have analyzed call report data to identify patterns or trends in industry activity or within geographic areas, particularly those that may indicate a problem that could affect industry stability. Their research is important in identifying historical patterns or trends that can be used to project or anticipate potential bank losses, failures, or crises. FDIC staff members expressed concern that call report data are increasingly becoming less useful for these purposes as consolidation occurs, and they are concerned about further deterioration in the data's usefulness after Riegle-Neal is implemented. FDIC staff members are considering recommendations to change the call reports to require banks to report their loan and deposit activity by state. Representatives from financial institutions and industry trade groups told us that, on the basis of their past experience, they did not believe that interstate branching would materially affect the usefulness, for regulation or oversight purposes, of lending and deposit information currently collected by federal regulators. Specifically, none of these representatives thought that interstate branching would necessitate that federal bank regulators collect additional data to conduct CRA examinations. They pointed out that Riegle-Neal expands the CRA examination process to require separate state-by-state written evaluations, including a rating, for banks with interstate branches. The act also requires that separate written evaluations, including a rating, be prepared for branches located in multistate metropolitan areas. Finally, officials at the federal bank regulatory agencies stated that section 109 of Riegle-Neal requires their agencies to promulgate uniform regulations by June 1997 that prohibit banks with interstate branch networks from using their out-of-state branches simply to operate as deposit production offices (i.e., as offices that take deposits but do not make loans in their communities). On March 12, 1997, the agencies released for comment a proposal setting forth such regulations. Moreover, at least 1 year after a bank establishes or acquires an interstate branch(es), the appropriate federal banking agency is to determine whether the bank is operating the branch(es) as a deposit production office. Representatives from consumer and community organizations did not necessarily believe that a material loss of information would result from interstate banking. However, they stated that to ensure there is no material loss of information necessary to oversee bank activities in an interstate branching environment, banks should be required to submit information on the origin of their loans and deposits. Some representatives suggested that this requirement should take the form of having banks submit call report data for each state in which they operate. In general, the representatives believed that regulators and Congress would be better able to carry out their regulatory and oversight functions if banks were required to submit information on loans by branch as they are required to do for deposits. They also pointed out that such data, by branch, would make it easier for their groups to monitor bank lending activities. As previously noted, many of these organizations had expressed similar concerns about the usefulness of call report data before Riegle-Neal was even considered because information regarding the geographical distribution of loans is one of the groups' particular concerns. 
Therefore, the implementation of Riegle-Neal did not give rise to their concern but heightens it. We provided a draft of this report to the Chairman of the Board of Governors, Federal Reserve System; the Chairman of the Federal Deposit Insurance Corporation; and the Comptroller of the Currency for their review and comment. We also provided the community organizations we contacted with the opportunity to review and comment on a draft of this report. The Federal Reserve and the community organizations did not offer any comments on the draft report. However, the Comptroller of the Currency provided comments in a letter dated February 6, 1997, and the FDIC Chairman commented in a January 27, 1997, letter. The comment letters are reprinted in appendixes I and II. OCC generally agreed with our conclusions, especially given the call report limitations we described. OCC stated that it understood the potential value of more precise geographic information for researching and monitoring regional trends and the relationship between regional economic conditions and bank performance. However, OCC also recognized that reporting is not without its burdens and that proposals to increase reporting requirements must be considered carefully. FDIC expressed some concern with the draft report's conclusion, but did not disagree that the implementation of Riegle-Neal in and of itself will not cause a material loss of information. FDIC pointed out that for the last decade banks have expanded their lending beyond traditional geographic boundaries and that, to the extent this trend continues, the usefulness of institution-level data will continue to erode. FDIC's primary concern was that, in its view, our conclusion does not place sufficient emphasis on the effects that interstate branching will have in accelerating this trend and eventually leading to what FDIC considers a material loss of information it uses for statistical and economic studies that assist FDIC in fulfilling its responsibilities. FDIC believes that its need for the geographic data being lost is greatest for large institutions that FDIC insures but does not supervise because these institutions are more likely to have lending exposures outside of their home states. Given FDIC's unique role and responsibility as deposit insurer, it believes the ongoing loss of geographic data is material to FDIC. In addition, FDIC believes that call report data are the best source of aggregate data, while on-site examinations are less useful for this purpose. However, FDIC acknowledges that, from a cost-benefit perspective, there is a question about what kinds and how much additional data could be justifiably collected—either in call reports or other regulatory reports—that would permit more effective off-site monitoring. FDIC also suggested that our conclusion was in conflict with our report on the bank oversight structure because in that report, we encouraged the use of off-site monitoring to better target and plan on-site examinations. FDIC believes our position in this current report (i.e., that the best institution-level information is available through on-site examination) contradicts our previous position. We understand why FDIC places more emphasis than other regulators on the effect Riegle-Neal may have in eroding the geographic content of call report data, given FDIC's responsibility to monitor institutions that it does not directly supervise. 
As deposit insurer, FDIC may have unique research-based information needs that other federal bank regulators do not have. However, while FDIC may need this type of information, we agree with both FDIC and OCC that this need must be balanced against the burdens additional reporting requirements could impose on the industry. Collectively, the bank regulators are in the best position to make such cost-benefit determinations. We do not believe our position in this report contradicts our earlier position on the value of off-site monitoring. Off-site monitoring provides regulators with useful indicators about a bank's activities and performance that are generally further analyzed through on-site examinations involving the review of more specific information. Regulators are not precluded from requesting information from banks, beyond the information that is reflected in call reports, to enhance their off-site monitoring as well as decisions about on-site examinations. We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the Federal Deposit Insurance Corporation, the Comptroller of the Currency, other members of the banking committees, other interested congressional committees, and other interested parties. We will also make copies available to others upon request. This report was prepared under the direction of Mark Gillen, Assistant Director, Financial Institutions and Markets Issues. Major contributors to this report are listed in appendix III. If there are any questions about this report, please contact me at (202) 512-8678. The following is GAO's comment on the Federal Deposit Insurance Corporation's letter dated January 27, 1997. Text was added to eliminate confusion about when FDIC must produce regulations and make determinations about deposit production offices. Jeanne Barger, Issue Area Manager; John V. Kelly, Evaluator-in-Charge. Pursuant to a legislative requirement, GAO reviewed how the interstate branching provisions of the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 are likely to affect the usefulness of the deposit and loan data collected and reported to federal regulators by the banking industry under statutory and regulatory requirements, focusing on whether modifications to such data requirements would help to ensure that the implementation of the act's interstate branching provisions does not result in material loss of information important to regulatory and congressional oversight of banks. 
GAO noted that: (1) to the extent that interstate branching becomes prevalent, call report data, as currently collected and reported, will become less useful for approximating bank loan and deposit activity within a state; (2) as bank holding companies (BHC) consolidate by merging multistate banking operations and as banks expand across state lines by opening or acquiring branches, call report information reported at the bank level will increasingly encompass the loans and deposits from more than one state; (3) however, accurately measuring loan and deposit activity by state was subject to limitations even before Riegle-Neal; (4) BHCs had already begun establishing interstate operations and creating regional booking centers for some of their activities and national markets have developed for certain bank products; (5) compared with the information that existed before it was enacted, the implementation of Riegle-Neal is unlikely to result in a material loss of information necessary to perform regulatory and congressional oversight for three reasons; (6) first, as previously mentioned, the usefulness of call report data to approximate bank loan or deposit activities within a state was already somewhat limited and has become increasingly so, but only in part due to Riegle-Neal; (7) second, sources of information collected at the branch level or by geographic location should not be affected by interstate branching; (8) for example, summary of deposits data should still be available to measure deposit activities that are booked in a particular state, although these data will not provide information on the geographic source of those deposits; (9) also, home mortgage loan data should be available as an indicator of mortgage loan activity in a geographic area; (10) finally, the most useful and detailed information about bank activities is attained through examinations; (11) regulators with primary supervisory responsibility still have this tool available, although those who rely solely on off-site information will not; and (12) for these reasons, at this time, there does not appear to be sufficient need to modify regulatory or statutory reporting requirements. 
Colombia is the world’s leading producer and distributor of cocaine and a major source of heroin consumed in the United States. For the past two decades, the United States has supported Colombia’s efforts to reduce drug-trafficking activities and to stem the flow of illegal drugs to the United States. Various U.S. agencies, including the Departments of State and Defense and the Drug Enforcement Administration (DEA), are responsible for programs through which counternarcotics assistance is provided to Colombian police and military units. From fiscal year 1990 through fiscal year 1997, the United States provided or planned to provide these units with assistance worth approximately $731 million. The United States has supported counternarcotics activities in Colombia since the 1970s. Recently, the United States established three major counternarcotics objectives to increase Colombia’s political will and capabilities to (1) destroy major drug-trafficking organizations, (2) reduce the availability of drugs through the eradication of illicit drug crops and enforcement efforts, and (3) strengthen Colombian institutions to enable them to support a full range of interdiction activities. These objectives support the international goals in the U.S. national drug-control strategy. The U.S. Embassy program plan for fiscal years 1996 and 1997 focused on the eradication of illicit drug crops, the interdiction of narcotics and precursor chemicals, justice sector reform, the reduction of money-laundering activities and the seizure of drug-related assets, drug awareness and drug use reduction within Colombia, and infrastructure development. Data provided by the Departments of State and Defense indicated that during fiscal years 1990-97 the United States provided Colombia approximately $731 million in counternarcotics assistance to support Colombia’s eradication and interdiction efforts. As shown in table 1.1, various sources of funding were used to program this assistance. Table 1.2 presents the funding for these programs. The Office of National Drug Control Policy is responsible for developing the President’s national drug control strategy and coordinating the funding of the federal agencies that implement programs to support the strategy. In Colombia, the primary federal agencies involved in counternarcotics programs are the Departments of State and Defense and DEA. Other agencies that implement portions of the U.S. national drug control strategy in Colombia are the U.S. Agency for International Development and various U.S. intelligence agencies. In the State Department, the Assistant Secretary for International Narcotics and Law Enforcement Affairs is responsible for formulating and implementing the international narcotics control policy, coordinating the narcotics control activities of all U.S. agencies overseas, and overseeing the INC Program. In 1996, the State Department incorporated counternarcotics assistance that had been provided from other sources, such as the FMF Program and economic development assistance provided by the U.S. Agency for International Development, into the INC Program. The State Department’s Bureau of International Narcotics and Law Enforcement Affairs manages an air wing program through which it provides funds to support eradication and interdiction operations in several countries, including Colombia. The Bureau has contracted with Dyncorp to provide logistical, operational and training support for these operations. The Narcotics Affairs Section at the U.S. 
Embassy manages the INC Program. The Section provides equipment and training, operational support, and technical assistance and coordinates with Colombian agencies involved in counternarcotics activities. Congress appropriated $230 million for State's worldwide INC program for fiscal year 1998. Of that amount, an estimated $30 million will be used to support counternarcotics activities in Colombia, and an estimated $50 million will be used to provide the Colombian police with new and upgraded helicopters. In the Defense Department, the Coordinator for Drug Enforcement Policy and Support and the Director of the Defense Security Assistance Agency are primarily responsible for planning and providing equipment and training to Colombia's military and law enforcement agencies. The U.S. Southern Command is the Defense Department's principal liaison with Colombia for coordinating the administration of U.S. counternarcotics aid. In Colombia, the Department's aid is primarily managed by the Embassy's U.S. Military Group. The Group's responsibilities include coordinating security assistance programs with the Colombian military and other U.S. agencies involved in counternarcotics operations and monitoring assistance provided to Colombian military units to ensure that it is being used for counternarcotics purposes. DEA is the principal federal agency responsible for coordinating drug enforcement intelligence overseas and conducting all drug enforcement operations. DEA's objectives are to reduce the flow of drugs into the United States through bilateral criminal investigations; collect intelligence on organizations involved in drug-trafficking; and support worldwide narcotics investigations covering such areas as money-laundering, control of chemicals used in the production of illicit narcotics, and other financial operations related to illegal drug activities. DEA also provides training to Colombian law enforcement personnel through State's INC Program. Most U.S. counternarcotics assistance has been used to assist both Colombian National Police units and various military units involved in operations to interdict drugs and eradicate drug crops and to support other Colombian governmental entities that implement money-laundering and asset forfeiture laws and investigate drug-trafficking organizations. The Colombian National Police is the primary organization responsible for interdiction and eradication operations, primarily through its Directorate for Anti-Narcotics. The Colombian armed forces also support counternarcotics activities, primarily in support of police counternarcotics operations. The counternarcotics certification process, as mandated by section 490 of the Foreign Assistance Act, has been a legal requirement since 1986. Congress created the process out of concern that the executive branch was not being tough enough in eliciting cooperation in the antinarcotics effort from countries that either were main sources of illicit drugs or were countries through which drugs transited to the United States. A primary intent of the certification process is to strengthen the political will of a country to combat illegal drug-trafficking activities. 
Section 490 requires the President to certify by March 1 of each year which major drug-producing and transit countries cooperated fully with the United States or took adequate steps on their own to achieve full compliance with the goals and objectives established by the 1988 United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances during the previous year. If a country has not met the statutory objectives, the President can either deny certification and impose sanctions or grant a vital national interests certification, which recognizes that the requirement to use sanctions against a noncooperating country would threaten the vital national interests of the United States. Since the certification process was first established in 1986, the number of nations subject to the certification process has ranged from 24 to 31. The number of nations denied certification has ranged from 3 to 6 annually, and the number of nations granted a vital national interest certification has ranged from 1 to 6. In every year prior to 1995, Colombia received full certification. However, Colombia was granted a vital national interests certification in 1995 and was denied certification in 1996 and 1997. The March 1, 1996, decision to deny certification to Colombia was unique in that it was the first time certification was denied to a major U.S. counternarcotics partner and aid recipient. Mandated sanctions imposed against countries denied certification include the termination of most forms of foreign assistance, and the United States is required to vote against multilateral development bank loans to that country. Types of aid affected include sales and financing under the Arms Export Control Act; nonfood assistance under Public Law 480; financing by the U.S. Export-Import Bank; and most other foreign assistance, with the exception of counternarcotics assistance provided through State's INC Program and humanitarian aid. The President is also authorized to invoke discretionary trade sanctions against decertified countries. These sanctions include the removal of trade preferences under the Andean Trade Preferences Act and the Generalized System of Preferences, the suspension of sugar quotas, tariff penalties, and the curtailment of air transportation arrangements. To determine whether a country is fully cooperating with the United States, the State Department establishes goals that it expects each country to address in meeting U.S. counternarcotics objectives. The goals are provided to each affected country through diplomatic exchanges in either late spring or early summer of the year preceding the certification decision. In 1995 and 1996, the United States provided Colombia with goals it used in assessing whether Colombia would be certified on March 1, 1996, and again on February 28, 1997. In both instances, the President determined that Colombia was not fully cooperating with the United States in meeting U.S. counternarcotics objectives. The President reached his decisions to decertify Colombia primarily because of the following factors: Throughout the Colombian government, corruption undermined the counternarcotics efforts of law enforcement and judicial officials. Colombia did not take sufficient steps to strengthen its prison security. As a result, captured cartel members continued to manage their activities from prison. The Colombian government took no legislative steps to safeguard the confidentiality of U.S.-provided investigative information. 
The Colombian government had not passed legislation to implement the extradition treaty it signed with the United States in 1979 and did not respond to the U.S. request for the extradition of four major drug traffickers. The Colombian government could not reach agreement with the United States on testing and using a more effective granular herbicide to eradicate coca. Over the past 10 years, we have reported on various elements of the U.S. counterdrug effort in Colombia. For example, we reported on major obstacles within Colombia that hinder U.S. antidrug programs. These obstacles include the limited ability of some Colombian agencies to plan and implement an effective counternarcotics strategy, the increasing insurgency and narcoterrorism activities that limit Colombia's ability to maintain a presence in some narcotics-producing and -processing areas of the country, the expansion of drug cartel operations into the production and distribution of heroin, and widespread corruption in the Colombian government. We also reported on various U.S. management problems that hindered the implementation of effective counternarcotics programs in Colombia. Specifically, we reported that U.S. officials lacked data needed to evaluate program effectiveness in Colombia. Further, we reported that the Departments of State and Defense were not coordinating their efforts with each other and did not have complete oversight over U.S. counternarcotics programs because they had not developed an adequate end-use monitoring system to ensure that U.S.-provided counternarcotics assistance was being used as intended. Finally, we reported that even though U.S. legislation prohibits counternarcotics aid from being provided to Colombian units engaged in human rights abuses, this prohibition was difficult to implement because the United States had not established procedures to make such a determination. The Chairmen, Subcommittee on National Security, International Affairs, and Criminal Justice of the House Committee on Government Reform and Oversight; the House Committee on International Relations; and the Senate Caucus on International Narcotics Control asked us to review the efforts of U.S. and Colombian agencies, principally the Colombian police and military, to conduct counternarcotics programs in Colombia. Specifically, we examined (1) the nature of the drug-trafficking threat; (2) the political, economic, and operational impact of counternarcotics activities in Colombia since the initial U.S. decertification decision; and (3) U.S. efforts to plan and manage counternarcotics activities in Colombia. To address the threat issue, we received briefings from U.S. law enforcement, intelligence, and military officials and reviewed documentation in Washington, D.C., and at the U.S. Embassy in Colombia. To address the impact of the 1996 and 1997 decertification decisions on Colombia and U.S. efforts to plan and manage counternarcotics activities, we visited various agencies in Washington, D.C.; Panama; and Colombia. In Washington, D.C., we interviewed officials and reviewed planning, implementation, and other related documents at the Office of National Drug Control Policy, the Departments of State and Defense, DEA, and other federal agencies. In Panama, we interviewed U.S. officials at the U.S. Southern Command and reviewed documents related to counternarcotics activities in Colombia. In Colombia, we interviewed Embassy officials, including the Ambassador, and analyzed reports and other documents from various U.S. 
agencies that were responsible for implementing counternarcotics programs in Colombia. While in Colombia, we also interviewed Colombian military, police, and civilian officials to obtain their views on the issues discussed in this report. We analyzed Colombian police reports and other documents to determine operational readiness. We also analyzed information provided by the U.S. Embassy and the State Department pertaining to all counternarcotics operations during 1996 and 1997. We did not validate any of the data found in the Embassy's reporting on the economic impacts of decertification. Our review was conducted between March and December 1997 in accordance with generally accepted government auditing standards. Despite U.S. and Colombian efforts to reduce drug-trafficking activities, Colombian drug-trafficking organizations remain the center of the cocaine trade and are becoming increasingly active in the heroin trade in the United States. Furthermore, Colombian insurgent groups are becoming more actively involved in drug-trafficking activities and are becoming more powerful, making it more difficult for Colombian police and military forces to reduce these activities within their borders. Three-quarters of the world's cocaine is produced in Colombia. In addition, U.S. law enforcement agencies believe that Colombian drug-trafficking organizations are becoming increasingly active in the heroin trade. According to U.S. government estimates, Colombia produces 6.5 metric tons of heroin per year and about 600 to 700 metric tons of cocaine. Although Colombia was historically the world's third largest cultivator of coca leaf, behind Peru and Bolivia, it recently surpassed Bolivia as the number two cultivator, with an estimated 67,200 hectares of coca under cultivation in 1996. In March 1997, the State Department reported that Colombian coca cultivation had increased by about 50 percent since 1994. U.S. officials attributed this increase to the successful reduction of about 50 percent of the known air-related drug-trafficking activities between Colombia and Peru between 1992 and 1996. According to U.S. reports, this reduction led to a glut of cocaine base in Peru, which in turn led to a plunge in the price of cocaine base and a subsequent reduction in coca cultivation. In March 1997, the State Department reported that coca cultivation in Peru declined by 18 percent between 1995 and 1996. According to U.S. officials, drug-trafficking organizations thus began to increase coca cultivation in Colombia to ensure a constant supply of coca leaf. The Colombian government has disrupted the activities of two major drug-trafficking organizations, the Medellin and Cali cartels, by either capturing or killing their key leaders. However, this disruption has not reduced drug-trafficking activities. For example, in June 1996, a U.S. law enforcement agency reported that a new generation of relatively young drug traffickers was emerging in the North Coast, Northern Valle del Cauca, and newer Cali cartels. In July 1997, a U.S. interagency report stated that hundreds of Colombian criminal organizations are engaged in cocaine-trafficking. U.S. and Colombian efforts to reduce drug-trafficking activities are made more difficult by the ability of illicit drug organizations to change their trafficking routes and methods of operations. Various U.S. officials stated that since drug-trafficking activities by air between Colombia and Peru were reduced, more activity is occurring on Colombian rivers. 
However, there is no accurate information on the extent to which this is happening. Colombia's ability to reduce coca cultivation and related drug-trafficking is complicated by the presence of active insurgent groups and their involvement in drug-trafficking activities throughout a large portion of the country. The most active insurgent groups are the Colombian Revolutionary Armed Forces (FARC) and the National Liberation Army (ELN). These two groups, with an estimated strength of about 10,000 to 15,000 personnel, have increasingly hindered the Colombian government's counternarcotics efforts. In 1993, we reported that both these groups were involved in drug-related activities and that they controlled or influenced large sections of Colombia, particularly in the sparsely populated south. Since that time, our discussions with U.S. and Colombian officials and review of reports indicated that the groups are more actively involved in drug-related activities and are gaining more control throughout Colombia. Figure 2.1 shows the major coca-producing and opium poppy-producing areas and the locations of the two insurgent groups most actively involved in drug-trafficking activities in Colombia during 1996. According to U.S. officials, insurgent groups are in virtually all regions where traffickers operate and have become more actively involved in drug-related activities since the termination of Soviet and Cuban financial support after the Soviet Union collapsed. In February 1997, a U.S. interagency assessment of the role of the insurgents in drug-related activities concluded that the insurgents, primarily the FARC, were diversifying their involvement in several ways. For example, insurgents provide security to assist traffickers in processing and transporting narcotics in exchange for money and weapons. Furthermore, a few insurgent groups are involved in localized, small-scale drug cultivation and processing. In March 1997, the Commander in Chief of the U.S. Southern Command testified before the Senate Armed Services Committee that the FARC's narcotics-related income for 1995 reportedly totaled $647 million. According to U.S. officials, the task of conducting counternarcotics operations is made more difficult for the Colombian police and military because of the increasing strength of the insurgent groups. These officials stated that the insurgents are strengthening their control in certain sections of Colombia. An October 1997 Defense Department analysis concluded that the groups are becoming more sophisticated and pose a greater threat to the Colombian military. In addition, narcotics-related violence in Colombia has traditionally been extensive. For example, in February 1997, the Director of the Colombian National Police and the General Commander of the Military Forces of Colombia testified before the House of Representatives that 366 Colombian police and military personnel were killed in 1996 and that since 1980 more than 3,000 Colombian policemen had died. The narcotics threat from Colombia continues and may be expanding. Drug-trafficking activities by Colombian organizations continue, and Colombian insurgent groups are becoming more actively involved in supporting the drug-trafficking activities of these organizations. Coca cultivation has increased significantly in recent years, and Colombian heroin is becoming more prevalent in the United States. The continuing narcotics threat presents significant challenges to U.S. and Colombian counternarcotics agencies. 
Since the initial decision to decertify Colombia in 1996, the State Department has reported that Colombia has made some progress in strengthening its political will to work with the United States to achieve U.S. counternarcotics objectives. To show its commitment, Colombia passed antidrug legislation, signed a maritime agreement to help coordinate the apprehension of drug traffickers, and continued to conduct counternarcotics operations. However, State Department officials believe that Colombia must do more to fully cooperate with U.S. counternarcotics efforts.

Decertification had little effect on Colombia's economy, as the President chose not to apply discretionary sanctions against the country. However, economic sanctions mandated by decertification may have adversely affected U.S. investment and business activity in Colombia. When Colombia was initially decertified, the State Department was unprepared to determine whether some programmed assistance intended for the Colombian police and military could be provided. It took State, in conjunction with other executive branch agencies, about 8 months to decide what could be provided. As a consequence, approximately $35 million in U.S. counternarcotics assistance was canceled or delayed. The overall implications of the assistance delays are unclear. Colombia has generally been able to mitigate the loss of assistance by using alternative funding sources to purchase needed equipment, some of which was provided by the Departments of State and Defense. Colombian police officials indicated that some operations could not be conducted because certain types of assistance were not available.

As we noted earlier, Colombia was decertified in 1996 and 1997, primarily because political corruption within the government of Colombia had undermined the counternarcotics efforts of Colombia's law enforcement and judicial officials. Despite the fact that Colombia was decertified, U.S. officials believe that Colombia has made some progress in meeting various U.S. counternarcotics objectives. Examples of Colombia's positive actions follow:

The government passed various laws to assist counternarcotics activities, including money-laundering and asset forfeiture laws in February 1997, and reinstated the extradition of Colombian nationals to the United States in November 1997.

The government signed a maritime agreement, along with a bilateral ship-boarding agreement, with the United States in February 1997 that provides for coordinated pursuit and apprehension of suspected drug traffickers.

Colombian law enforcement efforts resulted in the capture or surrender of six of the seven top-echelon members of the Cali cartel in 1995 and 1996, the reduction in the use of San Andres Island as a way station for drug shipments, and the pursuit of an ambitious crop eradication program in 1997 by the Colombian police.

State Department officials believe that Colombia must take additional actions to show its commitment to U.S. counternarcotics efforts. In March 1997, the State Department informed the Colombian government that to achieve certification from the United States, it would have to take the following actions:

Test and apply a more effective, safe, reasonably priced granular herbicide.

Amend legislation to allow for the unconditional extradition of Colombian nationals involved in illegal narcotics activities.
Fully and effectively implement newly passed laws on asset forfeiture, money-laundering, and sentencing.

Take all appropriate steps to stop drug traffickers from directing their organizations from prison.

Make every effort to support investigations and prosecutions to ensure that corrupt officials are brought to justice.

According to State Department officials, these factors will be considered as part of the next certification determination in early 1998.

The Foreign Assistance Act of 1961 and the Narcotics Control Trade Act of 1974 require certain mandatory sanctions and allow discretionary sanctions when the President denies certification to a drug-producing or transit nation. Mandatory sanctions include suspension of U.S. economic assistance, such as Export-Import Bank and Overseas Private Investment Corporation financing, and voting against multilateral economic assistance from organizations such as the World Bank and the Inter-American Development Bank. Discretionary sanctions include the removal of trade preferences under the Andean Trade Preferences Act and the Generalized System of Preferences, the suspension of sugar quotas, tariff penalties, and curtailment of air transportation arrangements.

The President chose not to apply discretionary sanctions against Colombia in either 1996 or 1997. On the other hand, the mandatory sanctions under decertification may have hurt U.S. businesses in Colombia. State and U.S. Embassy officials said they did not conduct a detailed analysis of the impact that economic sanctions would have had on Colombia. However, they pointed out that in 1996 and 1997, they did not recommend imposing discretionary sanctions because the United States did not want to hurt sectors of the Colombian economy that were pressuring the Colombian government to strengthen its counternarcotics laws. At the same time, the State Department reported that the 1996 decertification decision required the Overseas Private Investment Corporation and the Export-Import Bank to freeze about $1.5 billion in investment credits and loans, including about $280 million for a U.S. company to invest in Colombia's oil industry. State also reported that a 1996 survey by the Council of American Enterprises, an American business consortium in Colombia, concluded that because its U.S. member companies could not receive financing from the Export-Import Bank under decertification, they lost $875 million in potential sales, mostly to Asian and European competitors. According to a State Department official, State did not validate the information in the Council's survey. According to the Foreign Commercial Service Attache at the U.S. Embassy, however, anecdotal information indicated that Colombian businesses were considering the development of joint ventures with European and Asian companies because decertification sanctions had made the U.S. business environment uncertain.

After the decertification decision on March 1, 1996, the Departments of State and Defense canceled or delayed about $35 million worth of assistance to the counternarcotics units of the Colombian police and military. The impact of this delay and the cancellation of assistance on Colombian counternarcotics operations is unclear. Our review of U.S. and Colombian records and discussions with various U.S. and Colombian officials indicated that most counternarcotics operations were maintained despite the aid cutoff. However, the Colombian police and military were generally able to mitigate the loss of U.S.
grant assistance by relying on other resources, but at greater cost. According to Colombian officials, they could have conducted more counternarcotics operations if the assistance had not been delayed.

Table 3.1 shows the source and amount of the $35 million worth of assistance and training that was either canceled or delayed because of the March 1996 decertification decision. According to Defense Department officials, up to $30 million in FMF grant aid that was delayed included items such as spare parts for vehicles, fixed-wing aircraft, and helicopters; explosives and ammunition; publications; and individual clothing items. The canceled training for Colombian police and military officials in U.S. schools was in a variety of areas, including human rights.

On August 16, 1997, the President issued a national security waiver, allowing the $30 million in grant aid and $600,000 in IMET assistance to be released to Colombian police and military units. Without this waiver, the assistance could not have been provided because of the decertification requirement. The assistance delayed by State until November 1996 (about $2.5 million in FMS and $1.1 million in its own funds) included items such as aviation spare parts, vehicles, and ammunition, as well as funding to repair a Colombian police DC-3 aircraft and to provide a flight simulator to train police pilots. None of this assistance would have been delayed had State been adequately prepared to judge whether the aid could have continued to be provided on March 1, 1996. As early as February 1995, State's Bureau of International Narcotics and Law Enforcement Affairs had prepared a preliminary analysis of the types of assistance that could continue to be provided in the event a country was decertified. State officials cited interagency legal concerns, as well as policy differences within the State Department, to explain why it took them 8 months to issue their final guidance on the types of assistance that could be provided.

According to U.S. Embassy and Colombian officials, the delays in U.S. assistance prevented the Colombian police and military units from receiving some training funded under the IMET Program and conducting some planned counternarcotics efforts. However, these officials indicated that most counternarcotics operations were not drastically affected because alternative funding sources were available. They did note that some equipment was more heavily used than it would have been had the assistance been available. According to U.S. and Colombian officials, the cancellation of U.S. military assistance and training hurt efforts to improve military-to-military relationships and to provide human rights training to Colombian military officers. They also indicated that the delay in other assistance had some impact on the ability of the Colombian police and military units to expand counternarcotics operations. For example, Colombia's Chief of the Counternarcotics Police stated that because of the cutoff of U.S. assistance, certain types of explosives could not be provided. As a result, he stated, the police could destroy only 80 of the 210 airstrips they had planned to destroy in 1996.

The impacts from assistance delays were minimized because the Departments of State and Defense and the Colombian Ministry of Defense used other funding and procurement sources to continue to provide critical logistics, spare parts, and training to Colombia's counternarcotics forces to sustain operations.
Both the Colombian police and military purchased spare parts directly from various commercial sources and used their existing inventories to maintain their operational readiness rates. These parts, however, were substantially more expensive than they would have been had U.S. grant assistance been available. For example, the Colombian police reported that certain helicopter parts bought commercially cost 150 percent more than parts purchased through U.S. military assistance channels. In addition, the U.S. Embassy used some State INC funds and some Defense Department funds to support Colombian police and military eradication and interdiction efforts.

As a result of these actions, the Colombian police reported improvements in operational rates for their helicopters. Police reports showed that helicopter operational rates for counternarcotics operations were about 60 percent for the 2 months prior to the March 1, 1996, decertification decision. However, from March 1996 to June 1997, the police helicopter operational rates were consistently in the 70-percent range. In addition, according to an Embassy report, the Colombian air force increased the number of flying hours dedicated to counternarcotics activities from 10,182 in 1995 to 10,605 in 1996.

State Department data also indicate that Colombian police units were able to continue conducting counternarcotics operations. For example, in March 1997, the State Department reported that the number of cocaine laboratories destroyed by Colombian law enforcement agencies more than doubled, from 396 in 1995 to 861 in 1996. During this same period, the amount of cocaine seized increased from about 22 metric tons to 24 metric tons. The U.S. Embassy reported that during 1997 the Colombian national police seized about 31 metric tons of cocaine.

U.S. and Colombian officials stated that they were forced to rely much more heavily on available equipment. For example, one of two DC-3 aircraft, cargo aircraft essential for supporting eradication and interdiction operations of the Colombian police, was grounded by a serious accident in August 1995. About 4 months after the accident occurred, the Colombian police forwarded a request to the Narcotics Affairs Section to repair the aircraft. The U.S. Embassy approved the request 2 days later. As a result of the March 1996 decertification decision, however, the State Department could not decide whether it could provide funds to repair the aircraft. In July 1996, the U.S. Embassy made the decision to repair the aircraft commercially, and repairs were completed by the contractor in December 1996. A primary reason for the delay in repairing the aircraft was that State Department officials were unable to determine whether the repairs were allowed under decertification. Because this aircraft was not available during this time, the U.S. Embassy reported that the police had to double the use of the remaining DC-3 to conduct operations. Even so, it was impossible for the Colombian counternarcotics police to make up for the lost time, according to Narcotics Affairs Section officials. Furthermore, according to the Section, the counternarcotics police could not conduct additional activities because of the inoperative DC-3.

At the time of our visit in May 1997, U.S. Embassy officials said that while the use of commercial and other sources had enabled the Colombian police and military to continue counternarcotics operations after the interruption in U.S.
assistance, the higher expenditures for items and the effectiveness of operations could not be sustained over a long period. In May 1997, the U.S. Embassy reported that the government of Colombia, as part of an overall budget reduction decision, had reduced the 1997 budget for the Ministry of Defense to a level 30 percent below the 1996 budget. U.S. and Colombian officials stated that operations could not continue at their current rate unless the United States provided additional assistance in the future.

Some of the sustainment concerns may have been addressed when the President issued a national security waiver in August 1997 that allowed the release of counternarcotics assistance. Defense officials stated that this assistance is now available for use by the Colombian air force and navy. However, according to State officials, the release of equipment for the Colombian army is pending because the Colombian army has not complied with the agreement it signed regarding the use of the assistance.

As we noted earlier, for the past 10 years we have reported on a number of management problems associated with U.S. counternarcotics activities in Colombia. Some of these problems continue. For the past 2 years, U.S. counternarcotics activities in Colombia have been hampered because the State Department did not effectively plan and manage funding and assistance to support the numerous and varied U.S. counternarcotics objectives in Colombia. State and the U.S. Embassy could not fully support planned counternarcotics activities because they were not well prepared for the consequences of expanding the coca aerial eradication program. Funding used to support the aerial eradication effort came at the expense of other counternarcotics activities. Moreover, State did not take steps to ensure that equipment included in a $40 million assistance package from Defense Department inventories was consistent with the priority needs of the counternarcotics forces of the Colombian police and military or with the Embassy's counternarcotics plan. As a result, the assistance package included equipment that may be of limited benefit to the Colombian police and military and will require additional funding not budgeted for in Embassy plans. Moreover, the military assistance was delayed for 10 months because State and the Embassy could not reach agreement with the government of Colombia over acceptable end-use provisions to ensure that the assistance was not being provided to units suspected of human rights violations.

Beginning in October 1996, the State Department, through its Bureau of International Narcotics and Law Enforcement Affairs, decided to significantly increase the U.S. level of support and participation in Colombia's aerial eradication operations against coca and opium poppy. According to State officials, the decision was made primarily to improve upon the results achieved in 1995 and 1996 in eradicating coca and opium poppy. However, State had not developed an operational plan and had not fully coordinated with the Narcotics Affairs Section in the Embassy to implement the program increase. As a result, the Section was unprepared for the escalation in costs to support this effort and was unable to fully support other programs meant to achieve the Embassy's counternarcotics objectives. In addition, other components at the Embassy, including DEA and Defense representatives, complained that the State Department was not adequately supporting their activities to help meet the Embassy's counternarcotics objectives.
During fiscal year 1997, State increased the number of aircraft and U.S. contractor personnel involved in the aerial eradication program. As of July 1997, 112 contractor personnel—9 management and administrative staff, 56 pilots and operations staff, and 47 maintenance staff—were in Colombia. The contractor personnel's role also changed from being primarily responsible for training Colombian pilots and mechanics to directly maintaining aircraft and actively participating in planning and conducting eradication operations. The State Department estimates that the direct costs of supporting the contractor in Colombia increased from about $6.6 million in fiscal year 1996 to $14 million in fiscal year 1997.

Throughout fiscal year 1997, the Embassy's Narcotics Affairs Section continually adjusted its estimates of the amount of funding needed to support eradication efforts, from about $19 million at the beginning of the fiscal year to $34 million by July 1997. According to various Embassy reports, these changes were caused by unforeseen costs incurred as the result of the State Department's decision to increase support for aerial eradication. For example, in April 1997, the Narcotics Affairs Section reported that it would require an additional $1.4 million for unanticipated costs associated with providing adequate security for contractor personnel at several remote eradication locations and that this estimate did not include costs associated with other major sites used to conduct eradication missions. According to the Section Director, an additional $3 million to $4 million to upgrade security at these locations was not included in the Embassy's program budget. Furthermore, the Section reported that it had to reallocate $11 million from other projects to support various aspects of the increased eradication efforts. As a result, by July 1997, the Section reported to State that it could not fully support activities such as interdiction efforts, demand reduction, and other efforts designed to strengthen the law enforcement institutions of Colombia. The Section also reported its concerns about adequate funding for these activities in fiscal years 1998 and 1999.

Various components at the U.S. Embassy also raised concerns about State's emphasis on eradication at the expense of their programs. For example, U.S. military personnel in Colombia stated that the State Department's emphasis on eradication hurt their efforts to support the Colombian armed forces' ability to conduct their own counternarcotics operations and to provide ground and air support to the Colombian police when they are conducting eradication or interdiction missions, particularly in areas where insurgent groups are active. DEA officials indicated that State's focus on coca eradication displaced support for opium poppy eradication and other drug-related law enforcement activities. DEA officials also stated that the proposed coca eradication program failed to respond to key elements of the U.S. counternarcotics objectives for Colombia.

State officials agreed that the Embassy was not well prepared to manage the escalation of costs associated with the increase in support for aerial eradication. However, they pointed out that the spray program has been successful, with about 42,000 hectares of coca sprayed by the end of December 1997.
They indicated that State may not have adequate funding for all its programs in Colombia in fiscal year 1998 because, in addition to the regular INC Program, State will spend about $50 million to help the Colombians purchase three new Blackhawk helicopters and upgrade UH-1H helicopters. State officials told us the fiscal year 1998 INC Program for Colombia is currently under review.

On September 30, 1996, the President, under section 506(a)(2) of the Foreign Assistance Act, announced that he intended to provide Colombia with about $40 million in counternarcotics assistance from Defense Department inventories. This action was justified on the basis that important programs would grind to a halt without the aid and that past investments in counternarcotics programs would suffer due to the deterioration of equipment, training skills, and goodwill on the part of those Colombians who daily put their lives at risk. According to officials from the Departments of State and Defense and the U.S. Embassy, key elements of the 1996 assistance package were hastily developed, and some of the equipment in the package was not the best suited to meet the priority needs of the Colombian counternarcotics forces. In addition, support requirements were not fully assessed. Defense Department and Embassy officials have expressed concern that using this type of assistance without other sources of funds for the additional support costs may not be the best method for meeting critical counternarcotics needs of the Colombian police and military units.

According to Defense Department officials, an assistance package should be developed with extensive input from the Departments of State and Defense. This input includes information about specific requirements of the recipient country, the ability of the recipient country to operate and maintain the equipment provided, and the ability of the U.S. military to meet its own needs without the equipment included in the assistance package.

Beginning in July 1996, the State Department, in conjunction with the U.S. Embassy, began developing an initial list of equipment needed by the Colombian police for inclusion in a possible section 506(a)(2) drawdown assistance package. The U.S. Embassy prepared an initial list of equipment for the Colombian police on July 29, 1996. Because this list did not contain equipment for the Colombian military, the U.S. Embassy had to prepare an expanded listing to include all counternarcotics equipment for both the Colombian police and military. This list was sent to the Departments of State and Defense on August 15, 1996. Defense Department and Embassy officials stated that even though this expanded list was developed, they were given insufficient time to assess the requirements for the Colombian police and military and to identify the costs associated with operating and maintaining the equipment. Furthermore, Defense Department officials stated that they were given less than 2 weeks to conduct an analysis of the availability of the equipment on the expanded list or of the impacts that withdrawing the equipment from defense inventories would have on the readiness of U.S. forces. Finally, U.S. officials stated that some items, such as the C-26 aircraft, were added by the National Security Council only 3 days before the list was provided to the President for his approval. Table 4.1 summarizes the type of counternarcotics assistance provided and planned for delivery to the Colombian police and military forces.
Limited planning and coordination of the package resulted in the delivery of some assistance that did not meet the most pressing counternarcotics needs of the Colombian police and military and that carried substantial unanticipated support costs to operate and maintain, as illustrated in the following four examples.

The U.S. Embassy identified a requirement to provide Colombian police and military units with an aircraft capable of performing surveillance missions. According to officials from the Departments of State and Defense, the National Security Council decided to address this requirement by including five C-26 aircraft (two C-26 for the police and three C-26 for the military) in the assistance package. U.S. Embassy and Colombian officials stated that these aircraft, as currently configured, would not meet the surveillance needs of the Colombian police and military. According to Department of State and Defense officials, no decision has been made on how many of the aircraft will have to be modified to perform the surveillance mission, but modifying each aircraft selected for this mission will cost at least $3 million. According to U.S. Embassy officials, the C-26 was not included in any requirement plan for either the Colombian police or military, and other types of aircraft would have been more desirable. In addition, both State and Defense have estimated that operating and maintaining the aircraft will cost the Colombian police and military at least $1 million annually. The State Department has agreed to provide up to $1 million to support the two C-26 aircraft assigned to the Colombian police. However, State has no plans to provide support for the three C-26 aircraft assigned to the Colombian military. Both U.S. and Colombian military officials stated that it would be more expensive to maintain the logistics capability for such aircraft because of the small number that have been provided. They indicated that U.S. assistance should, to the maximum extent possible, provide equipment that minimizes expensive logistics requirements.

The 12 UH-1H helicopters in the assistance package were delivered to the Colombian police in May 1997 with an average of less than 10 hours of flying time available before substantial maintenance would have to be done on them to meet performance standards. Two months after the Colombian police received the helicopters, the Narcotics Affairs Section reported that, according to the Colombian police, only 2 of the 12 helicopters were operational and that unless sufficient funds were provided to meet maintenance requirements, parts would be removed from these helicopters to maintain the police's existing fleet of 38 helicopters. The Section estimated the cost of the repairs at about $1.2 million. In August 1997, the State Department said that it would provide additional assistance to make the helicopters operational.

The package listed a utility landing craft valued at $1.5 million to support Colombian military operations against the transport of narcotics on rivers. Defense Department officials stated that if they had been consulted earlier, the craft would not have been listed in the package because not enough of the craft were available for U.S. Army units. According to U.S. military personnel, a smaller vessel with more limited range was offered to Colombia as an alternative, but the Colombian military said the boat did not meet its needs.

The six patrol craft included in the assistance package may be of limited utility to the Colombian navy.
The Defense Department reported that the craft had been taken out of service in 1993 and required an estimated $600,000 in maintenance to make them operational. However, the U.S. Navy cautioned that even if the necessary repairs were made, the craft might be of marginal utility due to their age. Additional costs will have to be incurred before the craft will be useful to the Colombian navy. After the boats were delivered, U.S. military officials discovered that at least two of them were missing parts and that the electrical panels on others were open, making the boats' operational condition suspect. Furthermore, although the boats were intended to serve as command and control platforms, radios and other equipment had been removed prior to their delivery. U.S. Embassy officials do not know the total amount of funding needed to make the boats operational and to improve their combat capabilities but stated that Colombia would have to use its own resources to make them operational and combat ready.

U.S. Embassy and Defense officials also expressed concern about the heavy reliance on the use of drawdown assistance for counternarcotics activities in Colombia. They stated that equipment provided under section 506(a)(2) of the Foreign Assistance Act usually requires substantial support and additional funding for operations and maintenance. They stated that such assistance was a poor substitute for a well-thought-out counternarcotics assistance program and could be harmful if complementary funding was not provided.

In September 1996, Congress prohibited the obligation of INC funds to assist units of foreign security forces when the Secretary of State has credible evidence to believe that these units have committed gross violations of human rights. Therefore, the State Department decided that no assistance would be provided to the military until Colombia signed an acceptable end-use monitoring agreement to ensure that the assistance was not being provided to units suspected of human rights violations. However, State did not provide guidance to the Embassy on applying human rights provisions to U.S. counternarcotics assistance until February 13, 1997, almost 4-1/2 months after the legislation was enacted. In the meantime, U.S. Embassy officials signed a preliminary end-use monitoring agreement with the Colombian Ministry of Defense on February 11, 1997. The U.S. Ambassador believed that this agreement would be acceptable to the State Department. However, because of the new guidance, he stated that he had to reopen negotiations with the Colombians. In February 1997, the U.S. Embassy determined that there were no human rights concerns about the Colombian police and that satisfactory progress was being made in negotiating an end-use monitoring agreement with the Colombian navy and air force.

The 12 helicopters in the drawdown assistance package were shipped to the Colombian police in May 1997. Even though the 6 boats and 20 UH-1H hulks for the Colombian navy and air force were also shipped in May, the State Department did not grant approval for use of the equipment until the Colombian navy and air force signed the end-use monitoring agreement in August 1997. According to State Department officials, the lengthy negotiations occurred because three different Colombian Ministers of Defense were involved in negotiations during this period. State Department officials told us that assistance to the Colombian army has still not been released because the Colombian army has not fulfilled the terms of the agreement.
Implementation of U.S. counternarcotics activities in Colombia has been hampered by a lack of planning and management coordination both within the Department of State and between State and other involved federal agencies. The State Department and the U.S. Embassy in Colombia were not well prepared to implement an expanded aerial eradication program and to support other counternarcotics activities. In addition, State did not take adequate steps to develop and integrate a $40-million assistance package for Colombian counternarcotics police and military units. Officials from the Departments of State and Defense and the U.S. Embassy said they had spent little time consulting on the makeup and appropriateness of items in the package. However, in our view, the State Department, in conjunction with the Defense Department and key elements of the U.S. Embassy, should have taken adequate time to prepare a priority list of available equipment and associated support costs before the assistance package was finalized.

We recommend that the Secretary of State, in close consultation with the Secretary of Defense and the National Security Council, take steps to ensure that future assistance authorized under section 506(a)(2) of the Foreign Assistance Act of 1961 is, to the maximum extent possible, compatible with the priority requirements identified in U.S. counternarcotics programs and that adequate support resources are available to maximize the benefits of the assistance.

Pursuant to a congressional request, GAO reviewed the status of drug control efforts in Colombia and the impact of the 1996 and 1997 U.S. decisions to decertify Colombia as a drug-fighting ally, focusing on: (1) the nature of the drug-trafficking threat from Colombia; (2) the political, economic, and operational implications of the decertification decisions; and (3) U.S. efforts to plan and manage counternarcotics activities in Colombia. GAO noted that: (1) the narcotics threat from Colombia remains and may be growing, and U.S. efforts in Colombia continue to face major challenges; (2) the United States has had limited success in persuading the Colombian government to take aggressive actions to address corruption within the government, which limits its ability to arrest and convict traffickers; (3) for its part, the United States has had difficulty implementing a well-planned and coordinated strategy to assist Colombian authorities; (4) according to recent Department of State and Drug Enforcement Administration reports, the cultivation of coca leaf in Colombia increased by 50 percent between 1994 and 1996, and the prevalence of Colombian heroin on the streets of the United States has steadily increased; (5) since the initial decertification decision in March 1996, Colombia has taken several actions to address U.S. concerns; (6) at the initial decertification decision in March 1996, State was not prepared to determine whether some programmed assistance intended for the Colombian police and military could continue to be provided; (7) it took State, in conjunction with other executive branch agencies, about 8 months to decide what could be provided; (8) as a consequence, about $35 million in programmed counternarcotics assistance was canceled or delayed; (9) however, the overall operational implications of the cutoff on U.S. and Colombian counternarcotics programs are unclear; (10) the U.S.
counternarcotics effort in Colombia has continued to experience management challenges; (11) State did not take adequate steps to ensure that equipment included in a 1996 $40 million assistance package from Department of Defense inventories could be integrated into the U.S. Embassy's plans and strategies to support the Colombian police and military counternarcotics forces; (12) as a result, the assistance package contained items that had limited immediate usefulness to the Colombian police and military and will require substantial additional funding to become operational; and (13) the military assistance was also delayed for 10 months because State and the Embassy could not reach agreement with the government of Colombia over acceptable end-use provisions to ensure that the assistance was not being provided to units suspected of human rights violations.
Funds that support terrorist activity may come from illicit activities, such as counterfeit goods, contraband cigarettes, and illicit drugs, but are also generated through means such as fundraising by legal non-profit entities. According to State, it is the terrorists' use of social and religious organizations and, to a lesser extent, state sponsorship, that differentiates their funding sources from those of traditional transnational organized criminal groups. While actual terrorist operations require only comparatively modest funding, international terrorist groups need significant amounts of money to organize, recruit, train, and equip new adherents and to otherwise support their activities. Simply stated, the financing of terrorism is the financial support, in any form, of terrorism or of those who encourage, plan, or engage in it.

Some international experts on money laundering continue to find that there is little difference in the methods used by terrorist groups or criminal organizations in attempting to conceal their proceeds by moving them through national and international financial systems. These experts simply define the term "money laundering" as the processing of criminal proceeds to disguise their illegal origin in order to legitimize their ill-gotten gains. Disguising the source of terrorist financing, regardless of whether the source is of legitimate or illicit origin, is important to terrorist financiers. If the source can be concealed, it remains available for future terrorist financing activities.

The President established a Policy Coordination Committee under the auspices of NSC to ensure the proper coordination of counter-terrorism financing activities and information sharing among all agencies, including the departments of Defense, Justice, Homeland Security, State, and the Treasury, as well as the intelligence and enforcement community. Treasury's OFAC is the lead U.S. agency for administering economic sanctions, including blocking the assets of terrorists designated either by the United States unilaterally, bilaterally, or as a result of UN Security Council Resolution designations.

The international community has acted on many fronts to conduct anti-money laundering and counter-terrorism financing efforts. For example, the UN has adopted treaties and conventions that, once signed, ratified, and implemented by member governments, have the effect of law and enhance their ability to combat money laundering and terrorist financing. FATF, an intergovernmental body, has set internationally recognized standards for developing anti-money laundering and counter-terrorism financing regimes and conducting assessments of countries' abilities to meet these standards. In addition, the Egmont Group serves as an international network fostering improved communication, information sharing, and training coordination for 101 Financial Intelligence Units (FIUs) worldwide. See appendix II for more information on key international entities and efforts.

Countries vulnerable to terrorist financing activities generally lack key aspects of an effective counter-terrorism financing regime. According to State officials, a capable counter-terrorism financing regime consists of five basic elements: an effective legal framework, financial regulatory system, FIU, law enforcement capabilities, and judicial and prosecutorial processes. To strengthen anti-money laundering and counter-terrorism efforts worldwide, international entities such as the UN, FATF, the World Bank, and the IMF, as well as the U.S.
government, agree that each country should implement practices and adopt laws that are consistent with international standards.

U.S. government agencies participate in a number of interdependent efforts to address the transnational challenges posed by terrorist financing, including terrorist designations, intelligence and law enforcement, international standard setting, and training and technical assistance. U.S. agencies participate in global efforts to publicly designate individuals and groups as terrorists and block access to their assets. According to Treasury officials, international cooperation to designate terrorists and block their assets is important because most terrorist assets are not within U.S. jurisdiction and may cross borders. According to U.S. government officials, public designations discourage further financial support and encourage other governments to more effectively monitor the activities of the designated individual or organization. Importantly, designations may lead to the blocking of terrorist assets, thereby impeding terrorists' ability to raise and move funds and possibly forcing terrorists to use more costly, less efficient, more transparent, and less reliable means of financing.

U.S. agencies led by State have worked with the UN to develop and support UN Security Council resolutions to freeze the assets of designated terrorists. For example, in October 1999, the Security Council adopted UN Security Council Resolution 1267, which called on all member states to freeze the assets of the Taliban, and in December 2000, the Security Council adopted Resolution 1333, imposing targeted sanctions against Osama bin Laden and al Qaeda. Then, in response to the attacks of September 11, 2001, the UN Security Council adopted Resolution 1373, which required all UN member states to freeze funds and other financial assets or economic resources of persons who commit or attempt to commit, participate in, or facilitate terrorist acts. Later, in January 2002, the UN Security Council adopted Resolution 1390, which consolidated the sanctions contained in Resolutions 1267 and 1333 against the Taliban, Osama bin Laden, and al Qaeda. In July 2005, the Security Council adopted Resolution 1617, which extends sanctions against al Qaeda and the Taliban and strengthens previous related resolutions. The UN has listed over 300 individuals and over 100 entities for worldwide asset blocks. Additionally, State's Bureau of International Organization Affairs ensures that designations related to al Qaeda, the Taliban, or Osama bin Laden are made worldwide obligations through the UN Security Council Resolution 1267 Committee; the bureau also helped craft UN Security Council Resolution 1373, aided its adoption, and assisted in the creation of the UN Counterterrorism Committee to oversee the resolution's implementation.

The United States has also participated in bilateral efforts to designate terrorists. For example, as of July 2005, the United States and Saudi Arabia had jointly designated over a dozen Saudi-related entities and multiple individuals as terrorists or terrorist supporters, according to State.

U.S. agencies, including the Departments of Homeland Security (Homeland Security), Justice, State, and Treasury, and other law enforcement and intelligence agencies have implemented an interagency process to coordinate designating terrorists and blocking their assets.
For example, State's Economic Bureau coordinates policy implementation at the working level, largely through the network of Terrorism Finance Coordinating Officers located at embassies worldwide. Through this interagency coordination, the agencies work together to develop adequate evidence to target individuals, groups, or other entities suspected of terrorism or terrorist financing. As the lead agency for the blocking of assets of international terrorist organizations and terrorism-supporting countries, Treasury's OFAC compiles the evidence needed to support terrorist designations conducted under the Secretary of the Treasury's authority. State's Office of the Coordinator for Counterterrorism follows the same process for terrorist designations conducted under the Secretary of State's authority. State's Bureau of International Organization Affairs may present this evidence to the UN for consideration by its members. According to a senior State official, the agencies work together on a regular basis to examine and evaluate new names and targets for possible designation and asset blocking and to consider other actions such as diplomatic initiatives with other governments and exchanging information on law enforcement and intelligence efforts.

The U.S. strategy to combat terrorist financing abroad includes law enforcement techniques and intelligence operations aimed at identifying criminals and terrorist financiers and their networks across borders in order to disrupt and dismantle their organizations. Such efforts include intelligence gathering, investigations, diplomatic actions, sharing information and evidence, apprehending suspects, criminal prosecutions, asset forfeiture, and other actions designed to identify and disrupt the flow of terrorist financing. According to State, in order to achieve results, the intelligence community, law enforcement, and the diplomatic corps must develop and exploit investigative leads, employ advanced law enforcement techniques, and increase cooperation between domestic and foreign financial investigators and prosecutors.

U.S. intelligence and law enforcement agencies work together and with foreign counterparts abroad, sometimes employing interagency or intergovernmental investigative taskforces. U.S. agencies work domestically and through their embassy attachés or officials or send agents on temporary duty to work with their foreign counterparts on matters of terrorist financing, including investigations. The Federal Bureau of Investigation is the lead domestic law enforcement agency on counter-terrorism financing and makes extensive contributions to law enforcement efforts abroad, including through its legal attachés. Homeland Security's Bureau of Immigration and Customs Enforcement attachés and agents work on trade-based money laundering and the transport of cash across borders. The Internal Revenue Service's Criminal Investigation Division has expertise in nonprofit organizations. The Drug Enforcement Administration focuses on the narcotics-trafficking nexus. Moreover, Treasury's Financial Crimes Enforcement Network (FinCEN) is the U.S. government's FIU and, as such, serves as the U.S. government's central point for the collection, analysis, and dissemination of financial intelligence to authorized domestic and international law enforcement and other authorities. Financial intelligence is sent through secured lines among the FIUs belonging to the Egmont Group and shared with law enforcement as part of these investigations.

The U.S.
government has taken an active role in the development and implementation of international standards to combat terrorist financing. The UN conventions and resolutions and FATF recommendations on money laundering and terrorist financing have set the international standards for countries to develop the legal frameworks, financial regulation, financial intelligence unit, law enforcement, and judicial/prosecutorial elements of an effective counter-terrorist financing regime. Importantly, international cooperation is a cornerstone of these international standards. The United States has signed each of the relevant UN conventions and implemented its obligations pursuant to UN Security Council Resolutions related to anti-money laundering and counter-terrorism financing. According to State and Justice officials, they have provided training on implementing the conventions, and State officials have drafted UN Security Council Resolutions concerning terrorist financing. For example, according to State, officials from Treasury and State met with the UN Security Council Resolution 1267 Committee in January 2005 to detail U.S. implementation of the resolution's asset freeze, travel ban, and arms embargo provisions and proposed several ideas aimed at reinforcing current sanctions, including enhancing the sanctions list, promoting international standards, and improving bilateral and multilateral cooperation.

The U.S. government also plays a major role within FATF to draft and support international standards to combat terrorist financing. Treasury's Office of Terrorism and Financial Intelligence chairs the U.S. delegation to the FATF and has chaired or co-chaired several FATF working groups, such as the FATF Working Group on International Financial Institution Issues and the FATF Working Group on Terrorist Financing. Treasury also develops U.S. positions, represents the United States at FATF meetings, and implements actions domestically to meet the U.S. commitment to the FATF. Other components within Treasury, such as FinCEN, and other U.S. government agencies, including Homeland Security, Justice, and State, and the federal financial regulators, are also represented in the U.S. delegation to FATF. For example, according to department officials, the Department of Justice provided the initial draft for the original eight FATF special recommendations on terrorist financing. Additionally, Homeland Security gave significant input into Special Recommendation IX on Cash Couriers due to the department's expertise on detection of criminals' cross-border movements of cash.

Moreover, the U.S. government supports efforts to ensure that countries take steps to meet FATF standards. As a member of FATF, the United States participates in mutual evaluations in which each member's compliance with the FATF recommendations is examined and assessed by experts from other member countries. Treasury also leads U.S. delegations to FATF-style regional bodies to assist their efforts to support implementation of FATF recommendations and conduct mutual evaluations.

The U.S. strategy to combat terrorist financing abroad includes efforts to provide training and technical assistance to countries that it deems vulnerable to terrorist financing and focuses on the five basic elements of an effective anti-money laundering/counter-terrorism financing regime (legal framework, financial regulation, FIU, law enforcement, and judicial and prosecutorial processes).
According to State, its Office of the Coordinator for Counterterrorism is charged with directing, managing, and coordinating all U.S. government agencies' efforts to develop and provide counter-terrorism financing programs. The NSC established the State-led interagency TFWG to coordinate the delivery of training and technical assistance to the countries most vulnerable to terrorist financing. These countries are known as priority countries, of which there are currently about two dozen. According to State's Office of the Coordinator for Counterterrorism, foreign allies inundated the U.S. government with requests for assistance; therefore, TFWG developed a process to prioritize the use of limited financial and human resources. Although other vulnerable countries may be assisted through other U.S. government programs as well as through TFWG, according to State, based on NSC guidance, overall coordination is to take place through the TFWG process. (See appendix III for TFWG membership and process.) TFWG schedules assessment trips, reviews assessment reports, evaluates training proposals, and assigns resources for training. According to State officials, the U.S. government has conducted 19 needs assessment missions and provided training and technical assistance in at least one of the five areas of an anti-money laundering/counter-terrorist financing regime to over 20 countries.

U.S. offices and bureaus, primarily within the departments of the Treasury, Justice, Homeland Security, and State, and the federal financial regulators provide training and technical assistance to countries requesting assistance through various programs, using a variety of methods primarily funded by State and Treasury. Methods include training courses, presentations at international conferences, the use of overseas regional U.S. law enforcement academies or U.S.-based schools, and the placement of intermittent or long-term resident advisors for a range of subject areas related to building effective counter-terrorism and anti-money laundering regimes. For example, Justice provides technical assistance on drafting legislation that criminalizes terrorist financing and money laundering. Treasury's Office of Technical Assistance (OTA) provides assistance to strengthen the financial regulatory regimes of countries. In addition, Treasury's FinCEN provides training and technical assistance, including assistance in the development of FIUs, information technology assessments, and specialized analytical software and analyst training for foreign FIUs. (See appendix IV for key U.S. counter-terrorism financing and anti-money laundering training and assistance for vulnerable countries.)

According to State, the U.S. government has also worked with international donors and organizations to leverage resources to build counter-terrorism financing regimes in vulnerable countries. According to State officials, they have worked with the United Kingdom, Australia, Japan, the European Union, the Organization of American States, the Asian Development Bank (ADB), IMF, and the World Bank on regional and country-specific projects. According to State, they have also funded the UN Global Program Against Money Laundering to place a mentor in one country for a year to assist with further development of its FIU. Similarly, Treasury officials said the department funded a resident advisor to the ADB as part of the Cooperation Fund for the Regional Trade and Financial Security Initiative.
Treasury officials also stated that they have coordinated bilateral and international technical assistance with the FATF and the international financial institutions, such as the World Bank and IMF, which encompassed the drafting of legal frameworks, building necessary regulatory and institutional systems, and developing human expertise.

According to State officials, efforts to share identified priorities and coordinate assistance by the major donor countries took a step forward at the June 2003 G-8 Summit with the establishment of the Counter-Terrorism Action Group, of which the United States is a member. The Counter-Terrorism Action Group has partnered with the FATF, providing that organization with a list of countries to which its members are interested in providing counter-terrorism financing assistance, so that the FATF could assess their technical assistance needs. FATF delivered those assessments to the Counter-Terrorism Action Group in 2004 and, according to State officials, the donors are now beginning to follow through with assistance programs.

The U.S. government lacks an integrated strategy to coordinate the delivery of counter-terrorism financing training and technical assistance to countries vulnerable to terrorist financing. The effort does not have key stakeholder buy-in on roles and practices, a strategic alignment of resources with needs, or a process to measure and improve performance. As a result, the effort lacks effective leadership and consistent practices, an optimal match of resources to needs, and feedback on performance into the decision-making process.

U.S. interagency efforts to coordinate the delivery of counter-terrorism financing training and technical assistance lack key stakeholder involvement and acceptance of roles and procedures. As a result, the overall effort lacks effective leadership, which leads to less than optimal delivery of training and technical assistance to vulnerable countries, according to agency officials. We have previously found that building a collaborative management structure across participating organizations is an essential foundation for ensuring effective collaboration, and strong leadership is critical to the success of intergovernmental initiatives. Moreover, involvement by leaders from all levels is important for maintaining commitment.

Treasury, a key stakeholder, does not accept State's position that State leads all U.S. counter-terrorism financing training and technical assistance efforts, and disagreements continue between some Treasury and State officials concerning current TFWG coordination efforts. According to State officials, State leads the U.S. effort to provide counter-terrorism financing training and technical assistance to all countries the U.S. government deems vulnerable to terrorist financing. State bases its position on classified NSC documents focused primarily on TFWG, State documents, and authorizing legislation. Treasury, an agency that also funds as well as provides training and technical assistance, asserts that State overstates its role; according to Treasury, State's role is limited to coordinating other U.S. agencies' provision of counter-terrorist financing training and technical assistance in commonly agreed upon TFWG priority countries, and there are numerous other efforts outside of State's purview.
Justice, an agency that provides training and technical assistance and receives funding from State, states that it respects the role State plays as the TFWG chairman and coordinator and that all counter-terrorism financing training and technical assistance efforts should be brought under the TFWG decision-making process. While supportive, Justice's statement demonstrates that the span of State's role lacks clarity and recognition in practice. Two senior Treasury OTA officials said they strongly disagree with the degree of control State asserts over decisions at the State-led TFWG regarding the delivery of training and technical assistance. According to a Treasury Terrorist Financing and Financial Crimes (TFFC) Senior Policy Advisor who attends TFWG, in practice the TFWG process is broken, and State creates obstacles rather than coordinating efforts. According to officials from State's Office of the Coordinator for Counterterrorism, who chair TFWG, the only problems are the failure of Treasury's TFFC and OTA officials to accept State's leadership over counter-terrorism financing efforts and the separate OTA funding.

Legislation authorizing the Departments of State and Treasury to conduct counter-terrorism financing training and technical assistance activities does not explicitly designate a lead agency. State derives its authority for these activities from the International Security and Development Cooperation Act of 1985, which mandates that the Secretary of State "coordinate" all international counter-terrorism assistance. Treasury's primary authority for its assistance programs derives from a 1998 amendment to the Foreign Assistance Act of 1961, which authorized the Secretary of the Treasury, after consultation with the Secretary of State and the Administrator of the U.S. Agency for International Development, to establish a program to provide economic and financial technical assistance to foreign governments and foreign central banks. This provision further mandates that State provide foreign policy guidance to the Secretary of the Treasury to ensure that the program is effectively integrated into the foreign policy of the United States.

State and Treasury officials also disagree on procedures and practices for the delivery of counter-terrorism financing training and technical assistance. State cited NSC guidance and an unclassified State document focusing on TFWG as providing procedures and practices for delivering training and technical assistance to all countries. Treasury officials told us that the procedures and practices pertain only to the TFWG priority countries and that there is no formal mandate or process to provide technical assistance to countries outside the priority list. Moreover, Justice officials told us that having procedures and practices for TFWG priority countries that differ from those for other vulnerable countries creates problems. This issue is further complicated by the lack of a consistent and clear delineation between the countries covered by TFWG and other vulnerable countries also receiving counter-terrorism financing and anti-money laundering assistance funded through State and Treasury. Treasury officials told us that TFWG procedures and practices are overly structured and impractical, have not been updated to incorporate stakeholder concerns, and that the overall process does not function as it should. State and Treasury officials cited numerous examples of disagreements on procedures and practices.
For example:

State and Treasury officials disagree on the use of OTA funding and contractors. According to Treasury officials, OTA funding should primarily be used to support intermittent and long-term resident advisors, who are U.S. contractors, to provide technical assistance. According to State officials, OTA should supplement State's program, which primarily funds current employees of other U.S. agencies.

State, Justice, and Treasury officials disagree on whether it is appropriate for U.S. contractors to provide assistance in legislative drafting efforts on anti-money laundering and counter-terrorism financing laws. State officials cited NSC guidance that current Justice employees should be primarily responsible for working with foreign countries to assist in drafting such laws and voiced strong resistance to the use of contractors. Justice officials strongly stated that contractors should not assist in drafting laws and gave several examples of past instances in which USAID and OTA contractor assistance led to problems with the development of foreign laws. In two examples, Justice officials stated that USAID and OTA contractor work did not result in laws meeting FATF standards. In another example, Justice officials reported that a USAID contractor assisted in drafting an anti-money laundering law that had substantial deficiencies, and as a result Justice officials had to take over the drafting process. According to OTA officials, their contractors provide assistance in drafting laws in non-priority countries, OTA makes drafts available to Justice and other U.S. agencies for review and comment, and ultimately the host country itself is responsible for final passage of a law that meets international standards.

Treasury and State officials disagree on the use of confidentiality agreements between contractors and the foreign officials they advise. State officials said OTA's use of confidentiality agreements impedes U.S. interagency coordination. State officials said the issue created a coordination problem in one country because a poorly written draft law could not be shared with other U.S. agencies for review and resulted in the development of an ineffective anti-money laundering law. Moreover, State officials said the continued practice could present future challenges. However, according to Treasury officials, this was an isolated case involving a problem with the contract, and they said they have taken procedural steps to ensure that the error is not repeated.

State and Treasury officials disagree on the procedures for conducting assessments of countries' needs for training and technical assistance. Moreover, Treasury stated that its major concern is with State's coordination process for the delivery and timing of assistance. According to TFWG procedures for priority countries, if an assessment trip is determined to be necessary, State is to lead and determine the composition of the teams and set the travel dates. This becomes complicated when a vulnerable country becomes a priority country. For example, in November 2004 Treasury conducted an OTA financial assessment in a non-priority frontline country and subsequently reached agreement with that country's central bank minister to put a resident advisor in place to set up an FIU. However, in May 2005, State officials denied clearance for a Treasury official's visit to the country, creating a delay of 2.5 months (as of the end of July 2005).
Treasury officials provided documentation to show that State was aware of their intention to visit the country in November 2004 to determine counter-terrorism and financial intelligence technical assistance needs, that the official leading the segment of work was part of a larger ongoing OTA effort in the country, and that Treasury kept TFWG informed of the results of OTA's work and continuing efforts. State officials expressed concern that the country had recently become a priority country. According to State TFWG officials, Treasury's work needed to be delayed until a TFWG assessment could be completed. However, the U.S. embassy requested that Treasury proceed with its placement of a resident advisor and that the TFWG assessment be delayed. The U.S. government does not strategically align its resources with its mission to deliver counter-terrorism financing training and technical assistance. For strategic planning to be a dynamic and inclusive process, alignment of resources is a critical element. However, the U.S. government has no clear presentation of its available resources. Further, neither the U.S. government nor TFWG has made a systematic and objective assessment of the full range of available U.S. and potential international resources. As a result, decision-makers do not know the full range of resources available to match to the needs they have identified in priority countries and to determine the best match of remaining resources to needs for other vulnerable countries. Because funding is embedded within anti-money laundering and other programs, the U.S. government does not have a clear presentation of the budget resources that the Departments of State and the Treasury allocate for training and technical assistance to counter terrorist financing. State and Treasury receive separate appropriations that can be used for training and technical assistance either by the agencies themselves, by funding other agencies, or by funding contractors. State primarily transmits its training and technical assistance funds to other agencies, while Treasury primarily employs short- and long-term advisors through personal service contracts. Although various officials told us that funding for counter-terrorism financing training and technical assistance is insufficient, the lack of a clear presentation of available budget resources makes it difficult for decision-makers to determine the actual amount allocated to these efforts. State officials told us that they have two primary funding sources for State counter-terrorism financing training and technical assistance programs:

Non-Proliferation, Anti-Terrorism, Demining, and Related Programs funding, which State's Office of the Coordinator for Counterterrorism uses to provide counter-terrorism financing training and technical assistance to TFWG countries. Based on our analysis of State records, budget authority for this account included $17.5 million for counter-terrorism financing training and technical assistance for fiscal years 2002 to 2005.

International Narcotics Control and Law Enforcement funding, which State's Bureau for International Narcotics and Law Enforcement Affairs uses to provide counter-terrorism financing and anti-money laundering training and technical assistance to a wide range of countries, including seven priority countries between fiscal years 2002 and 2005, as well as to provide general support to multilateral and regional programs.
Based on our analysis of State records, budget authority for this account included about $9.3 million for anti-money laundering, counter-terrorism financing, and related multilateral and regional activities for fiscal years 2002-2005.

State officials also told us that other State bureaus and offices provide counter-terrorism financing and anti-money laundering training and technical assistance (e.g., single-course offerings or small-dollar programs) as part of regional, country-specific, or broad-based programs. Treasury officials told us that OTA's counter-terrorism financing technical assistance is funded through its Financial Enforcement program. Based on our analysis of Treasury records, Treasury OTA received budget authority totaling about $30.3 million for all financial enforcement programs for fiscal years 2002 to 2005. Counter-terrorism financing technical assistance and training funding is embedded within this program and cannot be segregated from anti-money laundering and other anti-financial crime technical assistance. One OTA official told us that as much as one-third of the funds may be spent on programs countering financial crimes other than terrorist financing in any given year. The U.S. government, including TFWG, has not made a systematic and objective assessment of the suitability of available resources. According to State and Treasury officials, no systematic analysis has been done to evaluate the effectiveness of contractors and current employees in delivering various types of counter-terrorism training and technical assistance. Decisions at TFWG appear to be made based on anecdotal information rather than transparent and systematic assessments of resources. According to the State Performance and Accountability Report for fiscal year 2004, a shortage of anti-money laundering experts continues to create bottlenecks in meeting the assistance needs of requesting nations, including priority countries. State co-chairs of TFWG repeated this concern to us. According to State officials, U.S. technical experts are particularly stretched because of their frequent need to split their time among assessment, training, and investigative missions. Moreover, officials from State's Office of the Coordinator for Counterterrorism cited the lack of available staff as a reason for their slow start in disbursing funding at TFWG's inception. Treasury agrees with State that there may be a shortage of anti-money laundering experts in U.S. government agencies who are available to provide technical assistance in foreign countries; however, according to Treasury, there is no shortage of U.S. experts who are recent retirees from those same agencies. According to OTA officials, OTA can provide contractors who are primarily recently retired U.S. government employees with years of experience at the same agencies that provide training to priority countries through State funding. However, State officials voiced strong opinions that current U.S. government employees are better qualified than contractors to provide counter-terrorism financing training and assistance. State added that it is TFWG's policy that current U.S. government experts should be used whenever possible and that, when they are not available, the use of contractors should be coordinated with the expert agency or office. State officials cited several examples of priority and non-priority countries in which they felt that the work of OTA's resident advisors did not result in improvements.
However, State officials praised the work of one OTA resident advisor in a priority country as a best practice, and other agency and foreign officials supported this view. Further, one State official commended the quality of OTA's law enforcement technical assistance. Nonetheless, State officials repeatedly stated that they need OTA funding, not OTA-contracted staff, to meet current and future needs. A senior OTA official said that OTA has actively sought to provide programs in more priority countries, but State, as chair of the TFWG, has not supported its efforts. Specifically, of the funds that OTA obligated for financial enforcement-related assistance between fiscal years 2002 and 2005, approximately 11 percent went to priority countries. State officials said that they welcomed more OTA participation in priority countries as part of the mix of applicable resources; however, they questioned whether OTA consistently provides high-quality assistance. Without a systematic assessment of the suitability of resources, decision-makers do not have good information to consider when determining the best mix of government employees and contractors to meet needs. TFWG has a stated goal to encourage allies and international entities to contribute resources to help build the counter-terrorism financing capabilities of vulnerable countries and to coordinate training and technical assistance activities, but it has not developed a specific strategy to do so. No one office or organization has systematically consolidated and synthesized available information on the counter-terrorism financing training and technical assistance activities of other countries and international entities and integrated this information into its decision-making process. State and Treasury officials stated that instead they have an ad hoc approach to working with allies and international entities on resource sharing for training and technical assistance. Resource sharing is not considered a priority at TFWG meetings because, U.S. officials state, interagency issues take higher priority and little time is left to discuss international activities. At one TFWG meeting, U.S. agency officials discovered that different countries and organizations were putting resources into a priority country without any central coordination. TFWG found that Australia was already providing assistance to the FIU in this priority country and canceled the assistance it was planning to provide in this area. Without a systematic way to consolidate, synthesize, and integrate information about international activities into the U.S. interagency decision-making process, the U.S. government cannot easily capitalize on opportunities for resource sharing with allies and international entities. The U.S. government, including TFWG, does not have a system in place to measure the performance results of its efforts to deliver training and technical assistance and to incorporate this information into integrated planning efforts. Without such a system, the U.S. government cannot ensure that its efforts are on track. In August 2004, we found no system in place to measure the performance of U.S. training and technical assistance to combat terrorist financing. According to an official from Justice's Office of Overseas Prosecutorial Development, Assistance and Training (OPDAT), an interagency committee led by OPDAT was set up to develop a system to measure results.
In November 2004, OPDAT had an intern set up a database to track training and technical assistance provided through TFWG and related assistance results for priority countries. Because the database was not accessible to all TFWG members, OPDAT planned to serve as the focal point for entering the data collected by TFWG members. OPDAT asked agencies to provide statistics on programs, funding, and other information, including responses to questions concerning results by function, corresponding to the five elements of an effective counter-terrorism financing regime. OPDAT also planned to track key recommendations for training and technical assistance and progress made in priority countries as provided in FATF and TFWG assessments. However, little progress has been made in further developing the performance measures; the responsible OPDAT official told us the office was waiting to hire the next intern to input the data. As of July 2005, a year later, at our exit meetings with OPDAT and the State TFWG chairs, OPDAT was still waiting for an intern to be hired to complete the project. Further, OPDAT and State officials confirmed that the system had not yet been approved or implemented by TFWG; therefore, TFWG did not have a system in place to measure the performance results of its training and technical assistance efforts and incorporate this information into its planning. Treasury faces two accountability issues related to its terrorist asset blocking efforts. First, Treasury's OFAC reports on the nature and extent of terrorists' U.S. assets do not provide Congress with the ability to assess OFAC's achievements. Second, Treasury lacks meaningful performance measures to assess its terrorist designation and asset blocking efforts. While Treasury has developed some limited performance measures, OFAC officials acknowledged that the measures could be improved and said they are in the process of developing more meaningful performance measures, aided by the development of an OFAC-specific strategic plan. Treasury's annual reports to Congress on terrorists' assets do not provide a clear description of the nature and extent of terrorists' assets held in the United States. Federal law requires the Secretary of the Treasury, in consultation with the Attorney General and appropriate investigative agencies, to provide an annual report to Congress "describing the nature and extent of assets held in the United States by terrorist countries and organizations engaged in international terrorism." Each year Treasury's OFAC provides Congress with a Terrorist Assets Report that offers a year-end snapshot of dollar amounts held in U.S. jurisdiction for two types of entities: (1) international terrorists and terrorist organizations and (2) terrorism-supporting governments and regimes. In 2004, OFAC reported that the United States blocked almost $10 million in assets belonging to seven international terrorist organizations and related designees. The 2004 report also noted that the United States held more than $1.6 billion in assets belonging to six designated state sponsors of terrorism. While each annual report provides year-end statistics for each of the different entities, the reports do not provide a clear description of the nature and extent of assets held in the United States. The reports do not compare blocked assets across years or offer explanations for many of the significant shifts between years.
For example, the 2004 report stated that the United States held $3.9 million in al Qaeda assets, but it did not state that this represented a 400 percent increase over the value of al Qaeda assets held by the United States in 2003 or offer an explanation for the increase. In addition, the reports for years 2000 to 2004 offer no explanation for the decline in the value of U.S.-held Iranian government assets, which decreased from $347.5 million in 2000 to $82 million in 2004. While the 2000 report showed that the United States blocked $283,000 of Hizballah assets, subsequent reports did not name Hizballah again or explain the status of these blocked assets. Senior OFAC officials acknowledge that the Terrorist Assets Reports do not provide a clear description of the nature and extent of assets blocked and are not useful for assessing progress on asset blocking. Treasury lacks effective performance measures to assess its terrorist designation and asset blocking efforts and to demonstrate how these efforts contribute to Treasury's goals of disrupting and dismantling terrorist financial infrastructures and executing the nation's financial sanctions policies. Among the performance measures in Treasury's 2004 Performance and Accountability Report that are related to designations and asset blocking are:

an increase in the number of terrorist finance designations for which other countries join the United States;

an increase in the number of drug trafficking and terrorist-related financial sanctions targets identified and made public; and

an estimated number of sanctioned entities no longer receiving funds from the United States.

Treasury officials recognize that these measures do not adequately assess progress made in designating terrorists and blocking their assets. In addition, they note that these measures do not help assess how efforts to designate terrorists and block their assets contribute to Treasury's overall goals of disrupting and dismantling terrorists' financial infrastructure and executing the nation's financial sanctions policies. First, these measures are not specific to terrorist financing. Two of the three measures do not separate data on terrorists from data on other entities such as drug traffickers, hostile foreign governments, corrupt regimes, and foreign drug cartels, though OFAC officials acknowledged that they could have reported the data separately. Second, Treasury officials said that progress on asset blocking cannot simply be measured by totaling the amount of blocked assets at the end of the year, as the amounts may vary over the year as assets are blocked and unblocked. Third, Treasury has not developed measures to track other activities and benefits related to terrorist designations and asset blocking. For example, according to Treasury officials, Treasury's underlying research to identify terrorist entities and their support systems is used to aid U.S. financial regulators, law enforcement, and other officials. However, Treasury does not have measures to track the use of this research in other agency activities, such as law enforcement investigations. Treasury officials also stated that terrorist designations have a deterrent value by discouraging further financial support. Measuring effectiveness in terms of deterrence can be very difficult, in part because the direct impact on unlawful activity is unknown, and in part because precise metrics are hard to develop for illegal and clandestine activities.
According to Treasury officials, measuring effectiveness can also be difficult because many of these efforts run across U.S. government agencies and foreign governments and are highly sensitive. Treasury's annual report and strategic plan, however, do not address the deterrent value of designations or discuss the difficulties in measuring their effectiveness. According to the Government Performance and Results Act (GPRA) of 1993, when it is not feasible to develop a measure for a particular program activity, the executive agency shall state why it is infeasible or impractical to express a performance goal for the program activity. OFAC officials told us that they are in the process of developing better measures, both quantitative and qualitative, for assessing OFAC's efforts related to designations and asset blocking and the achievements made. In addition, OFAC officials are in the process of developing a strategic plan to guide OFAC's efforts. This strategic planning effort will help OFAC develop measures to assess how its activities, including terrorist designations and asset blocking, contribute to Treasury's goals of disrupting and dismantling the financial infrastructure of terrorists and executing the nation's financial sanctions policies. According to GPRA, executive agency strategic plans should include a comprehensive mission statement, a set of general goals and objectives and an explanation of how they are to be achieved, and a description of how performance goals and measures are related to the general goals and objectives of the program. OFAC officials said they have initiated efforts to develop an OFAC-specific strategic plan and performance measures. In their technical comments in response to our draft report, officials stated that the new performance measures will relate to OFAC's research, outreach, and sanctions administration. Additionally, officials stated that they expect OFAC's new performance measures to be completed by December 1, 2005, and its new strategic plan to be completed by January 1, 2006. However, OFAC officials did not provide us with documentation to demonstrate that they have established milestones or a completion date for these projects. Without a strategy that integrates the funding and delivery of training and technical assistance by State and Treasury's OTA, the U.S. government will not maximize the use of its resources in the fight against terrorist financing. Meanwhile, due to disagreements over leadership and procedures, some staff energy and talent are wasted in trying to resolve interagency disputes. By making decisions based on anecdotal and informal information rather than transparent and systematic assessments, managers cannot effectively address problems before they grow and become crises. Moreover, given the scarce expertise available to address counter-terrorism financing, the U.S. government may miss opportunities to leverage resources if it does not focus on how all available U.S. and international resources can be integrated into a U.S. strategy. Finally, without dedicating resources to complete a performance measurement system, the State-led TFWG effort does not have the information needed for optimal coordination and planning. The lack of accountability for Treasury's designations and asset blocking program creates uncertainty about the department's progress and achievements. U.S. officials with oversight responsibilities need meaningful and relevant information to ascertain the progress, achievements, and weaknesses of U.S.
efforts to designate terrorists and dismantle their financial networks, as well as to hold managers accountable. Meaningful information may also help these officials understand the importance of asset blocking in the overall U.S. effort to combat terrorist financing as well as make resource allocation decisions across programs. To ensure that U.S. government interagency efforts to provide counter-terrorism financing training and technical assistance are integrated and efficient, particularly with respect to priority countries, we recommend that the Secretary of State and the Secretary of the Treasury, in consultation with NSC and relevant government agencies, develop and implement an integrated strategic plan for the U.S. government that does the following:

designates leadership and provides for stakeholder involvement;

includes a systematic and transparent assessment of U.S. government resources;

delineates a method for aligning the resources of relevant U.S. agencies to support the mission; and

provides processes and resources for measuring and monitoring results, identifying gaps, and revising strategies accordingly.

To ensure a seamless campaign in providing counter-terrorism financing training and technical assistance programs to vulnerable countries, we recommend that the Secretaries of State and the Treasury enter into a Memorandum of Agreement concerning counter-terrorism financing and anti-money laundering training and technical assistance. The agreement should specify:

the roles of each department, bureau, and office with respect to conducting needs assessments and delivering training and technical assistance;

methods to resolve disputes concerning OTA's use of confidentiality agreements in its contracts when providing counter-terrorism financing and anti-money laundering assistance; and

coordination of funding and resources for counter-terrorism financing and anti-money laundering training and technical assistance.

To ensure that policy makers and program managers are able to examine the overall achievements of U.S. efforts to block terrorists' assets, we also recommend that the Secretary of the Treasury provide more complete information on the nature and extent of asset blocking in the United States in Treasury's annual Terrorist Assets Report to Congress. Specifically, the report should include such information as the differences in amounts blocked between years, when and why assets were unfrozen, the achievements of and obstacles faced by the U.S. government, and, if necessary, a classified annex. In addition, as part of Treasury's ongoing strategic planning efforts, we recommend that the Secretary of the Treasury complete efforts to develop an OFAC-specific strategic plan and meaningful performance measures by January 1, 2006, and December 1, 2005, respectively, to guide and assess its asset blocking efforts. In view of congressional interest in U.S. government efforts to deliver training and technical assistance abroad to combat terrorist financing and the difficulty in obtaining a systematic assessment of U.S. resources dedicated to this endeavor, Congress should consider requiring the Secretary of State and the Secretary of the Treasury to submit an annual report to Congress on the status of the development and implementation of the integrated strategic plan and Memorandum of Agreement.
We provided draft copies of this report to the Departments of Defense, Homeland Security, Justice, State, and Treasury for review. We received comments from the Departments of Justice, State, and the Treasury (see apps. V, VI, and VII). We did not receive agency comments from the Departments of Defense or Homeland Security. State did not agree with our recommendation that the Secretaries of State and Treasury, in consultation with the NSC and relevant government agencies, develop and implement an integrated strategic plan to coordinate the delivery of training and technical assistance abroad. State asserted that it has an integrated strategic plan and believes that a series of NSC documents and State's Office of the Coordinator for Counterterrorism's Bureau Performance Plan serve this purpose. We reviewed the NSC documentation, which included minutes, an agreement, and conclusions, all of which serve as the NSC guidance for TFWG. We also reviewed State's Office of the Coordinator for Counterterrorism's Bureau Performance Plan, which we found included the Bureau's objectives and performance measures for counterterrorist financing programs. We do not agree that this NSC guidance and Bureau Performance Plan constitute an integrated strategy that addresses the issues raised in this report because the effort, in practice, does not have key stakeholder buy-in on roles and practices, a strategic alignment of resources with needs, or a system to measure performance and use results; thus, an integrated strategy is still needed. It is also noteworthy that Treasury did not state in its comments that an integrated strategic plan existed or was in place, nor did it highlight these specific documents as serving this purpose. Treasury did not directly address our recommendation for an integrated strategic plan and proposed a new title, "Integrated U.S. Strategic Plan Needed to Improve the Coordination of Counterterrorism Finance Training and Technical Assistance to Certain Priority Countries," which suggests agreement with the recommendation but would limit the integrated strategic plan's coverage to certain priority countries. Treasury also stated its agreement with the need for performance measures. It is useful to note that Treasury repeatedly placed the focus of efforts for improvement on priority countries and, as noted in its technical comments, does not recognize State's leadership over the delivery of training and technical assistance other than to priority countries. For example, in Treasury's technical comments Treasury stated that "State's role is coordinating each U.S. government agency's personnel and expertise to allow them to deliver the needed training in commonly agreed upon priority countries." This comment further supports the need to better integrate efforts. Justice stated that, given its role and expertise in providing training and technical assistance, the fact that it was not included as an equal partner with State and Treasury in the recommendation was a critical omission. We note that Justice is one of a number of agencies referred to as relevant government agencies in the recommendation. Justice receives funding from State and, according to Justice, State has been supportive of Justice's training and technical assistance efforts. State did not agree with our recommendation that the Secretaries of State and Treasury enter into a Memorandum of Agreement concerning counter-terrorism financing and anti-money laundering training and technical assistance.
State stated that it has an interagency agreement. Based on our review, the classified document serving as an interagency agreement lacks clarity, familiarity, and buy-in from all levels of leadership within TFWG, particularly Treasury. State added that if there were to be a Memorandum of Agreement, it believes the agreement should include all agencies engaged in providing training and technical assistance, not just State and Treasury. Treasury did not address this recommendation. However, Treasury stated that it wishes to improve the effectiveness of U.S. technical assistance to combat terrorist financing, particularly with respect to certain priority countries, and that it would welcome suggestions as to how Treasury, together with relevant U.S. government agencies, can better achieve that goal. Justice again stated that the report's critical flaw is omitting Justice from equal standing with State and Treasury. Justice noted that it is a key player and therefore should be involved in all interagency deliberations and decisions. We continue to believe that the Memorandum of Agreement should include the Secretaries of State and Treasury. State and Treasury both primarily fund and support U.S. government anti-money laundering and counter-terrorist financing training and technical assistance programs, and Treasury also provides considerable training and technical assistance abroad through current U.S. government employees and contractors. It is important that their programs and funding be integrated to optimize results. Other agencies are important stakeholders, as they are recipients of this funding and support, and they should benefit from improved coordination between these two agencies. In response to our recommendation that the Secretary of the Treasury provide more complete information on the nature and extent of asset blocking in the United States in its annual Terrorist Assets Report to Congress, Treasury responded in its technical comments that we should "instead recommend that Congress consider discontinuing the requirement that Treasury produce the annual report altogether." Treasury officials also stated that the Terrorist Assets Reports, "based upon the input of numerous government agencies, provides a snapshot of the known assets held in the United States by terrorist-supporting countries and terrorist groups at a given point in time. These numbers may fluctuate during each year and between years for a number of policy-permissible reasons. The amount of assets blocked under a terrorism sanctions program is not a primary measure of a terrorism sanctions program's effectiveness, and countries that have been declared terrorist supporting, and whose assets are not blocked by a sanctions program, are already weary of holding assets in the United States." Moreover, in its technical comments Treasury stated that the Terrorist Assets Reports were "not mandated or designed as an accountability measure for OFAC's effectiveness in assisting U.S. persons in identifying and blocking assets of persons designated under relevant Executive orders relating to terrorism." We acknowledge that the language in the mandate for the Terrorist Assets Reports did not explicitly designate the reports as an accountability measure; however, nothing in the statutory language or in the congressional intent underlying the mandate precludes Treasury from compiling and reporting information in the manner in which we have suggested in this report.
Furthermore, we believe that inclusion of comparative information and additional explanation regarding significant shifts between years will enhance program reporting and congressional oversight. Justice did not comment on this recommendation. State commented that this recommendation was incomplete in that it makes no mention of State's role in blocking assets and promoting international cooperation to achieve it; however, we did not include State in this recommendation because it is the Secretary of the Treasury who is responsible for producing the annual Terrorist Assets Reports. Treasury's technical comments state that "OFAC officials have advised that OFAC's new performance measures are expected to be completed by December 1, 2005, and its new strategic plan is expected to be completed by January 1, 2006." We modified our recommendation to incorporate this new information. State suggested in its technical comments that we revise this recommendation to read, "In addition, we recommend that the Secretary of the Treasury, in consultation with the Departments of State and Justice and the other departments and agencies represented on the Terrorist Finance Policy Coordination Committee, establish milestones for developing a strategic plan and meaningful performance measures to guide and assess its asset blocking process." We did not include the Secretary of State or the Attorney General in this recommendation because the scope of this objective focused solely on the accountability issues Treasury faces in its efforts to block terrorists' assets. However, we recognize that State has an important role in targeting individuals, groups, or other entities suspected of terrorism or terrorist financing, and we added language to the section of the report on terrorist designations to clarify the roles of the multiple agencies involved in this effort. Treasury's comments also suggested that we replace, in its entirety, our report's third objective on the accountability of Treasury's terrorist asset blocking efforts with revised text that Treasury officials had prepared. We reviewed the revised text and noted that many of Treasury's points were already covered in our report. In some cases, we added technical information to our report to help clarify the challenges that Treasury faces in assessing the impact of terrorist designation activities. None of these agencies provided comments on our matter for congressional consideration. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of State, the Secretary of the Treasury, and interested congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. The Chairman of the Senate Caucus on International Narcotics Control, Senator Charles E. Grassley; Senator Richard J. Durbin; and the Chairman of the Senate Committee on Homeland Security and Governmental Affairs, Senator Susan M.
Collins, asked us to (1) provide an overview of U.S. government efforts to combat terrorist financing abroad and (2) examine U.S. government efforts to coordinate the delivery of training and technical assistance to vulnerable countries. In addition, they requested that we examine specific accountability issues the Department of the Treasury (Treasury) faces in its efforts to block terrorists' assets held under U.S. jurisdiction. To provide an overview of U.S. government efforts to combat terrorist financing abroad, we reviewed documents and interviewed officials of U.S. agencies and departments and their bureaus and offices. We reviewed legislation, strategic plans, performance plans, and other agency documents, as well as relevant papers, studies, CRS reports, and our own work, to identify specific agency responsibilities and objectives. We assessed this information to identify key efforts and obtain further details and clarification, and we then validated and deconflicted information across agencies and departments in the United States and overseas in Indonesia, Pakistan, and Paraguay. We based country selection on Department of State (State) reporting of a nexus of terrorist financing, State reporting of assistance to the country, and the use of alternative financing mechanisms in the country. In each country, we discussed key challenges with responsible foreign and U.S. embassy officials, as well as with international entity officials. We grouped the different types of responsibilities into four categories (designations, intelligence and law enforcement, standards setting, and training) and validated these categories during meetings with U.S. government officials. Our scope and methodology were limited by a lack of complete access to sensitive and classified information. We reviewed documents or interviewed officials from the following U.S. departments and agencies: the Central Intelligence Agency; the Department of Defense (Defense Intelligence Agency); the Department of Homeland Security (Immigration and Customs Enforcement and Customs and Border Protection); the Department of Justice (Bureau of Alcohol, Tobacco, Firearms, and Explosives; Criminal Division's Asset Forfeiture and Money Laundering Section, Counter Terrorism Section, and Office of Overseas Prosecutorial Development, Assistance and Training; Drug Enforcement Administration; Federal Bureau of Investigation); the Department of State (Bureau of Economic and Business Affairs; Bureau for International Narcotics and Law Enforcement Affairs; Office of the Coordinator for Counterterrorism; Bureau of International Organizations; U.S. Mission to the United Nations; U.S. Agency for International Development; U.S. Missions to Indonesia, Pakistan, and Paraguay); and the Department of the Treasury (Office of Technical Assistance, Office of Foreign Assets Control, Financial Crimes Enforcement Network, the Office of Terrorist Financing and Financial Crime, and IRS's Criminal Investigation Division). We also verified U.S. government efforts through documentation or interviews with officials from international entities, including the Financial Action Task Force on Money Laundering, the International Monetary Fund (IMF), the World Bank, the United Nations (UN), and the Organization of American States. To examine U.S.
government efforts to coordinate the delivery of training and technical assistance to vulnerable countries, we examined relevant laws; reports to Congress; National Security Council (NSC) guidance; strategic plans; policies and procedures; budget and expenditure information; agency and international entity training data, documents, and reports; contractor resumes; communications between embassies and agencies; interagency communications; Web site information; and GAO criteria for strategic planning, collaboration, and performance results. In conjunction, we interviewed U.S. agency officials involved in the Terrorist Financing Working Group (TFWG), U.S. officials involved in the delivery of training and technical assistance abroad, and others with a stake in counter-terrorism financing training and technical assistance, including officials of international entities, foreign government officials, and experts. We also observed a TFWG meeting. We requested an interview with the NSC, but the NSC declined our request. We assessed U.S. efforts to coordinate the delivery of training and technical assistance to vulnerable countries against applicable elements of a sound strategic plan and identified those areas in which the U.S. effort is lacking. We assessed documentation and interviewed officials from:

the Department of Homeland Security (Immigration and Customs Enforcement);

the Department of Justice (Criminal Division's Asset Forfeiture and Money Laundering Section, Counter Terrorism Section, and Office of Overseas Prosecutorial Development, Assistance and Training; Federal Bureau of Investigation);

the Department of State (Bureau for International Narcotics and Law Enforcement Affairs, Office of the Coordinator for Counterterrorism, Bureau of International Organizations, U.S. Mission to the United Nations, U.S. Agency for International Development, and three U.S. embassies abroad);

the Department of the Treasury (Office of Technical Assistance, Office of Foreign Assets Control, Financial Crimes Enforcement Network, the Executive Office for Terrorist Financing and Financial Crime, and IRS's Criminal Investigation Division);

the Financial Action Task Force on Money Laundering (FATF);

international financial institutions, including the International Monetary Fund (IMF), World Bank, Asian Development Bank (ADB), and Inter-American Development Bank;

the United Nations (UN), including the Counter-Terrorism Committee and relevant UN Security Council resolution sanctions committees and monitoring mechanisms; and

the Organization of American States.

To examine specific issues the U.S. government faces in holding Treasury accountable for its efforts to block terrorists' assets held in the United States, we interviewed officials from the Department of the Treasury's Office of Foreign Assets Control (OFAC) in Washington, D.C. We reviewed applicable laws, regulations, and executive orders to determine reporting requirements. In addition, we examined OFAC's annual Terrorist Assets Reports for calendar years 1999 to 2004. Our examination focused on comparing the nature and extent of blocked assets by year for OFAC's three programs targeting international terrorists and terrorist organizations and its five programs targeting terrorism-supporting governments and regimes, in order to understand how OFAC communicated changes in an organization's or country's blocked assets over time.
We also compared and contrasted the performance measures for designation and asset blocking included in Treasury's Strategic Plan for fiscal years 2003-2008 with those indicated in its annual Performance and Accountability Reports for fiscal years 2003 and 2004. We reviewed testimony and speeches by OFAC and other Treasury officials, as well as information from OFAC's Web site, to learn more about key issues and progress made on designating terrorists and blocking their assets. We reviewed relevant information from the Congressional Research Service and our own work. To assess the extent to which Treasury's performance measures for designating terrorists and blocking assets focused on factors critical to assessing performance, we reviewed a range of our previous reports examining factors that were necessary components for meaningful measures. We performed our work from March 2004 through July 2005 in accordance with generally accepted government auditing standards. United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances (1988) (The Vienna Convention): Defines the concept of money laundering, providing the most widely accepted definition, and calls upon countries to criminalize the activity. Limited to drug trafficking as a predicate offense and does not address the preventative aspects. International Convention Against Transnational Organized Crime (2000) (The Palermo Convention): Came into force in September 2003. Obligates ratifying countries to criminalize money laundering via domestic law, include all serious crimes as predicate offenses of money laundering, whether committed in or outside of the country, and permit the required criminal knowledge or intent to be inferred from objective facts; establish regulatory regimes to deter and detect all forms of money laundering, including customer identification, recordkeeping, and reporting of suspicious transactions; authorize the cooperation and exchange of information among administrative, regulatory, law enforcement, and other authorities, both domestically and internationally; consider the establishment of a financial intelligence unit to collect, analyze, and disseminate information; and promote international cooperation. International Convention for the Suppression of the Financing of Terrorism (1999): Came into force in 2002. Requires ratifying countries to criminalize terrorism, terrorist organizations, and terrorist acts. Makes it unlawful for any person to provide or collect funds with the intent that the funds be used for, or the knowledge that the funds will be used to conduct, certain terrorist activity. Encourages states to implement measures that are consistent with the FATF recommendations. Security Council Resolutions 1267 and 1390: Adopted October 15, 1999, and January 16, 2002, respectively. Obligate member states to freeze the assets of individuals and entities associated with Osama bin Laden or members of al Qaeda or the Taliban that are included on the consolidated list maintained and regularly updated by the UN 1267 Sanctions Committee. Security Council Resolution 1373: Adopted September 28, 2001, in direct response to the events of September 11, 2001. Obligates countries to criminalize actions to finance terrorism and deny all forms of support; freeze funds or assets of persons, organizations, or entities involved in terrorist acts; prohibit active or passive assistance to terrorists; and cooperate with other countries in criminal investigations and in sharing information about planned terrorist acts.
Security Council Resolution 1617: Adopted July 29, 2005. Extended sanctions against al Qaeda, Osama bin Laden, and the Taliban, and strengthened previous related resolutions. Convention Against Corruption (2003): Not yet in force. The first legally binding multilateral treaty to address the problems relating to corruption on a global basis. As of July 11, 2005, 29 countries had become parties to the Convention (30 are required for the Convention to enter into force). Requires parties to institute a comprehensive domestic regulatory and supervisory regime for banks and financial institutions to deter and detect money laundering. The regime must emphasize requirements for customer identification, record keeping, and suspicious transaction reporting. Global Program Against Money Laundering: A research and assistance project offering technical expertise, training, and advice to member countries, upon request, on anti-money laundering and counter-terrorism financing. Its aims are to raise awareness; help create legal frameworks with the support of model legislation; develop institutional capacity, in particular with the creation of financial intelligence units; provide training for the legal, judicial, law enforcement, regulatory, and private financial sectors, including computer-based training; promote a regional approach to addressing problems; maintain strategic relationships; and maintain a database and perform analysis of relevant information. The Counter-Terrorism Committee (CTC): Established via Security Council Resolution 1373 to monitor the performance of member countries in building a global capacity against terrorism. Countries submit a report to the CTC on steps taken to implement the resolution's measures and report regularly on progress. The CTC asked each country to perform a self-assessment of existing legislation and mechanisms to combat terrorism in relation to Resolution 1373. The CTC identifies weaknesses and facilitates assistance but does not provide direct assistance. Financial Action Task Force on Money Laundering (FATF): Formed in 1989 by the G-7 countries, FATF is an intergovernmental body composed of 31 member jurisdictions and two regional organizations whose purpose is to develop and promote policies, at both the national and international levels, to combat money laundering and the financing of terrorism. Its mission expanded to include counter-terrorism financing in October 2001. FATF has developed multiple partnerships with international and regional organizations in order to constitute a global network of organizations against money laundering and terrorist financing. The 40 Recommendations on Money Laundering: Constitute a comprehensive anti-money laundering framework designed for universal application. Permit countries flexibility in implementing the principles according to their own particular circumstances and constitutional requirements. Although not binding as law, the recommendations have been widely endorsed by the international community and relevant organizations as the international standard for anti-money laundering. The Special Recommendations on Terrorist Financing: FATF adopted eight special recommendations and recently added a ninth. FATF members use a self-assessment questionnaire of their countries' actions to come into compliance.
The nine special recommendations deal with both formal banking and non-banking systems:

ratification and implementation of UN instruments;

criminalizing the financing of terrorism and associated money laundering;

freezing and confiscating terrorist assets;

reporting suspicious transactions related to terrorism;

international cooperation;

imposing anti-money laundering requirements on alternative remittance systems;

strengthening customer identification measures in international and domestic wire transfers;

ensuring that non-profit organizations are not misused; and

detecting and preventing cross-border transportation of cash by terrorists and other criminals.

The Non-Cooperative Countries and Territories (NCCT) List: One of FATF's objectives is to promote the adoption of international anti-money laundering/counter-terrorism financing standards by all countries. Thus, its mission extends beyond its own membership, although FATF can only sanction its member countries and territories. Therefore, in order to encourage all countries to adopt measures to prevent, detect, and prosecute money launderers (i.e., to implement the 40 recommendations), FATF adopted a process to identify non-cooperative countries and territories that serve as obstacles to international cooperation in this area and place them on a public list. An NCCT country is encouraged to make rapid progress in remedying its deficiencies; otherwise, countermeasures, which may include specific actions by FATF member countries, may be imposed. Most countries make a concerted effort to be taken off the NCCT list because being listed causes significant problems for their international business and reputation. Monitoring Members' Progress: Facilitated by a two-stage process: self-assessments and mutual evaluations. In the self-assessment stage, each member annually responds to a standard questionnaire regarding its implementation of the recommendations. In the mutual evaluation stage, each member is examined and assessed by experts from other member countries. Ultimately, if a member country does not take steps to achieve compliance, membership in the organization can be suspended. There is, however, a sense of peer pressure and a process of graduated steps before these sanctions are enforced. Methodology for Anti-Money Laundering/Counter-Terrorist Financing Assessments: FATF developed and adopted a comprehensive mutual assessment methodology for the 40 and special recommendations, based on consultations with the IMF, the World Bank, and other standard setters. The methodology provides an internationally agreed upon basis for assessing anti-money laundering/counter-terrorist financing regimes. Typologies Exercise: FATF issues annual reports on developments in money laundering through its typologies report, which keeps countries current with new techniques and trends. International Monetary Fund (IMF) and World Bank: The World Bank helps countries strengthen development efforts by providing loans and technical assistance for institutional capacity building. The IMF's mission involves financial surveillance and the promotion of international monetary stability.
Research and Analysis and Awareness-Raising: Conducted work on international practices in implementing anti-money laundering and counter-terrorist financing regimes; issued Analysis of the Hawala System, discussing implications for the regulatory and supervisory response; and developed a comprehensive reference guide on anti-money laundering/counter-terrorist financing presenting all relevant information in one source. Conducted a Regional Policy Global Dialogue series with countries, the World Bank and IMF, development banks, and FATF-style regional bodies, covering challenges, lessons learned, and assistance needed; and developed a Country Assistance Strategy that covers anti-money laundering and counter-terrorism in greater detail in countries that have been deficient in meeting international standards. Assessments: Worked in close collaboration with FATF and FATF-style regional bodies to produce a single comprehensive methodology for anti-money laundering/counter-terrorist financing assessments and engaged in a successful pilot program of assessments of country compliance with FATF recommendations. In 2004, adopted the FATF 40 and 9 special recommendations as one of the 12 standards and codes for which Reports on the Observance of Standards and Codes can be prepared and made anti-money laundering/counter-terrorist financing assessments a regular part of IMF/World Bank work. World Bank and IMF staff participated in 58 of the 92 assessments conducted since 2002. Training and Technical Assistance: Organized training conferences and workshops, delivered technical assistance to individual countries, and coordinated technical assistance. Substantially increased technical assistance to member countries on strengthening legal, regulatory, and financial supervisory frameworks for anti-money laundering/counter-terrorist financing. In 2002-2003, there were 85 country-specific technical assistance projects benefiting 63 countries and 32 projects reaching more than 130 countries. Between January 2004 and June 2005, the World Bank and IMF delivered an additional 210 projects. In 2004, the IMF and the World Bank decided to expand their anti-money laundering/counter-terrorist financing technical assistance work to cover the full scope of the expanded FATF recommendations following the successful pilot program of assessments. Egmont Group of Financial Intelligence Units: A forum for financial intelligence units (FIUs) to improve support for their respective national anti-money laundering and counter-terrorism financing programs. In June 2005, there were 101 member countries. The group fosters the development of FIUs and the exchange of critical financial data among FIUs. The group is involved in improving interaction among FIUs in the areas of communications, information sharing, and training coordination. The Egmont Group's Principles for Information Exchange Between Financial Intelligence Units for Money Laundering Cases include conditions for the exchange of information, limitations on permitted uses of information, and confidentiality. Members of the Egmont Group have access to a secure private Web site to exchange information. As of 2004, 87 of the members were connected to the secure Web site. The group has produced a compilation of one hundred sanitized cases about the fight against money laundering from its member FIUs. Within the group, there are five working groups: Legal, Outreach, Training/Communications, Operations, and Information Technology. The Egmont Group is focusing on expanding its membership in the Africa and Asia regions.
Counterterrorism Action Group (CTAG): CTAG includes the G-8 countries (Canada, France, Germany, Italy, Japan, Russia, the United Kingdom, and the United States) as well as other states, mainly donors, and seeks to expand counterterrorism capacity-building assistance. CTAG's goals are to analyze and prioritize needs and to expand training and assistance in critical areas, including counter-terrorism financing and other counterterrorism areas. CTAG also plans to work with the UN Counter-Terrorism Committee to promote implementation of Security Council Resolution 1373. In 2004, CTAG coordinated with FATF to obtain assessments of countries CTAG identified as priorities.

FATF-style regional bodies (FSRBs): FSRBs encourage implementation and enforcement of FATF's 40 recommendations and special recommendations. They administer mutual evaluations of their members, which are intended to identify weaknesses so that the member may take remedial action, and they provide members information about trends, techniques, and other developments in money laundering through their typology reports. The size, sophistication, and degree to which the FSRBs can carry out their missions vary greatly. The FSRBs are the Asia/Pacific Group on Money Laundering, the Caribbean Financial Action Task Force, the Council of Europe MONEYVAL, the Eastern and Southern African Anti-Money Laundering Group, the Eurasian Group on Combating Money Laundering and Financing of Terrorism, the Financial Action Task Force Against Money Laundering in South America, the Middle East and North Africa Financial Action Task Force, and the Inter-governmental Action Group Against Money Laundering (West Africa).

Organization of American States (CICAD): A regional body for security and diplomacy in the Western Hemisphere with 34 member states. In 2004, the commission amended the model regulations for the hemisphere to include techniques to combat terrorist financing, developed a variety of associated training initiatives, and held a number of anti-money laundering/counter-terrorism meetings. Its Mutual Evaluation Mechanism included updating and revising some 80 questionnaire indicators through which the countries mutually evaluate regional efforts and projects. CICAD worked with the Inter-American Development Bank and France to provide training for prosecutors and judges; based on an agreement with the Inter-American Development Bank for nearly $2 million, it is conducting a two-year project to strengthen FIUs in eight countries and is evaluating strategic plans and advising on technical design for FIUs in the region.

Asian Development Bank (ADB): Established in 1966, the ADB is a multilateral development finance institution dedicated to reducing poverty in Asia and the Pacific. The bank is owned by 63 members, mostly from the region, and engages mostly in public sector lending in its developing member countries. According to the ADB, it was one of the first multilateral development banks to address the money laundering problem, directly and indirectly, through regional and country assistance programs. The ADB policy paper adopted on April 1, 2003, has three key elements: (1) assisting developing member countries in establishing and implementing effective legal and institutional systems for anti-money laundering and counter-terrorism financing, (2) increasing collaboration with other international organizations and aid agencies, and (3) strengthening internal controls to safeguard ADB's funds. The bank provides loans and technical assistance for a broad range of development activities, including strengthening and developing anti-money laundering regimes.
Basel Committee on Banking Supervision: Established by the central bank governors of the Group of Ten countries in 1974, the committee formulates broad supervisory standards and guidelines and recommends statements of best practice in the expectation that individual authorities will take steps to implement them through detailed arrangements, statutory or otherwise, that are best suited to their own national systems. Three of the Basel Committee's supervisory standards and guidelines concern money laundering issues: (1) the Statement on Prevention of Criminal Use of the Banking System for the Purpose of Money Laundering (1988), which outlines basic policies and procedures that bank managers should ensure are in place; (2) the Core Principles for Effective Banking Supervision (1997), which provides a comprehensive blueprint for an effective bank supervisory system and covers a wide range of topics, including money laundering; and (3) Customer Due Diligence (2001), which also strongly supports adoption and implementation of the FATF recommendations.

International Association of Insurance Supervisors (IAIS): Its Anti-Money Laundering Guidance Notes for Insurance Supervisors and Insurance Entities (2002) is a comprehensive discussion of money laundering in the context of the insurance industry. The guidance is intended to be implemented by individual countries, taking into account the particular insurance companies involved, the products offered within the country, and the country's own financial system, and it is consistent with the FATF 40 recommendations and the Basel Core Principles for Effective Banking Supervision. The paper was updated as the Guidance Paper on Anti-Money Laundering and Combating the Financing of Terrorism (2004) with cases of money laundering and terrorist financing. A document based upon these cases is posted on the association's website and updated, and new cases that might result from the FATF typology project are to be added.

International Organization of Securities Commissions (IOSCO): Its members, some 105 national securities commissions, regulate and administer securities laws in their respective jurisdictions. IOSCO's core objectives are to protect investors; ensure that markets are fair, efficient, and transparent; and reduce systemic risk. It passed a "Resolution on Money Laundering" in 1992, and its Principles on Client Identification and Beneficial Ownership for the Securities Industry (2004) is a comprehensive framework relating to customer due diligence requirements that complements the FATF 40 recommendations. IOSCO and FATF have discussed further steps to strengthen cooperation among FIUs and securities regulators in order to combat money laundering and terrorist financing.

Terrorist Financing Working Group (TFWG): According to State, TFWG is made up of various agencies throughout the U.S. government and convened in October 2001 to develop and provide counter-terrorism finance training to countries deemed most vulnerable to terrorist financing. TFWG is co-chaired by State's Office of the Coordinator for Counterterrorism and the Bureau for International Narcotics and Law Enforcement Affairs and meets on a bi-weekly basis to receive intelligence briefings, schedule assessment trips, review assessment reports, and discuss the development and implementation of technical assistance and training programs. According to State, the process is as follows:

1. With input from the intelligence and law enforcement communities, State, Treasury, and the Department of Justice (Justice) identify and prioritize countries needing the most assistance to deal with terrorist financing.
2. Evaluate priority countries' counter-terrorism finance and anti-money laundering regimes with Financial Systems Assessment Team (FSAT) onsite visits or Washington tabletop exercises. State-led FSAT teams of six to eight members include technical experts from State, Treasury, Justice, and other regulatory and law enforcement agencies. The FSAT onsite visits take about one week and include in-depth meetings with host government financial regulatory agencies, the judiciary, law enforcement agencies, the private financial services sector, and non-governmental organizations.

3. Prepare a formal assessment report on vulnerabilities to terrorist financing and make recommendations for training and technical assistance to address these weaknesses. The formal report is shared with the host government to gauge its receptivity and to coordinate U.S. offers of assistance.

4. Develop a counter-terrorism financing training implementation plan based on FSAT recommendations. Counter-terrorism financing assistance programs include financial investigative training to "follow the money," financial regulatory training to detect and analyze suspicious transactions, judicial and prosecutorial training to build financial crime cases, financial intelligence unit development, and training on trade-based money laundering, such as over- and under-invoicing schemes used for money laundering or terrorist financing.

5. Provide sequenced training and technical assistance to priority countries in-country, regionally, or in the United States.

6. Encourage burden sharing with U.S. allies, international financial institutions (e.g., the IMF, the World Bank, and regional development banks), and international organizations such as the United Nations, the UN Counter-Terrorism Committee, the Financial Action Task Force on Money Laundering, and the Group of Eight (G-8) to capitalize on and maximize international efforts to strengthen counter-terrorism finance regimes around the world.

International Law Enforcement Academies (ILEAs): Regional academies led by U.S. agencies partnering with foreign governments to provide law enforcement training, including anti-money laundering and counter-terrorism financing. ILEAs in Gaborone, Botswana; Bangkok, Thailand; Budapest, Hungary; and Roswell, New Mexico, train over 2,300 participants annually on topics such as criminal investigations, international banking and money laundering, drug trafficking, human smuggling, and cyber-crime.

Individual U.S. agencies and offices contribute in the following ways:

Provides financial regulatory training and technical assistance to central banks, foreign banking supervisors, and law enforcement officials in Washington, D.C., and abroad, and participates in U.S. interagency assessments of foreign government vulnerabilities.

Provides financial regulatory training through seminars and regional conference presentations in Washington, D.C., and abroad, and participates in U.S. interagency assessments of foreign government vulnerabilities.

Provides law and border enforcement training and technical assistance to foreign governments, in conjunction with other U.S. law enforcement agencies and the ILEAs, and participates in assessments of foreign countries in the law and border enforcement arena.

Assists in the drafting of money laundering, terrorist financing, and asset forfeiture legislation compliant with international standards for international and regional bodies and foreign governments, and provides legal training and technical assistance to foreign prosecutors and judges in conjunction with Justice's Office of Overseas Prosecutorial Development, Training and Assistance.
Sponsors conferences and seminars on transnational financial crimes, such as forfeiting the proceeds of corruption, human trafficking, counterfeiting, and terrorism, and participates in U.S. interagency assessments of countries' capacity to block, seize, and forfeit terrorist and other criminal assets.

Provides investigative and prosecutorial training and technical assistance to foreign investigators, prosecutors, and judges in conjunction with the Office of Overseas Prosecutorial Development, Training, and Assistance and other Department of Justice components.

Provides law enforcement training on international asset forfeiture and anti-money laundering to foreign governments, in conjunction with other Department of Justice components and through ILEAs.

Provides basic and advanced law enforcement training to foreign governments on a bilateral and regional basis and through ILEAs and the Federal Bureau of Investigation's Academy in Quantico, Virginia; developed a two-week terrorist financing course that was delivered and accepted as the U.S. government's model; and participates in U.S. interagency assessments of countries' law enforcement and counter-terrorism capabilities.

Provides law enforcement training and technical assistance to foreign counterparts abroad in conjunction with other Department of Justice components.

Provides legal and prosecutorial training and technical assistance for criminal justice sector counterparts abroad and through ILEAs in drafting anti-money laundering and counter-terrorism financing statutes; provides Resident Legal Advisors to focus on developing counter-terrorism legislation that criminalizes terrorist financing and achieves other objectives; conducts regional conferences on terrorist financing, including a focus on charitable organizations; and participates in U.S. interagency assessments to determine countries' criminal justice system capabilities.

Coordinates and funds U.S. training and technical assistance provided by other U.S. agencies to develop or enhance the capacity of a selected group of more than two dozen countries whose financial sectors have been used to finance terrorism; also manages or provides funding for other anti-money laundering or counter-terrorism financing programs for the Department of State, other U.S. agencies, ILEAs, international entities, and regional bodies; and leads U.S. interagency assessments of foreign government vulnerabilities.

Provides law enforcement training for foreign counterparts and through ILEAs to develop the skills necessary to investigate financial crimes.

Provides legal technical assistance to foreign governments by drafting legislation that criminalizes terrorist financing, and provides resident advisors who deliver technical assistance to judicial officials in their home countries.

Provides financial intelligence training and technical assistance to a broad range of government officials, financial regulators, law enforcement officers, and others abroad, with a focus on the creation and improvement of financial intelligence units. FinCEN's IT personnel provide FIU technical assistance in two primary areas: analysis and development of network infrastructures and access to a secure web network for information sharing. Conducts personnel exchanges and conferences, partners with other governments and international entities to coordinate training, and participates in assessments of foreign governments' financial intelligence capabilities.
Provides law enforcement training and technical assistance to foreign governments and through ILEAs to develop the skills necessary to investigate financial crimes.

Provides financial regulatory training in Washington, D.C., and abroad for foreign banking supervisors.

Office of Technical Assistance: Provides a range of training and technical assistance, including intermittent and long-term resident advisors, to senior-level representatives in various ministries and central banks on a range of areas, including financial reforms related to money laundering and terrorist financing; conducts and participates in assessments of foreign government anti-money laundering regimes for the purpose of developing technical assistance plans; participates in U.S. interagency assessments of countries' counter-terrorism financing and anti-money laundering capabilities; and provides technical advice and practical guidance on how the international anti-money laundering and counter-terrorist financing standards should be adopted and implemented.

The following are GAO's comments on the Department of Justice's letter dated September 29, 2005.

1. Justice expressed concern that the draft report does not recognize the significant role it plays in providing international training and technical assistance in the money laundering and terrorist financing areas. The report acknowledges the roles of multiple agencies, including Justice, in delivering training and technical assistance to vulnerable countries. Under the first objective, we broadly describe U.S. efforts to provide training and technical assistance to vulnerable countries and note that U.S. offices and bureaus, primarily within the Departments of the Treasury, Justice, Homeland Security, and State, and the federal financial regulators, provide training and technical assistance to countries requesting assistance through various programs, using a variety of methods funded primarily by State and the Treasury. Moreover, appendix IV includes Table 2, which summarizes key U.S. counter-terrorism financing and anti-money laundering training and technical assistance programs for vulnerable countries and lists contributions provided by Justice, as well as other relevant agencies.

2. Justice expressed dismay that the report focuses on the interaction of State and Treasury rather than the accomplishments of the TFWG. While a number of comments suggested including information indicative of the successes of agency efforts to address terrorist financing abroad, much of this information is outside the scope of this report. However, we have made a number of changes in response to these comments. First, we have added information on the accomplishments of U.S. agencies to the report. Second, we have adjusted our first objective to clarify that we are providing an overview of U.S. agencies' efforts to address terrorist financing abroad. Third, as we note in other comments, we have adjusted the title of the report to better reflect the focus of our work.

3. Justice notes that the report addresses a narrower issue than the title implies. We agree. We have revised the title of the report to focus on our key recommendation.

4. According to Justice, our report contains a critical flaw because it does not recognize Justice as a key player nor does it place Justice on equal standing with State and Treasury in the report's recommendation and Memorandum of Agreement concerning training and technical assistance.
Justice noted that it should be involved in all interagency deliberations and decisions. The report acknowledges the roles of multiple important agencies, including Justice, in delivering training and technical assistance to vulnerable countries. The report recommends that the Secretaries of State and the Treasury develop and implement an integrated strategic plan in consultation with the NSC and relevant government agencies, of which Justice is one (see comment 1). We continue to believe that the Memorandum of Agreement should be limited to the Secretaries of State and the Treasury. State and Treasury both primarily fund and support U.S. government anti-money laundering and counter-terrorist financing training and technical assistance programs, and Treasury also provides considerable training and technical assistance abroad through current U.S. government employees and contractors. It is important that their programs and funding be integrated to optimize results. Other agencies are important stakeholders, as they are recipients of this funding and support and should benefit from improved coordination between these two agencies. Justice primarily receives funding from State and, according to Justice, State has been supportive of Justice's training and technical assistance efforts.

5. Justice states that, contrary to the impression conveyed in the draft, it fully respects the "honest broker role" that State plays as the TFWG coordinator. We have added information from Justice to more accurately portray Justice's support of State as TFWG coordinator in the Highlights page, Results in Brief, and body of the report.

Justice provided information in its technical comments that we believe is important to the key findings and recommendations in this report. While we have addressed Justice's technical comments as appropriate, we have reprinted and addressed specific technical comments below.

1. "The draft Report reflects that 'Justice officials confirmed that roles and procedures [of the TFWG] were a matter of dispute.' The context suggests that DOJ [Department of Justice] does not accept the leadership of the State Department. That is not an accurate statement. DOJ strongly agrees that there needs to be a designated coordinator in this TFWG process and supports that role being given to the State Department, which has been an honest broker in the process and DOJ has abided by its procedures. DOJ agrees with the observation that the Treasury Department does not accept the State Department's leadership or the procedures. . . ." Justice also commented on the same statement: "'Justice officials confirmed that roles and procedures were a matter of dispute.' It would be more accurate to replace 'dispute' with 'disagreement.'"

GAO response: Justice made these two comments concerning the statement in the draft report that "Justice officials confirmed that roles and procedures were a matter of dispute." We added language to show that Justice is supportive of State's role as coordinator of TFWG efforts and substituted the word "disagreement" for "dispute," so that the report now states that "Justice officials confirmed that roles and procedures were a matter of disagreement."
2. "The draft report references that AFMLS stated that 'the Department of State's leadership role is limited to its chairmanship of TFWG…' To be clear, this statement was not made to suggest that the TFWG be limited to priority countries, but rather that differing standards on procedures (particularly with DOJ leadership role in legislative drafting) for priority countries and vulnerable countries creates problems."

GAO response: In response to this point, we removed the report's reference to AFMLS and noted that Justice officials told us that having procedures and practices for TFWG priority countries that differ from those for other vulnerable countries creates problems.

The following are GAO's comments on the Department of State's letter dated October 3, 2005.

1. State noted in its comments that it does not believe the report accurately portrays the overall effectiveness and success of the Administration's counter-terrorism finance programs. While a number of comments suggested including information indicative of the successes of agency efforts to address terrorist financing abroad, much of this information is outside the scope of this report. However, we have made a number of changes in response to these comments. First, we have added information on the accomplishments of U.S. agencies to the report. For example, we added information on the number of needs assessment missions conducted and the number of countries receiving training and technical assistance. Second, we have adjusted our first objective to clarify that we are providing an overview of U.S. agencies' efforts to address terrorist financing abroad. Third, as we note in other comments, we have adjusted the title of the report to better reflect the focus of our work.

2. State commented that it has an integrated strategic plan, as evidenced by classified NSC Deputies Committee documentation and the Bureau Performance Plan of the Department of State's Office of the Coordinator for Counterterrorism. We reviewed the NSC Deputies Committee documentation, which includes minutes, an agreement, and conclusions, all of which serve as the NSC guidance for the TFWG. We also reviewed the performance plan, which includes the Office of the Coordinator for Counterterrorism's objectives and performance measures for counter-terrorist financing programs and provides some performance indicators, such as the number of assessments and training plans completed. Although some aspects of a strategic plan for delivering training and technical assistance are included in these documents, we do not agree that this guidance and performance plan include the elements necessary to constitute an integrated strategy for coordinating the delivery of training and technical assistance abroad. Beyond the lack of a fully integrated strategy on paper, the NSC guidance lacks clarity, particularly regarding coverage of non-priority countries, and it lacks familiarity and clear buy-in at the pertinent levels of the agencies. As a result, the documents did not guide the actions of the agencies in actual practice.

3. State commented that "if the country team, interagency and host government agree on an implementation plan, TFWG determines the necessary funding for State to obligate to each agency with the appropriate expertise." State added that it carefully monitors and can account for all of the funding Congress has appropriated for training programs coordinated through the TFWG, as provided in a classified report.
Our report did not specifically address TFWG-reported obligations and expenditures, as this information, which focuses on priority countries, was classified. Our report focused on the lack of transparency in the overall amount of funds available for all counter-terrorism training and technical assistance programs within State and the Treasury. Because funding is embedded within anti-money laundering and other programs, the U.S. government does not have a clear presentation of the budget resources that State and Treasury allocate for counter-terrorist financing training and technical assistance as differentiated from other programs. Although various officials told us that funding for counter-terrorism financing training and technical assistance is insufficient, the lack of a clear presentation of available budget resources makes it difficult for decision-makers to determine the actual amount that may be allocated to these efforts.

4. We do not agree with State's comment that TFWG has been very diligent in developing methods to measure its success. As of July 2005, the U.S. government, including TFWG, did not have a system in place to measure the results of its efforts to deliver training and technical assistance and to incorporate this information into integrated planning efforts. Our report acknowledges that an interagency committee was set up to develop a system to measure results and that other efforts were undertaken to track training and technical assistance; however, according to agency officials, these efforts have not yet resulted in performance measures.

5. Based on our review of NSC and other documents provided by State, the U.S. government lacks an integrated strategy to coordinate the delivery of training and technical assistance. The classified document serving as an interagency agreement lacks clarity as well as familiarity and buy-in from all agencies and levels of leadership within TFWG, particularly Treasury. The NSC guidance was agreed to at the deputy level, and we found that many working-level staff were not familiar with the guidance or its interpretation; Treasury staff clearly did not have the same interpretation as State staff.

6. State noted that there are established methods to resolve disputes that arise through the interagency process and that it is rare that the TFWG process cannot resolve issues. While there are guidelines for resolving disputes, in practice there are long-standing disagreements that have not been resolved. Based on discussions with agency officials and review of documentation, our report provides examples of long-standing disagreements that have not been resolved, such as the use of contractors and the procedures for conducting assessments of countries' needs for training and technical assistance.

7. State commented that it is the primary responsibility of the TFWG to coordinate all training and technical assistance and noted the existence of formal supporting documents. State commented that while it does not believe additional formal documents are necessary, if a Memorandum of Agreement concerning counter-terrorism financing and anti-money laundering training and technical assistance were to be developed, it should include all agencies involved in providing training and technical assistance.
Our review, as well as Treasury's technical comments, clearly shows that Treasury does not accept State's position that TFWG's primary responsibility is to coordinate all counter-terrorist financing training and technical assistance abroad; Treasury limits this role to priority countries. Based on our review of NSC and other documents provided by State, the U.S. government lacks an integrated strategy to coordinate the delivery of training and technical assistance. The classified document, which according to State serves as an interagency agreement, lacks clarity, familiarity, and buy-in from all levels of leadership within TFWG, particularly Treasury. State and Treasury both fund and support U.S. government anti-money laundering and counter-terrorist financing training and technical assistance programs, and Treasury also provides considerable training and technical assistance abroad through contractors and U.S. government employees. It is important that their programs and funding be integrated to optimize results. Other agencies are important stakeholders, as they are recipients of this funding and support and would benefit from improved coordination between these two agencies.

8. State comments that our recommendation to the Secretary of the Treasury regarding Treasury's annual Terrorist Assets Report to Congress is incomplete because it makes no mention of State's role in blocking assets. Specifically, we recommended that Treasury provide more complete information on the nature and extent of asset blocking in the United States in its annual Terrorist Assets Report to Congress. We did not incorporate the Secretary of State into this recommendation because the scope of our request for our third objective focused solely on the accountability issues Treasury faces in its efforts to block terrorists' assets. State also expressed disappointment that our report did not include details on State's role in terrorist designations. While our report provides an overview of how U.S. government agencies use designations to disrupt terrorist networks, we recognize that State has an important role, and we added language to provide more detail on State's role in targeting individuals, groups, or other entities suspected of terrorism or terrorist financing.

9. In response to agency comments, we have revised the title of the report to focus on our key recommendation.

10. The scope of our second objective was to examine U.S. efforts to coordinate the delivery of training and technical assistance to vulnerable countries. We found that the effort does not have key stakeholder buy-in on roles and practices, a strategic alignment of resources with needs, or a system to measure performance and incorporate this information into its planning efforts. According to agency officials, the lack of effective leadership leads to less than optimal delivery of training and technical assistance to vulnerable countries. Without a system to measure performance, the U.S. government and TFWG cannot ensure that their efforts are on track.

11. Although this report is based on unclassified information, GAO reviewed all unclassified and classified information provided by State in support of TFWG efforts. We believe that the findings, conclusions, and recommendations accurately portray the interagency process. Moreover, we reviewed and incorporated additional information provided by State subsequent to issuing our draft to the agencies for comment to ensure that all available information was assessed.
The following are GAO's comments on the Department of the Treasury's letter dated October 5, 2005.

1. Treasury notes in its comments that the report falls short in describing the comprehensive efforts of the U.S. government to combat terrorist financing abroad. While a number of comments suggested including information indicative of the successes of agency efforts to address terrorist financing abroad, much of this information is outside the scope of this report. However, we have made a number of changes in response to these comments. First, we have added information on the accomplishments of U.S. agencies to the report. For example, we added that Treasury has coordinated bilateral and international technical assistance with the FATF and the international financial institutions, such as the World Bank and the International Monetary Fund, to draft legal frameworks, build necessary regulatory and institutional systems, and develop human expertise. Second, we have adjusted our first objective to clarify that we are providing an overview of U.S. agencies' efforts to address terrorist financing abroad. Third, as we note in other comments, we have adjusted the title of the report to better reflect the focus of our work.

2. Treasury suggests that the title of the draft report be modified to be consistent with the primary focus of the report. We agree and have revised the title of the report to focus on the key recommendations.

3. Treasury states that the report does not accurately characterize Treasury's role in managing the U.S. government's relationship with international financial institutions. We recognize that Treasury plays an important role and added more examples of Treasury's relationships with international financial institutions, as provided in Treasury's technical comments. For example, we added Treasury's relationship with an intergovernmental body, the Financial Action Task Force, in setting international standards for anti-money laundering and counter-terrorism financing regimes. In addition, we added mention of Treasury's relationships with the Asian Development Bank, the IMF, and the World Bank.

4. Treasury comments that the report focuses on the difficulties and differences arising from the interagency process to coordinate training and technical assistance to combat terrorist financing abroad and fails to give due credit for the successes that have been achieved through unprecedented interagency coordination. Our report concludes that the U.S. government lacks an integrated strategy to coordinate the delivery of training and technical assistance because key stakeholders do not agree on roles and practices, there is no clear presentation of what funding is available for counter-terrorism financing training and technical assistance, and a system has not been established to measure performance and incorporate this information into planning efforts. Our report notes that, according to agency officials, the lack of effective leadership leads to less than optimal delivery of training and technical assistance to vulnerable countries. However, we have included some interagency accomplishments, such as the numbers of assessments, in our description of training and technical assistance efforts under objective 1. To best provide evidence of the effectiveness of U.S. government efforts, the U.S. government should continue to develop a system to measure performance and incorporate this information into its planning efforts.
5. In its comments, Treasury states that the report's third objective on accountability issues appears somewhat incongruous in a report dedicated to U.S. counter-terrorism training and technical assistance. Our requesters asked us to address specific issues related to U.S. efforts to combat terrorist financing abroad, including the accountability issues Treasury faces in its efforts to block terrorists' assets held under U.S. jurisdiction, particularly with regard to Treasury's annual Terrorist Assets Reports.

6. We reviewed the revised text provided by Treasury for our report's third objective on the accountability issues the department faces in its efforts to block terrorists' assets held under U.S. jurisdiction. We noted that we already cover many of Treasury's points in our report. However, in some cases we incorporated technical information to help clarify the challenges the department faces in assessing the impact of terrorist designation activities. In addition, we updated the report to reflect the most current status of Treasury's efforts to establish performance measures for OFAC. Additionally, we acknowledge that the language in the mandate for the Terrorist Assets Reports did not explicitly designate the reports as an accountability measure of the Office of Foreign Assets Control's effectiveness in identifying and blocking terrorist assets; however, nothing in the statutory language or in the congressional intent underlying the mandate precludes Treasury from compiling and reporting information in the manner we have suggested in this report. Furthermore, we believe that inclusion of comparative information and additional explanation regarding significant shifts between years will enhance program reporting and congressional oversight.

Treasury also provided the following technical comments, which we reprint and address below.

"The second paragraph of this section states, 'First, although the Department of State asserts that it leads the overall effort to deliver training and technical assistance to all vulnerable countries, the Department of Treasury does not accept State in this role.' This statement should be clarified to reflect that while Treasury does acknowledge State's role, it believes that State's function is necessarily one of coordination. State's role in this process is not to actually 'deliver' assistance. Rather, Treasury believes that State's role is coordinating each USG agency's personnel and expertise to allow them to deliver the needed training in commonly agreed upon priority countries. Treasury also acknowledges that the draft report is helpful in pointing out that this coordination can and should be improved to facilitate more effective delivery of assistance in priority countries."

"The first paragraph contains the following statement: 'According to the Department of State, its Office of the Coordinator for Counterterrorism is charged with directing, managing, and coordinating all U.S. government agencies' efforts to develop and provide counter-terrorism financing programs.' This statement is inaccurately overbroad, as Treasury (and likely other government agencies) has developed numerous counterterrorist financing programs to advance the core strategic aims identified in the 2003 NMLS [National Money Laundering Strategy]. It is more accurate to say that the department of State coordinates the USG provision of CFT technical assistance and training to priority countries."

"Substitute with the following language: 'However, the TAR was not mandated or designed as an accountability measure for OFAC's effectiveness in assisting U.S.
persons in identifying and blocking assets of persons designated under relevant Executive orders relating to terrorism. The report, as mandated, was intended to provide only a snapshot view in time of terrorist assets held in the United States by terrorist countries and organizations.'"

"Substitute with the following language: 'OFAC officials have advised that OFAC's new performance measures are expected to be completed by December 1, 2005, and its new strategic plan is expected to be completed by January 1, 2006.'"

"In the second paragraph, the following language: 'We also recommend that the Secretary of Treasury provide more complete information on the nature and extent of asset blocking in the United States in its Terrorist Assets Report to Congress and establish milestones for developing meaningful performance measures on terrorist designations and asset blocking activities . . . .' should be replaced with the following language: 'We also recommend Congress consider discontinuing the requirement that Treasury produce the annual Terrorist Assets Report to Congress. The report, based upon the input of numerous government agencies, provides a snapshot of the known assets held in the United States by terrorist-supporting countries and terrorist groups at a given point in time. These numbers may fluctuate during each year and between years for a number of policy-permissible reasons. The amount of assets blocked under a terrorism sanctions program is not a primary measure of a terrorism sanctions program's effectiveness, and countries that have been declared terrorist supporting, and whose assets are not blocked by a sanctions program, are already wary of holding assets in the United States.'"

GAO response: We noted Treasury's position on this recommendation in our report. However, we continue to believe that the annual Terrorist Assets Report, with the incorporated changes, would be useful to policymakers and program managers in examining the overall achievements of U.S. efforts to block terrorists' assets.

In addition to the contact named above, Christine Broderick, Assistant Director; Tracy Guerrero; Elizabeth Guran; Janet Lewis; and Kathleen Monahan made key contributions to this report. Martin de Alteriis, Mark Dowling, Jamie McDonald, and Michael Rohrback provided technical assistance.

Terrorist groups need significant amounts of money to organize, recruit, train, and equip adherents, and U.S. disruption of terrorist financing can raise terrorists' costs and risks and impede their success. This report (1) provides an overview of U.S. government efforts to combat terrorist financing abroad and (2) examines U.S. government efforts to coordinate training and technical assistance. We also examined specific accountability issues the Department of the Treasury faces in its efforts to block terrorists' assets held under U.S. jurisdiction.

U.S. efforts to combat terrorist financing abroad include a number of interdependent activities: terrorist designations, intelligence and law enforcement, standard setting, and training and technical assistance. First, the U.S. government designates terrorists, blocks their assets and financial transactions, and supports similar efforts of other countries. Second, intelligence and law enforcement efforts include operations, investigations, and exchanging information and evidence with foreign counterparts. Third, U.S.
agencies work through the United Nations and the Financial Action Task Force on Money Laundering to help set international standards to counter terrorist financing. Fourth, the U.S. government provides training and technical assistance directly to vulnerable countries and works with its allies to leverage resources.

The U.S. government lacks an integrated strategy to coordinate the delivery of counter-terrorism financing training and technical assistance to countries vulnerable to terrorist financing. Specifically, the effort does not have key stakeholder acceptance of roles and procedures, a strategic alignment of resources with needs, or a process to measure performance. First, the Department of the Treasury does not accept the Department of State's leadership or the State-led Terrorist Financing Working Group's (TFWG) procedures for the delivery of training and technical assistance abroad. While supportive of the Department of State's role as coordinator of TFWG efforts, Department of Justice officials confirmed that roles and procedures were a matter of disagreement. Second, the U.S. government does not have a clear presentation and objective assessment of its resources and has not strategically aligned them with its needs for counter-terrorist financing training and technical assistance. Third, the U.S. government, including TFWG, lacks a system for measuring performance and incorporating results into its planning efforts.

The Treasury faces two accountability issues related to its terrorist asset blocking efforts. First, Treasury's Office of Foreign Assets Control (OFAC) reports on the nature and extent of terrorists' U.S. assets do not provide Congress the ability to assess OFAC's achievements. Second, Treasury lacks meaningful performance measures to assess its terrorist designation and asset blocking efforts. OFAC is in the process of developing more meaningful performance measures, aided by its early efforts to develop an OFAC-specific strategic plan. Officials stated that OFAC's new performance measures will be completed by December 1, 2005, and its strategic plan will be completed by January 1, 2006; however, they did not provide us with documentation of milestones or completion dates.
GAO is a key source of professional and objective information and analysis and, as such, plays a crucial role in supporting congressional decision making. For example, in fiscal year 2003, as in other years, the challenges that most urgently engaged the attention of the Congress helped define our priorities. Our work on issues such as the nation's ongoing battle against terrorism, Social Security and Medicare reform, the implementation of major education legislation, human capital transformations at selected federal agencies, and the security of key government information systems helped congressional members and their staffs to develop new federal policies and programs and to oversee ongoing ones. Moreover, the Congress and the executive agencies took a wide range of actions in fiscal year 2003 to improve government operations, reduce costs, or better target budget authority based on GAO's analyses and recommendations.

In fiscal year 2003, GAO served the Congress and the American people by helping to identify steps to reduce improper payments and credit card fraud in government programs; restructure government and improve its processes and systems to maximize homeland security; prepare the financial markets to continue operations if terrorism strikes again; update and strengthen government auditing standards; improve the administration of Medicare as it undergoes reform; encourage and help guide federal agency transformations; contribute to congressional oversight of the federal income tax system; identify human capital reforms needed at the Department of Defense, the Department of Homeland Security, and other federal agencies; raise the visibility of long-term financial commitments and imbalances in the federal budget; reduce security risks to information systems supporting the nation's critical infrastructures; oversee programs to protect the health and safety of today's workers; ensure the accountability of federal agencies through audits; and serve as a model for other federal agencies by modernizing our approaches to managing and compensating our people.

To ensure that we are well positioned to meet the Congress's future needs, we update our 6-year strategic plan every 2 years, consulting extensively during the update with our clients in the Congress and with other experts (see app. I for our strategic plan framework). The following table summarizes selected performance measures and targets for fiscal years 1999 through 2005. Highlights of our fiscal year 2003 accomplishments and their impact on the American public are shown in the following sections. Many of the benefits produced by our work can be quantified as dollar savings for the federal government (financial benefits), while others cannot (other benefits). Both types of benefits resulted from our efforts to provide information to the Congress that helped (1) improve services to the public, (2) bring about statutory or regulatory changes, and (3) improve core business processes and advance governmentwide management reforms.

In fiscal year 2003, our work generated $35.4 billion in financial benefits, a $78 return on every dollar appropriated to GAO. The funds made available in response to our work may be used to reduce government expenditures or reallocated by the Congress to other priority areas. Nine accomplishments accounted for nearly $27.4 billion, or 77 percent, of our total financial benefits for fiscal year 2003; six of these accomplishments totaled $25.1 billion.
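As a rough consistency check on these figures (a back-of-the-envelope sketch; it assumes the $78 return is computed simply as total financial benefits divided by GAO's annual appropriation, which the text does not state explicitly):

```latex
% Implied appropriation if return = benefits / appropriation (assumption, not stated in the text):
\[
\text{appropriation} \approx \frac{\$35.4 \text{ billion}}{78} \approx \$454 \text{ million}
\]
% Share of total financial benefits from the nine largest accomplishments:
\[
\frac{\$27.4 \text{ billion}}{\$35.4 \text{ billion}} \approx 0.774 \approx 77\%
\]
```

Both ratios agree with the return and percentage reported above.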
Table 2 lists selected major financial benefits in fiscal year 2003 and describes the work contributing to financial benefits over $500 million. Many of the benefits that flow to the American people from our work cannot be measured in dollar terms. During fiscal year 2003, we recorded a total of 1,043 other benefits, up from 607 in fiscal year 1999. As shown in appendix II, we documented instances where information we provided to the Congress resulted in statutory or regulatory changes, where federal agencies improved services to the public, and where agencies improved core business processes or governmentwide reforms were advanced. These actions spanned the full spectrum of national issues, from securing information technology systems to improving the performance of state child welfare agencies.

We helped improve services to the public by:

Strengthening the U.S. visa process as an antiterrorism tool. Our analysis of the U.S. visa-issuing process showed that the Department of State's visa operations were more focused on preventing illegal immigrants from obtaining nonimmigrant visas than on detecting potential terrorists. We recommended that State reassess its policies, consular staffing procedures, and training program. State has taken steps to adjust its policies and regulations concerning the screening of visa applicants and its staffing and training for consular officers.

Enhancing quality of care in nursing homes. In a series of reports and testimonies since 1998, we found that, too often, residents of nursing homes were being harmed and that the Centers for Medicare and Medicaid Services' programs to oversee nursing home quality of care were not fully effective in identifying and reducing such problems. In 2003, we found a decline in the proportion of nursing homes that harmed residents but made additional recommendations to further improve care.

Making key contributions to homeland security. Drawing on an extensive body of completed and ongoing work, we identified specific vulnerabilities and areas for improvement to protect aviation and surface transportation, chemical facilities, sea and land ports, financial markets, and radioactive sealed sources. In response to our recommendations, the Congress and cognizant agencies have undertaken specific steps to improve infrastructure security and the assessment of vulnerabilities.

Improving compliance with seafood safety regulations. We reported that when Food and Drug Administration (FDA) inspectors identified serious violations at seafood processing firms, it took FDA 73 days on average to respond, well above its 15-day target. Based on our recommendations, FDA now issues warning letters in about 20 days.

We helped to change laws in the following ways:

We highlighted the National Smallpox Vaccination Program volunteers' concerns about losing income if they sustained injuries from an inoculation. As a result, the Smallpox Emergency Personnel Protection Act of 2003 (Pub. L. No. 108-20) provides benefits and other compensation to covered individuals injured in this way.

We performed analyses that culminated in the enactment of the Postal Civil Service Retirement System Funding Reform Act of 2003 (Pub. L. No. 108-18), which reduced the U.S. Postal Service's (USPS) pension costs by an average of $3 billion per year over the next 5 years. The Congress directed that the first 3 years of savings be used to reduce USPS's debt and hold postage rates steady until fiscal year 2006.
We also helped to promote sound agency and governmentwide management by:

Encouraging and helping guide agency transformations. We highlighted federal entities whose missions and ways of doing business require modernized approaches, including the Postal Service and the Coast Guard. Among congressional actions taken to deal with modernization issues, the House Committee on Government Reform established a special panel on postal reform and oversight to work with the President's Commission on the Postal Service on recommendations for comprehensive postal reform. Our recommendations led to better reporting by the Coast Guard and laid the foundation for key revisions the agency intended to make to its strategic plan.

Helping to advance major information technology modernizations. Our work has helped to strengthen the management of the complex multibillion-dollar information technology modernization program at the Internal Revenue Service (IRS) to improve operations, promote better service, and reduce costs. For example, IRS implemented several of our recommendations to improve software acquisition, enterprise architecture definition and implementation, and risk management, and to better balance the pace and scope of the program with IRS's capacity to manage it effectively.

Supporting controls over the Department of Defense's (DOD) credit cards. In a series of reports and testimonies beginning in 2001, we highlighted pervasive weaknesses in DOD's overall credit card control environment, including the proliferation of credit cards and the lack of specific controls over its multibillion-dollar purchase and travel card programs. DOD has taken many actions to reduce its vulnerabilities in this area.

While our primary focus is on improving government operations at the federal level, sometimes our work has an impact at the state and local levels. To the extent feasible, in conducting our audits and evaluations, we cooperate with state and local officials. At times, our work results will have local applications, and local officials will take advantage of our efforts. We are conducting a pilot to determine the feasibility of measuring the impact of our work on state and local governments. The following are examples we have collected during our pilot where our work is relevant for state and local government operations:

Identity theft. Effective October 30, 1998, the Congress enacted the Identity Theft and Assumption Deterrence Act of 1998, prohibiting the unlawful use of personal identifying information, such as names, Social Security numbers, and credit card numbers. GAO report GGD-98-100BR is mentioned prominently in the act's legislative history. Subsequently, a majority of states have enacted identity theft laws. Sponsors of some of these state enactments (in Alaska, Florida, Illinois, Michigan, Pennsylvania, and Texas) mentioned the federal law and/or our report. For example, in 1999, Texas enacted SB 46, which is modeled after the federal law. Justice officials said that enactment of state identity theft laws has multijurisdictional benefits for all levels of law enforcement: federal, state, and local.

Pipeline safety. Our report GAO-RCED-00-128, Pipeline Safety: The Office of Pipeline Safety Is Changing How It Oversees the Pipeline Industry, found that the Department of Transportation's Office of Pipeline Safety was reducing its reliance on states to help oversee the safety of interstate pipelines. The report stated that allowing states to participate in this oversight could improve pipeline safety.
As a result, the Office of Pipeline Safety modified its Interstate Pipeline Oversight Program for 2001-2002 to allow greater opportunities for state participation.

Temporary Assistance for Needy Families grant program. We reported on key national and state labor market statistics and changes in the levels of cash assistance and employment activities in five selected states. We also highlighted the fact that the five states had faced severe fiscal challenges and had used reserve funds to augment their spending above the amount of their annual Temporary Assistance for Needy Families block grant from the federal government.

Issued to coincide with the start of each new Congress, our high-risk update lists government programs and functions in need of special attention or transformation to ensure that the federal government functions in the most economical, efficient, and effective manner possible. This is especially important in light of the nation's large and growing long-term fiscal imbalance. Our latest report, released in January 2003, spotlights more than 20 troubled areas across government. Many of these areas involve essential government services, such as Medicare, housing programs, and postal service operations, that directly affect the lives and well-being of the American people. Our high-risk program, which we began in 1990, includes five high-risk areas added in 2003, among them implementing and transforming the new Department of Homeland Security; modernizing federal disability programs; federal real property; and the Pension Benefit Guaranty Corporation's (PBGC) single-employer pension insurance program. In fiscal year 2003, we also removed the high-risk designation from two programs: the Social Security Administration's Supplemental Security Income program and the asset forfeiture programs administered by the U.S. Departments of Justice and the Treasury.

In fiscal year 2003, we issued 208 reports and delivered 112 testimonies related to high-risk areas, and our related work resulted in financial benefits totaling almost $21 billion. Our sustained focus on high-risk problems also has helped the Congress enact a series of governmentwide reforms to strengthen financial management, improve information technology, and create a more results-oriented and accountable federal government. The President's Management Agenda for reforming the federal government mirrors many of the management challenges and program risks that we have reported on in our performance and accountability series and high-risk updates, including a governmentwide initiative to focus on strategic management of human capital. Following GAO's designation of federal real property as a high-risk issue, the Office of Management and Budget (OMB) has indicated its plans to add federal real property as a new program initiative under the President's Management Agenda. An executive order on federal real property was also recently issued that addresses many of GAO's concerns, including the need to better emphasize the importance of government property to effective management. We have an ongoing dialog with OMB regarding the high-risk areas, and OMB is working with agency officials to address many of them. Some of these high-risk areas may require additional authorizing legislation as one element of addressing the problems. Our fiscal year 2003 high-risk list is shown in table 3.

During fiscal year 2003, GAO executives testified at 189 congressional hearings, sometimes with very short notice, covering a wide range of complex issues.
Testimony is one of our most important forms of communication with the Congress; the number of hearings at which we testify reflects, in part, the importance and value of our expertise and experience in various program areas and our assistance with congressional decision making. The following figure highlights, by GAO's three external strategic goals for serving the Congress, examples of issues on which we testified during fiscal year 2003.

While the vast majority of our products—97 percent—were completed on time for our congressional clients and customers in fiscal year 2003, we slightly missed our target of providing 98 percent of them on the promised day. We track the percentage of our products that are delivered on the day we agreed to with our clients because it is critical that our work be done on time for it to be used by policymakers. Though our 97 percent timeliness rate was a percentage point improvement over our fiscal year 2002 result, it was still a percentage point below our goal. As a result, we are taking steps to improve our performance in the future by encouraging matrix management practices among the teams supporting various strategic goals and identifying early those teams that need additional resources to ensure the timely delivery of their products to our clients.

The results of our work were possible, in part, because of the changes we have made to maximize the value of GAO. With the Congress's support, we have demonstrated that becoming world class does not require substantial staffing increases, but rather maximizing the efficient and effective use of the resources available to us. Since I came to GAO, we have developed a strategic plan, realigned our organizational structure and resources, and increased our outreach and service to our congressional clients. We have developed and revised a set of congressional protocols, developed agency and international protocols, and better refined our strategic and annual planning and reporting processes. We have worked with you to make changes in areas where we were facing longer-term challenges when I came to GAO, such as in the critical human capital, information technology, and physical security areas. We are grateful to the Congress for supporting our efforts through pending legislation that, if passed, would give us additional human capital flexibilities that would allow us, among other things, to move to an even more performance-based compensation system and help to better position GAO for the future. As part of our ongoing effort to ensure the quality of our work, this year a team of international auditors will perform a peer review of GAO's performance audit work issued in calendar year 2004.

We continued our policy of proactive outreach to our congressional clients, the press, and the public to enhance the visibility of our products. On a daily basis, we compile and publish a list of our current reports. This feature has more than 18,000 subscribers, up 3,000 from last year. We also produced an update of our video on GAO, "Impact 2003." Our external Web site continues to grow in popularity, having increased the number of hits in fiscal year 2003 to an average of 3.4 million per month, 1 million more per month than in fiscal year 2002. In addition, visitors to the site are downloading an average of 1.1 million files per month. As a result, demand for printed copies of our reports has dramatically declined, allowing us to phase out our internal printing capability.
For the 17th consecutive year, GAO's financial statements have received an unqualified opinion from our independent auditors. We prepared our financial statements for fiscal year 2003, and the audit was completed a month earlier than last year and a year ahead of the accelerated schedule mandated by OMB. For a second year in a row, the Association of Government Accountants awarded us a certificate of excellence; this year the award was for the fiscal year 2002 annual performance and accountability report.

Given our role as a key provider of information and analyses to the Congress, maintaining the right mix of technical knowledge and expertise as well as general analytical skills is vital to achieving our mission. Because we spend about 80 percent of our resources on our people, we need excellent human capital management to meet the expectations of the Congress and the nation. Accordingly, in the past few years, we have expanded our college recruiting and hiring program and focused our overall hiring efforts on selected skill needs identified during our workforce planning effort and on succession planning needs. For example, we identified and reached prospective graduates with the required skill sets and focused our intern program on attracting those students with the skill sets needed for our analyst positions. Our efforts in this area were recognized by Washingtonian magazine, which listed GAO as one of the "Great Places to Work" in its November 2003 issue. Continuing our efforts to promote the retention of staff with critical skills, we offered student loan repayments to qualifying employees in their early years at GAO in exchange for signed agreements to continue working at GAO for 3 years.

We also have begun to better link compensation, performance, and results. In fiscal years 2002 and 2003, we implemented a new performance appraisal system for our analyst, attorney, and specialist staff that links performance to established competencies and results. We evaluated this system in fiscal year 2003 and identified and implemented several improvements, including conducting mandatory training for staff and managers on how to better understand and apply the performance standards and determining appropriate compensation. We will implement a new competency-based appraisal system, pay banding, and a pay-for-performance system for our administrative professional and support services staff this fiscal year. To train our staff to meet the new competencies, we developed an outline for a new competency-based and role- and task-driven learning and development curriculum that identified needed core and elective courses and other learning resources. We also completed several key steps to improve the structure of our learning organization, including hiring a Chief Learning Officer and establishing a GAO Learning Board to guide our learning policy, to set specific learning priorities, and to oversee the implementation of a new training and development curriculum.

We also drafted our first formal and comprehensive strategic plan for human capital to communicate both internally and externally our strategy for enhancing our standing as a model professional services organization, including how we plan to attract, retain, motivate, and reward a high-performing and top-quality workforce. We expect to publish the final plan this fiscal year. Our Employee Advisory Council is now a fully democratically elected body that advises GAO's senior executives on matters of interest to our staff.
We also established a Human Capital Partnership Board to gather opinions of a cross section of our employees about upcoming initiatives and ongoing programs. The 15-member board will assist our Human Capital Office in hearing and understanding the perspectives of its customers—our staff. In addition, we will continue efforts to be ready to implement the new human capital authorities included in legislation currently pending before the Senate. This legislation, if passed, would give us more flexibility to deal with mandatory pay and related costs during tight budgetary times.

Our resourceful management of information technology was recognized when we were named one of the "CIO (Chief Information Officer) 100" by CIO Magazine, recognizing excellence in managing our information technology (IT) resources through "creativity combined with a commitment to wring the most value from every IT dollar." We were one of three federal agencies named, selected from over 400 applicants that largely represented private sector firms. In particular, we were cited for excellence in asset management, staffing and sourcing, and building partnerships, and for implementing a "best practice"—staffing new projects through internal "help wanted" ads.

We have expanded and enhanced the IT Enterprise Architecture program we began in fiscal year 2002. We formally established an Enterprise Architecture oversight group and steering committee to prioritize our IT business needs, provide strategic direction, and ensure linkage between our IT Enterprise Architecture and our capital investment process. We implemented a number of user-friendly Web-based systems to improve our ability to obtain feedback from our congressional clients, facilitate access to our information for the external customer, and enhance productivity for the internal customer. Among the new and enhanced Web-based systems were an application to track and access General Counsel work by goal and team; a Web site on emerging trends and issues to provide information for our teams and offices as they consult with the Congress; and an automated tracking application for our staff to monitor the status of products to be published. In addition, we developed and released a system to automate an existing data collection and analysis process, greatly expanding our annual capacity to review DOD weapons systems programs. As a result, we were able to increase staff productivity and efficiency and enhance the information and services provided to the Congress. In the past, we were able to complete a review annually of eight DOD weapons systems programs. In fiscal year 2003, we reviewed 30 programs and reported on 26. Within the next year, that number will grow to 80 per year.

We recognize the ongoing, ever-present threat to our shared IT systems and information assets and continue to promote awareness of this threat, maintain vigilance, and develop practices that protect information assets, systems, and services. As part of our continuing emergency preparedness plan, we upgraded the level of telecommunications services between our disaster recovery site and headquarters, expanded our remote connectivity capability, and improved our response time and transmission speed.
To further protect our data and resources, we drafted an update to our information systems security policy, issued network user policy statements, hardened our internal network security, expanded our intrusion detection capability, and addressed concerns raised during the most recent network vulnerability assessment. We plan to continue initiatives to ensure a secure environment, detect intruders in our systems, and recover in the event of a disaster. We are also continuing to make the investments necessary to enhance the safety and security of our staff, facilities, and other assets for the mutual benefit of GAO and the Congress. In addition, we plan to continue initiatives designed to further increase employees’ productivity, facilitate knowledge sharing, and maximize the use of technology through tools available at the desktop and by reengineering the systems that support our business processes. On the basis of recommendations resulting from our physical security evaluation and threat assessment, we continue to implement initiatives to improve the security and safety of our building and personnel. In terms of the physical plant improvements, we upgraded the headquarters fire alarm system and installed a parallel emergency notification system. We completed a study of personal protective equipment, and based on the resulting decision paper, we have distributed escape hoods to GAO staff. We have also made a concerted effort to secure the perimeter and access to our building. Several security enhancements will be installed in fiscal year 2004, such as vehicle restraints at the garage ramps; ballistic-rated security guard booths; vehicle surveillance equipment at the garage entrances; and state-of-the-art electronic security comprising intrusion detection, access control, and closed-circuit surveillance systems. A team of international auditors, led by the Office of the Auditor General of Canada, will conduct a peer review for calendar year 2004 of our performance audit work. This entails reviewing our policies and internal controls to assess the compliance of GAO’s work with government audit standards. The review team will provide GAO with management suggestions to improve our quality control systems and procedures. Peer reviews will be conducted every 3 years. GAO is requesting budget authority of $486 million for fiscal year 2005. The requested funding level will allow us to maintain our base authorized level of 3,269 full-time equivalent (FTE) staff to serve the Congress, maintain operational support at fiscal year 2004 levels, and continue efforts to enhance our business processes and systems. This fiscal year 2005 budget request represents a modest increase of 4.9 percent over our fiscal year 2004 projected operating level, primarily to fund mandatory pay and related costs and estimated inflationary increases. The requested increase reflects an offset of almost $5 million from nonrecurring fiscal year 2004 initiatives, including closure of our internal print plant, and $1 million in anticipated reimbursements from a planned audit of the Securities and Exchange Commission’s (SEC) financial statements. Our requested fiscal year 2005 budget authority includes about $480 million in direct appropriations and authority to use $6 million in estimated revenue from reimbursable audit work and rental income. 
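As a rough arithmetic check of the request figures just cited (our own back-of-the-envelope restatement, not a table from the budget justification), the total decomposes into its stated components, and the implied fiscal year 2004 operating level can be derived by dividing out the stated 4.9 percent increase:

```python
# Rough consistency check of the fiscal year 2005 request, using only the
# figures stated above; the "implied FY 2004 base" is our derived estimate,
# not a number taken from the budget submission.
direct_appropriations = 480_000_000  # about $480M in direct appropriations
offsetting_authority = 6_000_000     # ~$6M from reimbursable audit work and rental income

request = direct_appropriations + offsetting_authority
print(f"Total requested budget authority: ${request / 1e6:.0f}M")  # $486M

implied_fy2004_base = request / 1.049  # request is 4.9% over the FY 2004 level
print(f"Implied FY 2004 operating level: about ${implied_fy2004_base / 1e6:.0f}M")  # ~$463M
```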
To achieve our strategic goals and objectives for serving the Congress, we must ensure that we have the appropriate human capital, fiscal, and other resources to carry out our responsibilities. Our fiscal year 2005 request would enable us to sustain needed investments to maximize the productivity of our workforce and to continue addressing key management challenges: human capital, and information and physical security. We will continue to take steps to "lead by example" within the federal government in these and other critical management areas. If the Congress wishes for GAO to conduct technology assessments, we are also requesting $545,000 to obtain four additional FTEs and contract assistance and expertise to establish a baseline technology assessment capability. This funding level would allow us to conduct one assessment annually and avoid an adverse impact on other high-priority congressional work.

We are grateful to the Congress for providing support and resources that have helped us in our quest to be a world class professional services organization. The funding we received in fiscal year 2004 is allowing us to conduct work that addresses many difficult issues confronting the nation. By providing professional, objective, and nonpartisan information and analyses, we help inform the Congress and executive branch agencies on key issues and on programs that involve billions of dollars and touch millions of lives. I am proud of the outstanding contributions made by GAO employees as they work to serve the Congress and the American people. In keeping with my strong belief that the federal government needs to exercise fiscal discipline, our budget request for fiscal year 2005 is modest but would maintain our ability to provide first-class, effective, and efficient support to the Congress and the nation to meet 21st century challenges in these critical times.

This concludes my statement. I would be pleased to answer any questions the Members of the Subcommittee may have.

GAO Efforts That Helped to Change Laws and/or Regulations

Consolidated Appropriations Resolution, 2003, Public Law 108-7. The law includes GAO's recommended language that the administration's competitive sourcing targets be based on considered research and sound analysis.

Smallpox Emergency Personnel Protection Act of 2003, Public Law 108-20. GAO's report on the National Smallpox Vaccination program highlighted volunteers' concerns about losing income if they sustained injuries from an inoculation. This statute provides benefits and other compensation to covered individuals injured in this way.

Postal Civil Service Retirement System Funding Reform Act of 2003, Public Law 108-18. Analyses performed by GAO and OPM culminated in the enactment of this law, which reduces USPS's pension costs by an average of $3 billion per year over the next 5 years. The Congress directed that the first 3 years of savings be used to reduce USPS's debt and hold postage rates steady until fiscal year 2006.

Accountability of Tax Dollars Act of 2002, Public Law 107-289. A GAO survey of selected non-CFO Act agencies demonstrated the significance of audited financial statements in that community. GAO provided legislative language that requires 70 additional executive branch agencies to prepare and submit audited annual financial statements.

Emergency Wartime Supplemental Appropriations Act, 2003, Public Law 108-11.
GAO assisted congressional staff with drafting a provision that made available up to $64 million to the Corporation for National and Community Service to liquidate previously incurred obligations, provided that the Corporation reports overobligations in accordance with the requirements of the Antideficiency Act.

Intelligence Authorization Act for Fiscal Year 2003, Public Law 107-306. GAO recommended that the Director of Central Intelligence report annually on foreign entities that may be using U.S. capital markets to finance the proliferation of weapons, including weapons of mass destruction, and this statute instituted a requirement to produce the report.

GAO Efforts That Helped to Improve Services to the Public

Strengthening the U.S. Visa Process as an Antiterrorism Tool. Our analysis of the U.S. visa-issuing process showed that the Department of State's visa operations were more focused on preventing illegal immigrants from obtaining nonimmigrant visas than on detecting potential terrorists. We recommended that State reassess its policies, consular staffing procedures, and training program. State has taken steps to adjust its policies and regulations concerning the screening of visa applicants and its staffing and training for consular officers.

Enhancing Quality of Care in Nursing Homes. In a series of reports and testimonies since 1998, we found that, too often, residents of nursing homes were being harmed and that programs to oversee nursing home quality of care at the Centers for Medicare and Medicaid Services were not fully effective in identifying and reducing such problems. In 2003, we found a decline in the proportion of nursing homes that harmed residents but made additional recommendations to further improve care.

Making Key Contributions to Homeland Security. Drawing upon an extensive body of completed and ongoing work, we identified specific vulnerabilities and areas for improvement to protect aviation and surface transportation, chemical facilities, sea and land ports, financial markets, and radioactive sealed sources. In response to our recommendations, the Congress and cognizant agencies have undertaken specific steps to improve infrastructure security and improve the assessment of vulnerabilities.

Improving Compliance with Seafood Safety Regulations. We reported that when Food and Drug Administration (FDA) inspectors identified serious violations at seafood processing firms, it took FDA an average of 73 days to issue warning letters, well above its 15-day target. Based on our recommendations, FDA now issues warning letters in about 20 days.

Strengthening Labor's Management of the Special Minimum Wage Program. Our review of this program resulted in more accurate measurement of program participation and noncompliance by employers and prevented inappropriate payment of wages below the minimum wage to workers with disabilities.

Reducing National Security Risks Related to Sales of Excess DOD Property. We reported that DOD did not have systems and procedures in place to maintain visibility and control over 1.2 million chemical and biological protective suits and certain equipment that could be used to produce crude forms of anthrax. Unused suits (some of which were defective) and equipment were declared excess and sold over the Internet. DOD has taken steps to notify state and local responders who may have purchased defective suits.
Also, DOD has taken action to restrict chemical-biological suits to DOD use only—an action that should eliminate the national security risk associated with sales of these sensitive military items. Lastly, DOD has suspended sales of the equipment in question pending the results of a risk assessment.

GAO Efforts That Helped to Change Laws and/or Regulations

Protecting the Retirement Security of Workers. We alerted the Congress to potential dangers threatening the pensions of millions of American workers and retirees. The pension insurance program's ability to protect workers' benefits is increasingly being threatened by long-term structural weaknesses in the private defined benefit pension system. A comprehensive approach is needed to mitigate or eliminate the risks.

Improving Mutual Fund Disclosures. To improve investor awareness of mutual fund fees and to increase price competition among funds, we identified alternatives for regulators to increase the usefulness of fee information disclosed to investors. Early in fiscal year 2003, the Securities and Exchange Commission issued proposed rules to enhance mutual fund fee disclosures using one of our recommended alternatives.

GAO Efforts That Helped to Promote Sound Agency and Governmentwide Management

Encouraging and Helping Guide Agency Transformations. We highlighted federal entities whose missions and ways of doing business require modernized approaches, including the Postal Service and the Coast Guard. Among congressional actions taken to deal with modernization issues, the House Committee on Government Reform established a special panel on postal reform and oversight to work with the President's Commission on the Postal Service on recommendations for comprehensive postal reform. We also reported this year on the Coast Guard's ability to effectively carry out critical elements of its mission, including its homeland security responsibilities. We recommended that the Coast Guard develop a blueprint for targeting its resources to its various mission responsibilities and a better reporting mechanism for informing the Congress on its effectiveness. Our recommendations led to better reporting by the Coast Guard and laid the foundation for key revisions the agency intended to make to its strategic plan.

Helping DOD Recognize and Address Business Modernization Challenges. Several times we have reported and testified on the challenges DOD faces in trying to successfully modernize about 2,300 business systems, and we made a series of recommendations aimed at establishing the modernization management capabilities needed to be successful in transforming the department. DOD has implemented some key architecture management capabilities, such as assigning a chief architect and creating a program office, as well as issuing the first version of its business enterprise architecture in May 2003. In addition, DOD has revised its system acquisition guidance. By implementing our recommendations, DOD is increasing the likelihood that its systems investments will support effective and efficient business operations and provide for timely and reliable information for decision making.

Helping to Advance Major Information Technology Modernizations. Our work has helped to strengthen the management of the complex, multibillion-dollar information technology modernization program at the Internal Revenue Service (IRS) to improve operations, promote better service, and reduce costs.
For example, IRS implemented several of our recommendations to improve software acquisition, enterprise architecture definition and implementation, and risk management and to better balance the pace and scope of the program with its capacity to effectively manage it.

Improving Internal Controls and Accountability over Agency Purchases. Our work examining purchasing and property management practices at FAA identified several weaknesses in the specific controls and overall control environment that allowed millions of dollars of improper and wasteful purchases to occur. Such weaknesses also contributed to many instances of property items not being recorded in FAA's property management system, which allowed hundreds of lost or missing property items to go undetected. Acting on our findings, FAA established key positions to improve management oversight of certain purchasing and monitoring functions, revised its guidance to strengthen areas of weakness and to limit the allowability of certain expenditures, and recorded into its property management system assets that we identified as unrecorded.

Strengthening Government Auditing Standards. Our publication of the Government Auditing Standards in June 2003 provides a framework for audits of federal programs and monies. This comes at a time of urgent need for integrity in the auditing profession and for transparency and accountability in the management of scarce resources in the government sector. The new revision of the standards strengthens audit requirements for identifying fraud, illegal acts, and noncompliance, and gives clear guidance to auditors as they contribute to a government that is efficient, effective, and accountable to the people.

Supporting Controls over DOD's Credit Cards. In a series of reports and testimonies beginning in 2001, we highlighted pervasive weaknesses in DOD's overall credit card control environment, including the proliferation of credit cards and the lack of specific controls over its multibillion-dollar purchase and travel card programs. We identified numerous cases of fraud, waste, and abuse and made 174 recommendations to improve DOD's credit card operations. DOD has taken many actions to reduce its vulnerabilities in this area.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. In the years ahead, its support to the Congress will likely prove even more critical because of the pressures created by the nation's large and growing long-term fiscal imbalance, which is driven primarily by known demographic and rising health care trends. These pressures will require the Congress to make tough choices regarding what the government does, how it does business, and who will do the government's business in the future. GAO's work covers virtually every area in which the federal government is or may become involved, anywhere in the world.
Perhaps just as importantly, GAO's work sometimes leads it to sound the alarm over problems looming just beyond the horizon—such as the nation's enormous long-term fiscal challenges—and help policymakers address these challenges in a timely and informed manner. The Comptroller General presented testimony that focused on GAO's progress during his first five years in office. He highlighted GAO's (1) fiscal year 2003 performance and results; (2) efforts to maximize its effectiveness, responsiveness, and value; and (3) budget request for fiscal year 2005 to support the Congress and serve the American people. The funding GAO received in fiscal year 2003 allowed it to conduct work that addressed many of the difficult issues confronting the nation, including diverse and diffuse security threats, selected government transformation challenges, and the nation's long-term fiscal imbalance. Its work was also driven by changing demographic trends, which led it to focus on such areas as the quality of care in the nation's nursing homes and the risks to the government's single-employer pension insurance program. Importantly, in fiscal year 2003, GAO generated a $78 return for each $1 appropriated to the agency. With the Congress's support, GAO demonstrated that becoming world class does not require a substantial increase in the number of staff authorized, but rather maximizing the efficient and effective use of the resources available to it. During tight budget times, human capital flexibilities would allow GAO, among other things, more options to deal with mandatory pay and related costs. In keeping with the Comptroller General's belief that the federal government needs to exercise a greater degree of fiscal discipline, GAO has kept its request to $486 million, an increase of only 4.9 percent over fiscal year 2004. In keeping with the Congress's intent, GAO is continuing its efforts to revamp its budget presentation to make the linkages between funding and program areas more clear. Hopefully, in the future the Congress will be able to use such performance information to make tough choices on funding, thereby enabling it to avoid across-the-board reductions that penalize agencies that exercise fiscal discipline and generate high returns on investment and real results.
Under the BHC Act, a bank holding company must obtain FRB's approval before merging with or acquiring another bank holding company. In reviewing an application filed by a bank holding company, FRB is required to consider several factors, including the financial and managerial resources of the applicant, the future prospects of both the applicant and the bank holding company that is to be acquired, the competitive effects of the merger, and the convenience and needs of the community to be served. Even before CRA was enacted, FRB's regulations called for public comments on merger applications, pursuant to FRB's obligation under the BHC Act to ensure that a merger would meet the convenience and needs of the local community. The Board of Governors has the authority to delegate its application authority to the Reserve Banks if the application fits certain criteria. However, an application may raise issues that require action by the Board under such factors as the financial and managerial resources and the convenience and needs of the community, including the applicant's CRA performance.

FRB approved the six BHC mergers in our study. With the exception of NBD's acquisition of First Chicago and Bank One's acquisition of NBD First Chicago, the lead bank subsidiaries also took actions to merge their operations. Bank subsidiaries are also required to receive approval from their primary regulators for such combinations. For three BHC mergers, the lead bank subsidiaries of the merging BHCs submitted their applications to their primary regulators after FRB approved their BHC applications.

CRA requires all federal bank and thrift regulators—FRB, the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the Federal Deposit Insurance Corporation (FDIC)—to encourage depository institutions under their jurisdiction to help meet the credit needs in all areas of the community that the institution is chartered to serve, consistent with safe and sound operations. CRA requires that the appropriate federal supervisory authority (1) assess the institution's record of meeting the credit needs of its entire community, including low- and moderate-income (LMI) areas, and (2) take that record into account in its evaluation of bank expansion applications. Such applications include those to establish or relocate a branch or home office and applications for mergers, consolidations, or the purchase of assets or assumption of liabilities of a regulated financial institution. Assessment areas, also called delineated areas, represent the communities for which the regulators are to assess an institution's record of CRA performance. CRA also requires the regulators to periodically assess an institution's community lending performance during examinations. Only insured banks and thrifts are subject to the provisions of CRA. On the basis of the findings of the examination, depository institutions are assigned a rating—that is, outstanding, satisfactory, needs to improve, or substantial noncompliance. Nonbank financial institutions, such as mortgage companies, are not subject to CRA provisions. Unlike certain other banking laws, CRA does not provide regulators with the authority to take enforcement action on the basis of findings of noncompliance resulting from the examination process. The CRA application evaluation process is the exclusive mechanism for enforcing the statute. Regulations proposed in 1993 and 1994 by the regulators included a new set of sanctions to enforce CRA.
According to the proposed regulations, a poor CRA rating would have been considered a violation of a bank's affirmative obligation to meet the credit needs of its entire community. A bank that received a CRA rating of substantial noncompliance would have been subject to enforcement actions authorized by the Federal Deposit Insurance Act. In a letter to OCC dated December 15, 1994, the Department of Justice (Justice) concluded that the agencies lack legal authority to use cease and desist orders and civil money penalties to combat noncompliance with CRA. The final regulations did not contain the enforcement provisions but, consistent with the statute, did require that the CRA record be taken into account in the application process.

Since the initial enactment of CRA, the regulations that implement the act have been amended. In 1993, the Clinton Administration instructed the federal bank regulators to revise the CRA regulations by moving from a process- and paperwork-based system to a performance-based system focusing on results, especially the results in LMI areas of an institution's communities. Based on these instructions, the federal banking agencies replaced the qualitative CRA examination system with a more quantitative system that is based on actual performance. For large retail institutions, CRA performance is measured through the use of three tests, as follows. The lending test entails a review of an institution's lending record, including originations and purchases of home mortgage, small business, small farm, and, at the institution's option, consumer loans throughout the institution's service area, including the LMI areas. The lending test is weighted more heavily than the investment and service tests in the institution's overall CRA rating. The investment test evaluates an institution's investment in community development activities. The service test requires the examiner to analyze an institution's system for delivering retail banking services and the extent and innovativeness of its community development services. In May 1995, the bank regulators issued the new CRA regulations (the performance-based CRA regulations). For large institutions, the performance-based CRA regulations became effective on July 1, 1997. Therefore, the CRA ratings that FRB relied upon in the six merger applications we considered were mostly from the previous process- and paperwork-based system. Most of the bank subsidiaries of the BHCs we reviewed were national banks regulated by OCC. The exception was Chemical Bank, which is a state-chartered bank regulated by FRB and the New York State Banking Supervisor.

HMDA was enacted to provide regulators and the public with information on home mortgage lending so that both could determine whether institutions were serving the credit needs of their communities. HMDA was amended in 1989 to include the collection of data on the race, sex, and income of applicants and the action taken on the application. Home mortgage lenders that are required to report are to submit HMDA data files for each loan application. HMDA reporting requirements at first applied only to banks and their subsidiaries. Over the years, Congress has expanded HMDA's coverage to include mortgage banking subsidiaries of bank holding companies and independent mortgage companies that have assets above a certain level and a home or branch office in a metropolitan statistical area (MSA).
For data collection in 1998, depository institutions with an office in an MSA were covered if they had more than $29 million in assets as of December 31, 1997. Nondepository lenders were covered if they were located in or made loans in metropolitan areas and had assets of more than $10 million or if they originated 100 or more home purchase loans in the preceding year.

FRB's Regulation BB describes the data that depository institutions are required to collect and maintain for CRA purposes. Under revisions of Regulation BB, depository institutions defined as "large" were required, beginning in 1996, to collect and report data annually on the number and dollar amount of their originations and purchases of small loans to businesses and farms and on any community development loans. Only independent institutions with total assets of $250 million or more, and institutions of any size that are owned by a holding company with assets of $1 billion or more, are subject to the data reporting requirements. The Federal Financial Institutions Examination Council (FFIEC) made the CRA data on 1996 small business lending available to the public in October 1997. The data on business and farm lending reported under the CRA regulations are more limited in scope than data reported on home mortgage lending under HMDA. In particular, the CRA data include information only on loans originated or purchased, not on applications that are turned down or withdrawn by the customer. Also, unlike HMDA data, the CRA data do not include the income, sex, or racial or ethnic background of applicants. Finally, again unlike HMDA data, the CRA data are not reported and disclosed application by application; rather, the data are aggregated into three loan-size categories and then reported at the census tract level.

To determine FRB's legal responsibilities for assessing CRA performance, we reviewed the BHC Act and CRA, their legislative histories, regulations promulgated under each act, and related published materials. To assess FRB's process for reviewing BHC merger applications for CRA performance, we used a case study approach. We selected, on the basis of the assets of the acquired BHC, the two largest BHC mergers in 1995, the single largest BHC merger in 1996 and again in 1997, and the two largest BHC mergers in 1998. We reviewed the CRA public evaluation reports of the lead bank subsidiaries of the BHCs included in our case studies, internal FRB memorandums and analyses conducted in conjunction with the six merger applications, orders publicly issued by the Board of Governors containing approval of each merger, public comment letters, and FRB summaries of concerns raised in public comment letters. The scope of our reported findings on how FRB addressed the principal public concerns was limited by the confidentiality of particular FRB analyses and conclusions. FRB's process for the six mergers in our study cannot be generalized to all large BHC mergers because of the small sample size (i.e., six mergers) and the judgment involved in selecting the sample. We focused on FRB's BHC merger application process in reviewing the six mergers. We did not assess the quality of previous CRA examinations conducted by primary banking regulators or the accuracy of public comments. We also did not verify the accuracy of data and other inputs relied upon by FRB in its review of the six merger applications.
To address the third objective on premerger and postmerger home mortgage lending for three of the six mergers completed in 1995 and 1996, we obtained and analyzed HMDA data. We did not verify the accuracy of the HMDA data. In addressing our three objectives, we interviewed officials from FRB, the Federal Reserve Bank of New York, OCC, the Office of New York State's Supervisor of Banking, the BHCs included in our case studies, the American Bankers' Association, the Consumer Bankers' Association, and a selected number of community groups submitting public comments in opposition to the mergers included in our case studies. We also reviewed relevant published literature on CRA, home mortgage lending, and the use of HMDA data. Appendix I provides a more detailed discussion of our scope and methodology. We conducted our work in Charlotte, NC; Chicago, IL; New York, NY; and Washington, D.C., between June 1998 and August 1999 in accordance with generally accepted government auditing standards.

We requested comments on a draft of this report from FRB and OCC. FRB's and OCC's written comments are discussed near the end of this letter and are reprinted in appendixes VI and VII, respectively. In addition, we provided Bank One, Chase Manhattan, and Fleet the section of our draft report from our HMDA analysis on their respective institutions. We incorporated their technical comments where appropriate.

In acting on a BHC merger application, FRB must consider the convenience and needs of the community to be served under the BHC Act and take into account the records of the relevant depository institutions under CRA. Neither the BHC Act nor CRA, nor their legislative histories, provides guidance on how FRB is to take into account the convenience and needs of the community when considering a BHC merger application. The federal regulators, including FRB, have developed guidance on how to assess a depository institution's CRA performance. However, FRB has not developed guidance on how it will evaluate the CRA record, comprising the regulators' ratings of institutions' CRA performance and comments from the public, for large BHC merger applications.

Under the BHC Act, FRB is required to review the bank holding company's merger application for the convenience and needs of the communities to be served. FRB has defined convenience and needs to relate to the effect of a proposal on the availability and quality of banking services in a community. FRB considers convenience and needs as including the record of CRA performance. The requirement to consider the convenience and needs of the community has been included as part of the BHC Act since its original enactment in 1956. In the 1970s, Congress increased the emphasis on depository institutions' meeting the convenience and needs of local communities when it passed CRA. CRA was passed in response to a national concern over redlining practices. CRA requires federal regulators, including FRB, to take into account the CRA record of the applicant in their evaluation of an application related to a deposit facility. CRA defines applications to include (1) applications to establish or relocate a branch or home office and (2) applications for mergers, consolidations, or the purchase of assets or assumption of liabilities of a regulated financial institution. Nonbank subsidiaries of BHCs are not subject to CRA.
However, CRA regulations allow bank subsidiaries of BHCs to receive CRA credit for home mortgage loans originated by affiliated nonbank subsidiaries in the delineated areas of the bank subsidiaries.

The federal depository institution regulators, including FRB, have developed guidance, using rulemaking and additional efforts, on how CRA performance should be considered during the applications process for depository institutions. In 1989, the federal bank regulators published The Statement of the Federal Financial Supervisory Agencies Regarding the Community Reinvestment Act (the Statement). The Statement was designed to provide federally insured financial institutions and the public with guidance regarding the requirements of CRA and the policies and procedures the agencies would apply during the depository institution application process. After the performance-based CRA regulations were issued in 1995, FFIEC published Interagency Questions and Answers Regarding Community Reinvestment in 1997 and 1999. The 1989 Statement was withdrawn effective April 5, 1999, and replaced by the Interagency Questions and Answers Regarding Community Reinvestment.

The 1989 Statement, which was in effect during the mergers contained in our study, included guidance on the following issues: the basic components of an effective CRA policy, the role of examination reports on CRA performance in reviewing applications, the need for periodic review and documentation by financial institutions of their CRA performance, and the role of commitments in assessing an institution's performance. Most notably, the regulators concluded in the Statement that the CRA record of the institution, as reflected in its examination reports, would be given great weight in the application process. In the Interagency Questions and Answers for 1999, the regulators continued to stress the significance of the CRA examination in the application process, and they stated that the examination is an important, and often controlling, factor in the consideration of an institution's record.

In addition to the CRA examination, the regulators have consistently underscored the importance of public comments to the applications process. According to the 1989 Statement, the CRA examination is not conclusive evidence in the face of significant and supported allegations from a commenter. Moreover, the balance may be shifted further when the examination is not recent or the particular issue raised in the application proceeding was not addressed in the examination. During the development of the performance-based CRA regulations, a number of commenters expressed concern that the regulators might provide a "safe harbor" from challenges to depository institutions' CRA performance records in the application process if the institutions achieved an outstanding CRA examination rating. However, in the preamble of the 1995 Final Rule on the CRA regulations, the regulators reconfirmed the importance of public comments in the applications process by acknowledging that materials relating to CRA performance received during the applications process can and do provide relevant and valuable information.

For each BHC application submitted, FRB publicly issues an Order containing its application decision and a discussion supporting its decision. FRB officials told us that Board Orders provide a detailed explanation of how the Board arrived at its decision and put the facts into the context of the specific case at hand.
In the FRB officials' view, Board Orders provide guidance on FRB's BHC application process. We reviewed the Board Orders approving the six BHC merger applications in our study. The Orders provided insight into issues considered by the Board of Governors. For example, the Orders discussed FRB's consideration of CRA performance ratings received by bank subsidiaries, recent trends in home mortgage lending by the BHCs, and CRA agreements reached by BHCs with community groups. FRB's treatment of the various CRA issues appeared to be consistent with that suggested in the 1989 Statement for assessing CRA performance.

The BHC Act requires FRB's approval for formation of a BHC, BHC acquisition of control of another BHC or a subsidiary bank or bank assets, or the merger of BHCs. There were nearly 6,000 BHCs operating as of year-end 1998; almost 700 BHC applications were submitted to the Federal Reserve for approval in 1998. Of these, over 400 were for mergers and acquisitions. Consistent with the Statement for assessing CRA performance, FRB regulations provide that FRB will take into account the record of performance under CRA of each insured bank and thrift controlled by a BHC applicant and each subsidiary bank proposed to be controlled by an applicant. FRB officials told us that if an institution was examined recently, FRB would be more likely to rely on the rating given by the bank's primary regulator. If the CRA exam is not recent or there have been significant public comments raising concerns, FRB would be more likely to undertake a review of the institution's CRA performance and obtain more information from the primary bank regulator. FRB considers the CRA performance of the BHC in the delineated areas of its bank subsidiaries.

Also consistent with the Statement for assessing CRA performance, FRB regulations require public notice of a BHC application and a specific public comment period. FRB does not have written guidelines that summarize how public comments raising CRA concerns are to be used along with other information in its BHC merger application decisions. An FRB Associate General Counsel told us that although the BHC Act does not require a public comment period, FRB voluntarily adopted the requirements of public notice and a specific comment period because FRB found the public process helpful. FRB's Rules of Procedure state that an applicant must file notice of the application in the classified advertising legal notices section of the local newspaper. The notice must state that the public has an opportunity to comment for at least 30 days after the date of publication. Under the revised Regulation Y, FRB will not accept late written comments except in extraordinary circumstances. FRB can extend, and has extended, the 30-day time frame. According to Regulation Y, the 30-day comment period is required for all BHC merger applications to acquire an insured depository institution, whether the applications are Board Action cases or delegated to the Reserve Banks for a decision. BHC officials we interviewed told us that FRB's adherence to the public comment period deadlines was better than it had been in previous BHC mergers.

Relatively few BHC mergers have been protested on CRA grounds. As shown in table 1, the number of BHC merger/acquisition cases that received CRA protests was small during the period of 1995-98. In 1998, 18 of the 424 BHC merger/acquisition cases were protested on CRA grounds.
The Statement for assessing CRA performance does not specifically address issues that arise in BHC merger application decisions, such as the consideration to be given to the activities of nonbank subsidiaries. Large BHCs comprise bank subsidiaries that are subject to CRA and operate in delineated areas; they may also include nonbank subsidiaries, such as mortgage lending companies, that are not subject to CRA and that operate both within and outside of the delineated areas of the bank subsidiaries. However, CRA regulations allow bank subsidiaries of BHCs to receive CRA credit for home mortgage loans originated by their affiliated nonbank subsidiaries in the delineated areas of the bank subsidiaries.

In reviewing the six BHC merger applications, it appeared to us that FRB attempted to balance the CRA performance ratings with information, obtained through the public comment process, that raised concerns with the institutions' CRA performance. All of the bank subsidiaries in our selected merger cases received a satisfactory or better CRA rating from their primary federal bank regulator. The four principal CRA concerns raised in public comments were (1) an insufficient amount of home mortgage lending in LMI areas, (2) an insufficient amount of small business lending in LMI areas, (3) expected bank branch closures in LMI areas, and (4) a lack of specificity in CRA agreements. FRB appeared to give more weight to CRA performance ratings and concerns with home mortgage and small business lending than to other concerns raised. FRB conducted analyses with HMDA and CRA small business data to address concerns of insufficient home mortgage and small business lending, respectively. FRB's consideration of branch closures was generally limited to a determination of whether the applicant had an adequate branch closure policy and its past branch closure record. According to FRB officials, CRA agreements did not play a role in FRB's assessment of the six merger cases. FRB does not have written guidance on how it considers the sufficiency of home mortgage and small business lending or what branch closure policy it would consider adequate. FRB's lack of written guidance on how it addresses public comments contributed to the concerns voiced by the community groups and BHC applicants we contacted regarding the lack of transparency in the merger application process.

All of the bank subsidiaries in the six merger cases received a CRA performance rating of satisfactory or better. Over half of the lead bank subsidiaries owned by the applicants and the target institutions received an outstanding CRA rating from their primary bank regulators. CRA performance ratings for the bank subsidiaries in our study are presented in appendix II. The CRA ratings of the bank subsidiaries in our study were similar to the CRA ratings of their peers. As table 2 shows, all large bank subsidiaries (assets of $10 billion or greater) examined by OCC and FRB received either outstanding or satisfactory ratings during the period of 1995-98.

CRA agreements are made between banks and community groups, and the level of specificity differentiates the various types of agreements. In some cases, community groups negotiate with the banks regarding specific CRA goals to be reached in the community. These are referred to as negotiated agreements. Another type of community agreement is a pledge. Generally, banks that make pledges consider input from community groups, but the bank unilaterally formulates the final pledge. The six mergers in our study were completed under the old process-oriented CRA regulations.
According to the Manager for Applications in FRB's Division of Consumer and Community Affairs (DCCA), DCCA will do additional analysis on the CRA records of the applicant and the target institution when comments regarding the institutions' CRA records are received. DCCA was generally dependent on CRA examination information from the other federal bank regulators for assessing the CRA performance of large BHCs. In the six merger cases, FRB did not have its own on-site CRA information on the bank subsidiaries. A federal regulator other than FRB supervised almost all of the bank subsidiaries of the BHCs in the six merger cases. Of the six merger cases, only the lead bank of the Chemical Banking Corporation was supervised by FRB. The lead banks of the other 11 BHCs were supervised by OCC. DCCA staff told us that FRB does not second-guess the CRA examinations conducted by the other federal bank regulators. The purpose of FRB's review of the CRA record is not to reexamine the banks for CRA compliance. We were told by DCCA analysts that after the initial screening of the CRA ratings, they reviewed the most recent public evaluation report of the lead bank of the applicant and the target institution. If the DCCA analyst determined it was warranted, he or she talked with the OCC CRA compliance examiner. In three of the five merger cases in which FRB was not the primary bank regulator of the lead bank of the applicant—Fleet-Shawmut, NationsBank-BankAmerica, and Bank One-NBD First Chicago—DCCA analysts contacted OCC for additional supervisory information. FRB officials told us that additional supervisory information was not obtained in the NationsBank-Boatmen's merger because the July 1995 CRA examination of NationsBank was relatively current. In two of the merger cases, NationsBank-BankAmerica and Bank One-NBD First Chicago, DCCA reviewed CRA information from OCC that was more than 2 years old. During the application review, OCC was examining the lead banks of both NationsBank and Bank One. In the absence of recent examinations of the lead banks, DCCA analysts obtained limited information from OCC's ongoing examinations for these two cases. According to an OCC official, the implementation of OCC's new performance-based CRA examination procedures for the 30 largest national banks caused delays in the frequency of examinations for these institutions.

FRB received public comments addressing a wide variety of issues, including CRA issues, for all six mergers. The number of comments ranged from a high of over 1,600 for NationsBank's acquisition of BankAmerica to a low of 17 for NBD's acquisition of First Chicago. The number of public comments that FRB received for the other four mergers ranged from about 50 to about 300. For each merger, the majority of the comments were in support of the merger. Among the comments in opposition to each of the six mergers, FRB received public comments criticizing the CRA performance of either the applicant or the target institution. In addition to considering written comments, FRB conducted public meetings for four of the six mergers: (1) Fleet's acquisition of Shawmut, (2) Chemical's acquisition of Chase, (3) NationsBank's acquisition of BankAmerica, and (4) Bank One's acquisition of NBD First Chicago. The four principal CRA concerns raised in the six mergers were (1) an insufficient amount of home mortgage lending in LMI areas, (2) an insufficient amount of small business lending in LMI areas, (3) expected bank branch closures in LMI areas, and (4) the lack of specificity in CRA agreements.
A summary of comments raising these concerns for each of the six BHC mergers is presented in appendix III. For the six mergers, commenters raised concerns that either the applicant's or the target institution's performance was generally inadequate in providing mortgage lending to minority groups and in LMI areas. In many cases, commenters included statistical results from HMDA analysis to help support their claims of insufficient home mortgage lending. FRB received comments alleging an insufficient level of small business and rural lending for two mergers, NationsBank's acquisition of BankAmerica and Bank One's acquisition of NBD First Chicago. Comments related to small business lending affected only the two later BHC mergers because banks were not required to collect small business data and submit the data to their primary bank regulators until 1996. In all six mergers, commenters were concerned with the number and location of banking branches that would be closed in LMI areas after the merger and the resulting impacts on LMI areas. Commenters generally referred to bank holding company branch closure practices in previous mergers to support their claim that the pending mergers would result in similar closings. For example, during the application process for Bank One's acquisition of NBD First Chicago, a community group cited Bank One's closure of branches after its acquisition of First USA and noted that the branches closed by Bank One were located in predominantly minority communities and LMI areas. Community groups wanted CRA agreements that centered on banks' establishing, or pledging, specific lending and investment activities that serve the banks' delineated areas, including LMI areas. The community groups we contacted told us that CRA agreements are beneficial in meeting the convenience and needs of LMI communities, such as obtaining affordable mortgage loans or small business loans. Of the six mergers we reviewed, FRB received comments on the issue of community agreements or pledges for three BHC mergers: Chemical's acquisition of Chase, NationsBank's acquisition of BankAmerica, and Bank One's acquisition of NBD First Chicago. Pledges issued by Chemical Bank and NationsBank were criticized for lacking specific lending goals. Chemical Bank issued a pledge for increased lending and community development funding of $18.1 billion primarily in New York, New Jersey, Connecticut, and Texas. The goals of the pledge included loans and investments to assist small businesses, affordable mortgages, and commercial and economic development. According to the summary of comments prepared by FRB, commenters criticized the pledge as inadequate because it was not enforceable, could not be monitored by community groups, was too vague to be meaningful, and did not identify the amount of lending that would be made within specific communities. Before its merger with BankAmerica, NationsBank made a 10-year pledge of $350 billion for community development lending and investment. The comments were similar to those made for Chemical Bank's pledge. NationsBank's pledge was also criticized for lacking geographic detail and enforceability. In 1998, before Bank One's acquisition of NBD First Chicago, a Chicago community group obtained a CRA commitment from NBD First Chicago and Bank One. The CRA commitment included, among other features, increased bank lending to small businesses in Chicago's LMI areas. Community groups criticized Bank One for not making commitments in other areas where Bank One operated.
FRB attempted to address three of the four CRA concerns that were raised in public comments. FRB appeared to give more weight to concerns with home mortgage and small business lending than to branch closure concerns raised. DCCA conducted analyses of HMDA data and, when they became available, CRA small business data to address concerns of insufficient home mortgage and small business lending, respectively. Generally, the statistical results from DCCA's analyses indicated that the lending activity in question was sufficient. In situations where statistical results from DCCA's HMDA analyses indicated that the lending activity in question may not have been sufficient, FRB generally emphasized CRA performance ratings and cited limitations in the use of HMDA statistics. FRB faced limitations in its legal authority to address branch closure concerns. FRB approved four of the mergers with conditions for the reporting of branch closures. According to FRB officials, CRA agreements did not play a role in FRB's assessment of the six merger cases. For each merger, DCCA prepared a memorandum to the Board of Governors containing findings and recommendations. The Board of Governors accepted DCCA's recommendations for each of the six mergers. To address the concern of insufficient home mortgage lending, FRB's DCCA generated a large number of statistical tabulations using HMDA individual loan file data containing the mortgage lending activity for each BHC across a large number of geographic areas. DCCA analysts reviewed HMDA data submitted by commenters but relied on their own HMDA analysis. According to DCCA analysts, many of the commenters did not include the home mortgage lending of the nonbank subsidiaries in their analysis. In its HMDA analysis, DCCA included home lending of nonbank mortgage subsidiaries in the delineated areas of the bank subsidiaries because such lending qualifies for CRA credit. Examples of FRB analysis with HMDA data in response to public comments are contained in appendix IV. For each merger application, DCCA produced statistical tabulations for geographic areas where home mortgage lending concerns were raised. For example, NationsBank's acquisition of BankAmerica generated a large number of comments raising concerns in a number of states, counties, and MSAs. For each geographic area (i.e., a state, county, or MSA) where concerns of insufficient mortgage lending in LMI areas were raised, the statistical tabulations were uniformly reported. The statistical tabulations generated and analyzed by DCCA generally did not cover subsets of LMI areas. To respond to comments on these areas, DCCA analysts told us that they relied on information supplied by the applicant or the Federal Reserve Bank analyzing the merger. DCCA analyzed the statistical tabulations and prepared a memorandum for each BHC application to the Board of Governors. The memorandums focused on the tabulations that the DCCA analysts thought would be most useful to the Board. Additional statistics were provided in an appendix to each memorandum. The Board voted to approve each merger in our study. Statistics in the memorandum for each BHC merger application generally emphasized the recent trends in mortgage applications from the LMI areas and minority group applicants referenced in the comment letters.
In most cases where public concerns of insufficient mortgage lending were raised, the statistical tabulations within the memorandum indicated that applications from LMI areas and from applicants in the referenced minority group increased in the most recent 2- to 3-year period before the merger application. In situations where statistical results from FRB's HMDA analyses appeared to indicate that the lending activity in question may not have been sufficient, FRB tended to emphasize CRA performance ratings and cited limitations in the use of HMDA statistics. Other measures of mortgage loan sufficiency were not contained in the memorandums. In particular, DCCA analysts calculated the share of a BHC's total mortgage originations in the relevant state, county, or MSA accounted for by originations from LMI census tracts and from applicants in a particular minority group, that is, the BHC's portfolio share. The portfolio shares for all institutions originating mortgages in the relevant state, county, or MSA were also generated by DCCA. This statistic can be considered a benchmark to which each BHC's portfolio share could be compared. Examples of these statistics are included in appendix IV, where we refer to all institutions in the six tables. Generally, the BHCs' portfolio shares were similar to or exceeded the corresponding portfolio share for all institutions. Generally, the statistical results from DCCA's analyses indicated that the lending activity in question was sufficient. Therefore, the Board generally found that the commenters' concerns were not supported by DCCA's HMDA analysis and the institution's CRA record. Most of the comments that raised concerns of insufficient mortgage lending were directed toward delineated areas of the BHCs' bank subsidiaries subject to CRA. However, comments were received that raised such concerns for nonbank mortgage lending subsidiaries outside of the delineated areas of the BHCs' bank subsidiaries. For example, one comment on Chemical's acquisition of Chase stated that Chase did not make substantial loans to applicants from LMI communities in 15 MSAs, many of which were not included in the delineated areas of Chase's bank subsidiaries. We identified one commenter who made this general comment for numerous BHC mergers. He told us that FRB has a responsibility to address such comments because the BHC Act, which governs the bank and nonbank subsidiaries of a BHC, calls upon FRB to assess the impacts of the BHC merger on convenience and needs. FRB responded to this general comment by stating that nonbank subsidiaries of BHCs are not subject to CRA and their lending is only relevant in the delineated areas of the bank subsidiaries. According to a DCCA analyst, the purpose of CRA is to encourage the bank to make loans where it is collecting deposits. In Bank One's 1998 acquisition of NBD First Chicago, a Wisconsin community group stated that the majority of Bank One's small business lending was targeted to larger businesses and that the bank's volume of small farm loans was low. In addition to requesting that Bank One respond to this criticism, the DCCA analyst performed her own analysis of Bank One's small business lending. In NationsBank's 1998 acquisition of BankAmerica, FRB assessed small business lending and small farm lending in seven states. In this case, the DCCA analyst performed analysis of NationsBank's small business lending. FRB did not find a basis for concern in either case.
NationsBank's portfolio share was less than the corresponding portfolio share for all institutions (see table 5 in app. IV). According to FRB officials, when an applicant has developed plans for branch closings, FRB considers the reason for the closures, the proximity of the receiving branch, and what actions the applicant plans to take to mitigate the impact on that community. The officials stated that they undertook such an analysis on the Chemical-Chase merger application. The law does not provide the regulators with the authority to prohibit banks from closing a branch. If the applicant has not developed final plans for branch closings, FRB's consideration of branch closures is limited to a determination of whether the applicant has an adequate branch closure policy, and any branch closings that do occur can only be assessed in future CRA examinations and BHC merger applications. Branch closures could affect a bank's subsequent CRA performance rating if the closures were associated with a decline in lending, investment, or services in the bank's delineated areas. The Board of Governors placed a branch closure reporting requirement on four of the BHC mergers as a condition for approval. According to FRB officials, when a branch closure reporting requirement is placed on an applicant, a message is sent to the applicant that the Board is interested in such plans and will be reviewing the closures associated with the application in the context of future applications. Because FRB cannot prohibit banks from closing branches, it is unclear what effect the conditional approvals would have on the number of branch closings in LMI areas. Depository institution regulators do not have the legal authority to prohibit banks from closing a branch. Insured banks and thrifts must post notice to the public at least 30 days before closing a branch and provide their regulators with at least a 90-day notice. Under performance criteria of the CRA examination's Service Test, the regulators are to review the bank's (1) distribution of branches among low-, moderate-, middle-, and upper-income areas and (2) record of closing and opening branches, particularly in LMI areas. However, if a bank can demonstrate to the examiner that retail banking services can be provided to LMI areas through alternative systems, such as automated teller machines, telephone banking, or mobile banking, the bank can receive credit under the Service Test without the brick and mortar of a branch. In four of the six merger cases, the Board of Governors placed a reporting requirement regarding branch closures as a condition for approval. The four mergers were Fleet's acquisition of Shawmut, NationsBank's acquisition of Boatmen's, NationsBank's acquisition of BankAmerica, and Bank One's acquisition of NBD First Chicago. For each of the four mergers, the Board of Governors required the applicant to provide the Federal Reserve System with periodic reports on the number of branch closings resulting from the merger and to show how it planned to minimize the impact of these closings on LMI areas. According to the DCCA Manager for Applications, applicants are not required to submit branch closure plans as part of the application. However, the Board of Governors generally orders branch closure reports from those applicants who have not submitted branch closure plans during the application process. Of the four applicants required to submit reports, only Fleet Financial Group had submitted a branch closure plan. Because FRB cannot prohibit banks from closing branches, it cannot directly affect branch closures in designated areas.
Branch closures in LMI areas, however, could potentially affect future CRA performance ratings. In addition, FRB officials told us that the applicant may apply for merger again in the future. For example, the DCCA analyst who reviewed the NationsBank-BankAmerica merger told us that NationsBank's branch closures subsequent to its acquisition of Boatmen's were considered in approving its merger with BankAmerica. According to FRB officials, CRA agreements did not play a role in FRB's assessment of the merger application. This view is supported by statements in the Board's Orders. Using the 1989 Statement as its basis, FRB considers CRA agreements as private agreements between the banks and the community groups. DCCA officials told us that CRA does not provide the regulators with the enforcement authority to assess a bank's compliance with CRA agreements. Pledges were not considered either. DCCA staff said they did not consider the pledges of Chemical Bank and NationsBank or the commitment negotiated by NBD First Chicago when developing their recommendations to the Board of Governors. BHC officials and community groups we interviewed had opposing views on whether FRB should consider the agreements during the application process. The BHC officials we interviewed supported the position that regulators should not consider CRA agreements as part of the institutions' CRA record. We were told by community development officials at two BHCs that the CRA agreements are significant in terms of external relations for the BHC with its communities. According to these officials, the primary purpose of the commitments and pledges was not to influence the regulatory process, since FRB does not consider the agreements of the applicant as a part of its analysis of the applicant's CRA record. In contrast, community groups we interviewed want FRB to consider the banks' compliance with those agreements as part of its assessment of the applicant's CRA record. FRB's lack of written guidance for how it addresses public comments contributed to the concerns voiced by some community groups and two BHCs regarding the lack of transparency in the merger application process. Several of the community groups who submitted comments told us that they did not understand the process by which FRB approved the six BHC mergers and how FRB considered their public comments raising concerns. The community group officials told us that FRB does not have written criteria for how it assesses merger applications, and FRB did not explain its process when community groups met with FRB officials. Some of the officials told us that while FRB conducted HMDA analysis, it did not criticize the applicant's lending performance on the basis of the analysis. Two BHC community development officials told us that they did not understand why they needed to provide the Federal Reserve with redundant information when they had established good CRA records. One BHC official questioned why, if a bank has already been examined for CRA compliance, the Federal Reserve should have to reexamine it. The banking official's perception of FRB's CRA review was different from that of FRB officials, who do not consider their review process to be a reexamination of the bank's CRA performance. The BHC officials told us that during the application process, FRB will ask for redundant information.
According to the officials, even if FRB has requested information from the applicant on a particular issue, it would request the same information again from the applicant if it subsequently received comment letters on the same issue. Commenters who raised concerns often expressed judgments that were critical of the BHC applicant, the BHC to be acquired, and the bank and nonbank subsidiaries of the BHCs. The merging BHCs have a business interest in completing the merger in a timely manner with minimal disruption to their future consolidation efforts. Therefore, the implications of FRB's actions are of major importance to the parties involved in this process. By analyzing three large BHC mergers using appropriate statistical measures and benchmarks for lending performance, we found that none of the three mergers was followed by a disproportionate decline in single-family home mortgage lending to minority and LMI census tracts. NBD Bancorp's acquisition of First Chicago was associated with a fairly stable market share of loans in LMI and minority census tracts in the Chicago MSA. Fleet Financial Group's acquisition of Shawmut National Bank was associated with a decline in Fleet's market share in minority and LMI census tracts that mirrored Fleet's decline in overall market share in the Boston MSA. Chemical Bank's acquisition of Chase Manhattan Bank was associated with increased market and portfolio share lending in minority and LMI census tracts in 1997, as compared to the combined lending by the two competing institutions in 1995. Using HMDA data for each institution, we constructed and analyzed its market share of loan originations and the distribution of originations (portfolio share) across specified geographic areas. Our statistical results using two measures, one for conventional loans and one for all loans, were generally consistent with one another. For each universe of home mortgage lending used, we calculated the market share of loan originations in LMI census tracts, minority census tracts, and all census tracts that made up the MSA. We also calculated the portfolio share of loan originations in LMI and minority census tracts by the combined BHC. The market share of loan originations is defined as the number of loan originations for a given institution divided by the number of loan originations by all lenders in the census tracts being analyzed. Portfolio share is defined as the number of loan originations for a given institution in the LMI and minority census tracts being analyzed divided by the number of loans originated by the institution in the MSA. The lending of both BHCs before the merger and the lending of the combined BHC after the merger were included in our market and portfolio share measures. In addition, the market share of loan originations by both BHCs in all census tracts in the MSA was used as a benchmark in assessing market share changes in LMI and minority census tracts. Our statistical results were also generally consistent using two different universes of home mortgage lending: (1) conventional, single-family home purchase loan originations and (2) all single-family mortgage loan originations. A conventional, single-family home purchase mortgage loan is defined as a single-family mortgage loan that is not insured or guaranteed by the federal government and that is for the purpose of financing the purchase of a home.
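Expressed formally (this is our notation; the report's tables present the results as percentages), for an institution i and a set of census tracts T within the MSA, the two measures are

    \text{market share}_{i,T} = \frac{O_{i,T}}{\sum_{j} O_{j,T}}, \qquad
    \text{portfolio share}_{i,T} = \frac{O_{i,T}}{O_{i,\mathrm{MSA}}}

where O_{i,T} is the number of loans institution i originated in the tracts T (the LMI tracts, the minority tracts, or all tracts in the MSA), the market share denominator sums originations over all lenders j in those tracts, and O_{i,MSA} is institution i's total originations in the MSA.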
For each merger, we analyzed market and portfolio shares from 1 year before the acquisition through the 2nd year after the acquisition was completed. For the three case studies used here, the market corresponded to the MSA where the acquired BHC's lead banking subsidiary was located. The resulting MSAs were Chicago, Boston, and New York for NBD's acquisition of First Chicago, Fleet's acquisition of Shawmut, and Chemical's acquisition of Chase, respectively. As shown in table 3, NBD's 1995 acquisition of First Chicago is associated with fairly stable market share in the Chicago MSA and a slight increase in market share in both LMI and minority census tracts. For conventional, single-family home purchase loans, the market shares for both LMI and minority census tracts increased from 1994 to 1997. Portfolio shares rose slightly for conventional home purchase loan originations in LMI and minority communities. The overall increase in conventional, single-family home purchase loan originations for the Chicago MSA was modest in magnitude during the period of 1994-97. Fleet's 1995 acquisition of Shawmut National Bank is associated with a reduction in mortgage lending to LMI and minority census tracts that mirrored Fleet's overall reduction in mortgage lending for the Boston MSA. According to Fleet Financial Group officials, over the period of 1994-97, the Boston MSA experienced a significant influx of mortgage lenders that resulted in competitive pressures and a subsequent reduction in residential mortgage lending among existing lenders in that market. As shown in table 4, market share statistics for conventional, single-family home purchase loan originations indicated large declines for both LMI and minority census tracts as well as overall in the Boston MSA. Portfolio shares declined only slightly. As shown in table 5, Chemical Bank's 1996 acquisition of Chase Manhattan Bank is associated with increased market and portfolio shares of loan originations in LMI census tracts. Market and portfolio shares in minority census tracts were fairly stable. In acting on a BHC merger application, FRB must consider the convenience and needs of the community to be served under the BHC Act and take into account the CRA records of the relevant banks. FRB's Regulation Y, promulgated under the BHC Act, requires public notice of a BHC application and a specific public comment period. FRB said it voluntarily adopted the requirements of public notice and a specific comment period because it found the public process helpful. In the six BHC merger applications that we reviewed, it appeared to us that FRB attempted to balance the CRA performance ratings of the bank subsidiaries of the merging BHCs with information presented through public comments that raised concerns with the institutions' CRA records. Both the BHC applicants and the community groups raising CRA concerns lacked relevant information on how FRB analyzes an institution's CRA record. The implications of FRB's actions are of major importance to the parties involved in the BHC merger application process. The merging BHCs have a business interest in completing the merger in a timely manner with minimal disruption of their future consolidation efforts. The community groups that submit comments on large BHC mergers have an interest in ensuring that specific community credit issues are being addressed.
A more transparent process is needed regarding how FRB balances the CRA ratings of banks, particularly those with good CRA ratings, such as the banks in our study, with public comments raising CRA concerns. A more transparent process could be useful for both BHC applicants and public commenters. Enhanced transparency could improve the BHC applicants’ understanding of what information is expected of them, what role public comments play in FRB’s CRA review, and what information FRB focuses on in response to different CRA concerns. In addition, a more transparent process may contribute to more focused public comments from community organizations and provide commenters with knowledge of how FRB analyzes an institution’s CRA record, such as its home mortgage lending performance. To enhance the transparency and improve the efficiency with which CRA concerns are addressed in the BHC merger application process, we recommend that FRB develop written guidelines that summarize how public comments raising CRA concerns are used with CRA examination information in FRB’s merger application decisions for large BHCs. For example, such guidelines could summarize important conclusions from previous Board of Governors application decisions. Such guidelines could also include when and how concerns raised in public comments will be considered, the types of analyses FRB is likely to conduct and rely upon in reaching its conclusions, and the situations in which HMDA statistics are limited. We received written comments on a draft of this report from FRB that are reprinted in appendix VI. FRB generally agreed with our recommendation that it develop written guidelines to enhance the transparency of the process. The letter stated that FRB will consider how best to convey useful information focusing on the CRA aspects of the application process and discussed information that could be included in an FRB guide to the process. In addition, FRB provided technical comments, which we have incorporated where appropriate. We received written comments on a draft of this report from OCC that are reprinted in appendix VII. OCC also provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to Senator Phil Gramm and Senator Paul Sarbanes and to Representative Barney Frank, Representative John LaFalce, Representative Rick Lazio, Representative Jim Leach, Representative Marge Roukema, and Representative Bruce Vento in their capacities as Chair or Ranking Minority Member of Senate and House Committees and Subcommittees. We are also sending copies of this report to the Honorable Alan Greenspan, Chairman of the Board of Governors of the Federal Reserve System; the Honorable John Hawke, Comptroller of the Currency; and others upon request. Please call me or Bill Shear, Assistant Director, at (202) 512-8678 if you or your staffs have any questions concerning this report. Key contributors to this report are acknowledged in appendix VIII. To provide a more detailed description of our scope and methodology, this appendix supplements our discussion contained in the letter of this report. Our legal analysis included a review of the Bank Holding Company Act of 1956 (BHC Act) and the Community Reinvestment Act of 1977 (CRA). Included in this review was our analysis of statutory amendments to the BHC Act and court decisions addressing the convenience and needs factor in the BHC Act. 
To identify the principal CRA comments submitted to the Federal Reserve Board (FRB) on each of the six mergers, we reviewed summaries of comments prepared by FRB's Legal Division and the Division of Consumer and Community Affairs (DCCA). The Legal Division wrote summaries for four merger applications—Fleet Financial Group's acquisition of Shawmut National Corporation, Chemical Banking Corporation's acquisition of Chase Manhattan Corporation, NationsBank Corporation's acquisition of BankAmerica Corporation, and Bank One Corporation's acquisition of NBD First Chicago Corporation. To verify the completeness of the Legal Division's and DCCA's summaries, we developed a data collection instrument, took a sample of comment letters from Chemical's acquisition of Chase Manhattan and from NationsBank's acquisition of BankAmerica, and compared our data with the written summaries. From our sampling of these comment letters, we determined that the Legal Division's and DCCA's summaries of public comments were accurate. We focused our attention on public comments addressing CRA performance measures. We did not analyze comments raising employment, safety and soundness, or competitive issues. We also did not analyze comments raising personal complaints (e.g., "I did not receive a loan") or managerial issues if they were not directly tied to CRA performance. We did not assess the validity of the public comments or verify the accuracy of data submitted with the comments. We also did not verify the accuracy of the data FRB relied upon in its response to public concerns. To identify how FRB addressed the principal CRA comments for the six mergers in our case study, we reviewed DCCA's internal memorandums and supporting documentation submitted to the Board of Governors and the Board of Governors' Orders approving the mergers. We also interviewed officials from DCCA and the Legal Division and officials from the Federal Reserve Bank of New York. Specifically for DCCA, we interviewed the Manager of Applications in DCCA and each analyst who was responsible for assessing the CRA performance of the six mergers. We interviewed officials from BankAmerica Corporation, Bank One, Chase Manhattan Corporation, and Fleet Financial Group. We also interviewed a number of community groups that submitted comments or testified in public meetings on the bank holding company (BHC) merger applications included in our case studies. To identify how FRB used Home Mortgage Disclosure Act of 1975 (HMDA) analysis to address public concerns, we specified the relevant geographic areas at the state, county, or metropolitan statistical area level of aggregation. We obtained selected reproductions of FRB analyses conducted in response to the principal public concerns raised. FRB officials told us that for some of the older mergers, they had not retained computer-generated output or documentation of the computer programs used to produce the output at the time of the merger application. FRB officials told us that it would be difficult and costly to reconstruct and reproduce the delineated areas for the bank subsidiaries of each BHC at the time of the merger application. FRB officials told us that the statistical tabulations they supplied to us would likely correspond closely to the statistical results obtained when the merger application was being processed at FRB. We did not evaluate FRB's analysis of CRA small business loan file data conducted to address public concerns of insufficient small business lending.
To determine the premerger and postmerger mortgage lending in low- and moderate-income (LMI) and minority communities for three mergers, we used HMDA data. FRB provided us with "value-added" HMDA data for the years 1994-98; in these data, the individual HMDA loan files were merged with census tract characteristics from the 1990 Census of Population and Housing. We undertook steps to verify, in part, the accuracy of HMDA data used in our premerger and postmerger HMDA analysis for the three BHC mergers that we reviewed. We reviewed information on the process used by the Federal Financial Institutions Examination Council's (FFIEC) member agencies for the identification and resolution of errors in the HMDA information submitted by lenders. In November 1994, FRB amended a regulation to require lenders to update the HMDA information on their loan activity on a quarterly basis and to require most lenders to submit their data to the supervisory agencies in a machine-readable form. We discussed HMDA data with FRB and BHC officials. We found that HMDA data on home improvement loans were not consistently reported by all HMDA reporters because they have the option to report equity lines of credit as home improvement loans. We also obtained a list of the bank and nonbank subsidiaries of the three BHCs who were HMDA filers in the metropolitan statistical areas we were analyzing. We obtained the list of HMDA reporters from DCCA as well as from Bank One, Chase Manhattan, and Fleet. In cases where discrepancies were present, we conducted statistical analyses and followed up with inquiries to DCCA and the BHCs to reach resolution. In our mortgage lending analysis, we defined a census tract as LMI if median family income for the census tract was less than 80 percent of median family income for the metropolitan statistical area. Consistent with definitions used in an analysis of trends in home purchase lending recently conducted by FRB, we classified a census tract as a minority tract if 20 percent or more of the residents were members of minority groups. This definition of a minority tract therefore includes census tracts that can be characterized as integrated as well as census tracts that have a greater number of minority residents. FRB provided us with 1998 HMDA data that allowed us to calculate portfolio share, but not market share, measures of lending performance for Chase Manhattan in the 2nd year after the acquisition was completed. During the time frame of our work, 1998 HMDA data required to calculate market shares were not available. HMDA data alone cannot reflect changes in market conditions that help determine market outcomes. For example, mortgage interest rates change over time, thus affecting the number of households among different income groups that purchase a home or refinance existing mortgages. We calculated portfolio and market shares for both the universe of single-family mortgage originations and conventional home purchase mortgage originations to see if the various statistical results were consistent with one another. We also calculated the BHC's market share in all census tracts to create a benchmark that can be compared to changes in the BHC's market share in LMI and minority census tracts. We tested the HMDA data we obtained from FRB for missing variable values.
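The tract classifications and the missing-value test described above can be expressed as a minimal sketch; the field names below are hypothetical stand-ins, not the actual layout of FRB's value-added HMDA records.

    # Hypothetical field names; FRB's value-added HMDA records use their own layout.
    REQUIRED_FIELDS = ["hmda_reporter", "msa", "census_tract",
                       "tract_family_income", "tract_minority_pct"]

    def classify_tract(tract_median_income, msa_median_income, pct_minority):
        """Apply the definitions used in our analysis: LMI if tract median family
        income is below 80 percent of the MSA median; minority if 20 percent or
        more of tract residents are members of minority groups."""
        is_lmi = tract_median_income < 0.80 * msa_median_income
        is_minority = pct_minority >= 20.0
        return is_lmi, is_minority

    def count_missing(records):
        """Count records (dictionaries) with any required field missing."""
        return sum(1 for r in records
                   if any(r.get(f) is None for f in REQUIRED_FIELDS))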
We found that the variables on which we relied, such as HMDA reporter, metropolitan statistical area, census tract number, census tract family income, and census tract minority population, were not missing for the years 1994 through 1998. Tables II.1 through II.12 list the premerger CRA performance ratings for all bank subsidiaries owned by the applicant and the target institutions of the six BHC merger cases that we reviewed. NBD Corporation acquired First Chicago Corporation in 1995. Fleet Financial Group acquired Shawmut in 1995. Chemical Banking Corporation acquired Chase Manhattan in 1996. NationsBank Corporation acquired Boatmen's Bancshares in 1996. NationsBank Corporation acquired BankAmerica in 1998. Bank One Corporation acquired NBD First Chicago in 1998. FRB received both supportive and opposing comments for all six mergers. This appendix provides a discussion of the CRA concerns raised in each merger. Our discussion includes the financial institutions, CRA comments, and geographic areas raised in the concerns. Commenters raised concerns that NBD had inadequate lending in LMI areas in the Detroit MSA, where the lead bank subsidiary was located. Similar concerns were also raised regarding the inadequacy of First Chicago's lending in the Lake County area of Chicago. Commenters also alleged that NBD redlined many LMI Detroit communities, as evidenced by the lead bank subsidiary's lack of branch presence and minimal marketing of credit products in these areas. Commenters expressed concerns about inadequate lending by Fleet or its subsidiaries in minority census tracts in the 13 MSAs in New York State. Commenters alleged that the level of mortgage applications that Fleet received in each MSA was not consistent with the demographics of each MSA, and that the application denial rates evidenced disparate lending to minorities and those in LMI census tracts. Concerns were also raised regarding potential branch closures that would result in decreased banking services to LMI neighborhoods. Commenters raised CRA concerns for both Chemical and Chase Manhattan. Concerns were expressed about Chemical's and Chase's lending in all states where the banks had a banking presence. Commenters also expressed concern that Chase had inadequate mortgage lending in LMI communities in a broad cross-section of cities, including Chicago, Los Angeles, Atlanta, Detroit, and Dallas. Commenters expressed concern that Chemical lacked home mortgage lending in LMI census tracts in New York, New Jersey, Delaware, Florida, and Texas. Concerns were also raised regarding branch closures. In particular, a number of commenters expressed concern with the impact of Chemical's announced branch closures in LMI areas of New York City. Commenters expressed concern that NationsBank had inadequate mortgage and business lending to minorities in Travis County, TX, and raised the possibility of branch closings there. As in the previous NationsBank merger, numerous commenters criticized the lending records of one or both banks in a number of geographical areas. Commenters were concerned that one or both of the banks did not adequately lend to LMI individuals and areas. Concerns were also raised regarding NationsBank's small business and rural lending, and branch closings. One commenter asserted that the acquisition of BankAmerica would result in branch closings and reductions in banking services to LMI communities.
Commenters criticized both BHCs' home mortgage lending and small business lending in serving the needs of minority borrowers and LMI and rural areas. Some commenters' concerns were related to Bank One's April 1998 decision to modify its mortgage lending strategy, which they interpreted as the bank's plan to exit the mortgage lending business. Commenters feared that such a strategy would have the impact of reduced access to mortgage credit for certain individuals. Commenters also expressed concern about branch closings, including the concern that branch closings would reduce the availability of banking services to individuals in LMI and minority neighborhoods. In addition, commenters expressed concern about Bank One's refusal to enter into community reinvestment agreements similar to the agreements entered into by First Chicago in Detroit and Chicago. For each merger application, DCCA produced statistical tabulations on each geographic area where home mortgage lending concerns were raised. Tables IV.1 through IV.6 present examples of FRB analysis conducted with HMDA data in response to public concerns raised in each of the six bank holding company mergers included in our case study. The examples, which represent a small subset of DCCA's tabulations, are presented in six tables representing (1) NBD's acquisition of First Chicago in 1995, (2) Fleet's acquisition of Shawmut in 1995, (3) Chemical's acquisition of Chase in 1996, (4) NationsBank's acquisition of Boatmen's in 1997, (5) NationsBank's acquisition of BankAmerica in 1998, and (6) Bank One's acquisition of NBD First Chicago in 1998. Each table includes statistics generated by FRB analysts for census tracts classified as LMI in response to public comments stating that such lending was insufficient. Each table also includes statistics generated for minority applicants in the MSA or region. We reported FRB's analysis for the minority group accounting for the highest percentage of mortgage originations in the geographic area. When analyses were performed for a number of geographic areas covering one or more parties to the merger, we reported FRB's analysis for the area and merger partner we considered to be most helpful for illustrating FRB's process. For example, in NBD's acquisition of First Chicago, we reported FRB analysis for NBD in the Detroit MSA. FRB also conducted an analysis for First Chicago in the Chicago MSA and in Lake County, IL, in response to public comments on First Chicago's lending in those geographic areas. This appendix contains our statistical results using a broader universe of all single-family home mortgage lending. We define a single-family mortgage loan as a home purchase, refinancing, or home improvement loan used to finance a one- to four-unit residential structure. Statistical results using our narrower universe of conventional, single-family home purchase loan originations are contained in the body of the letter. Our broader universe of home mortgage lending includes home improvement loans, which are not consistently reported by all HMDA reporters. HMDA reporters have the option to report equity lines of credit as home improvement loans. We also include refinancing loans, which are more sensitive to interest rate changes than home purchase loans. Our broader universe also includes federally insured loans. NBD First Chicago Bank's market and portfolio share measures for all single-family loan originations are presented in table V.1.
The market share percentages for NBD First Chicago were fairly stable from 1994 to 1997 for both LMI and minority areas. As stated in the letter of this report, Fleet's 1995 acquisition of Shawmut National Bank is associated with a reduction in conventional, single-family home purchase mortgage lending to LMI and minority census tracts that mirrored Fleet's overall reduction in mortgage lending for the Boston metropolitan statistical area. According to Fleet Financial Group officials, over the period of 1994 to 1997, the Boston MSA experienced a significant influx of mortgage lenders that resulted in competitive pressures and a subsequent reduction in residential mortgage lending among existing lenders in that market. A generally consistent pattern is found in table V.2 for all single-family loan originations by Fleet in the Boston metropolitan statistical area. Market share declines in LMI and minority census tracts generally mirrored declines for all census tracts. The market share declines in LMI and minority census tracts were accompanied by declines in respective portfolio share measures. Chase Manhattan Bank's market and portfolio share statistics for all single-family loan originations in the New York City metropolitan statistical area are presented in table V.3. The statistics are comparable to those for conventional home purchase loans discussed in the letter of this report. The market and portfolio shares of lending increased in LMI census tracts. Market share in LMI census tracts increased between 1995 and 1997 from 6.8 percent to 9.2 percent. On balance, the statistics indicated that the consolidated BHC did not reduce access to credit in LMI and minority census tracts after FRB approved its BHC application in 1996. In addition to those named above, Joan M. Conway, Rachel M. DeMarcus, Nancy Eibeck, Christopher C. Henderson, Sindy Udell, and Tonita G. Woodson made key contributions to this report. Pursuant to a congressional request, GAO reviewed large bank holding company mergers and the impact of such mergers on low- and moderate-income (LMI) areas, focusing on: (1) the Federal Reserve Board's (FRB) legal responsibilities in assessing Bank Holding Company Act of 1956 (BHC) mergers for Community Reinvestment Act of 1977 (CRA) performance; (2) FRB's process for assessing the CRA performance of six large BHC merger applicants, including how FRB addressed the principal public concerns related to the CRA performance; and (3) the premerger and postmerger mortgage lending in LMI and minority communities for three large BHC mergers.
GAO noted that: (1) in acting on a BHC merger application, FRB must consider the convenience and needs of the community to be served under the BHC Act and take into account the record of the relevant depository institutions under CRA; (2) neither the BHC Act nor CRA, or their legislative histories, provide guidance on how FRB is to take these factors into account when considering a BHC merger application; (3) the depository institutions' primary federal regulators have developed guidance for their assessments of a depository institution's CRA performance; (4) however, FRB has not developed guidance on how it evaluates the CRA records of the merging BHCs; (5) for the six BHC merger applications that GAO reviewed, FRB attempted to balance the regulators' ratings of the depository institutions' CRA performance and information presented through public comments that raised concerns with the institutions' CRA records; (6) all of the bank subsidiaries included in the six mergers had satisfactory or outstanding performance ratings in their most recent CRA examinations; (7) the principal CRA concerns raised by commenters included insufficient home mortgage lending, insufficient small business lending, and branch closures in LMI areas; (8) FRB analyzed Home Mortgage Disclosure Act of 1975 (HMDA) and small business data to address concerns of insufficient home mortgage and small business lending, respectively; (9) FRB's consideration of branch closures was generally limited to a determination of whether the applicant had an adequate branch closure policy and its past branch closure record; (10) FRB approved all six mergers, but four of the mergers were approved with conditions for the reporting of subsequent branch closures; (11) FRB's lack of written guidance on how it addresses public comments raising CRA concerns contributed to the concerns voiced by community groups and the BHC applicants regarding the lack of transparency in the merger application process; (12) on the basis of GAO's analysis of home mortgage lending, BHC merger activity had not been associated with adverse changes in single-family home mortgage lending in minority and LMI areas in the major metropolitan areas served by the acquired BHCs for the three BHC mergers GAO analyzed; and (13) NBD Corporation's acquisition of First Chicago and Chemical Banking Corporation's acquisition of Chase Manhattan Bank have been associated with stable to increased lending in the relevant areas. |
The Military Sealift Command (MSC) provides ships for fleet support; special missions; and strategic sealift of equipment, supplies, and ammunition to sustain U.S. forces worldwide. While MSC uses a combination of government-owned and privately owned ships to carry out this mission, all these ships have civilian crews who work either directly for MSC or for MSC's contract operators. This report deals with contractor-operated ships, which account for 69 of the 200 ships in MSC's fleet (see table 1.1). Our review specifically focused on 40 ships in the five programs in which MSC awarded long-term charter contracts for three or more ships. These programs are maritime prepositioning ships, T-5 tankers, oceanographic survey ships, T-AGOS surveillance ships, and fast sealift ships (see fig. 1.1). MSC spends over $400 million per year to operate and maintain these 40 ships. This figure includes payments for leasing the 18 privately owned ships in the group. Maritime prepositioning ships rapidly deliver urgently needed Marine Corps equipment and supplies to a theater of operations during a war or contingency. These 13 privately owned ships are divided into three squadrons located in the Atlantic, Pacific, and Indian Oceans and carry everything from tanks and ammunition to food, water, and fuel. Each squadron can support a U.S. Marine Corps Expeditionary Brigade of 17,300 troops for 30 days. The maritime prepositioning ships were among the first ships to arrive in Saudi Arabia during Operation Desert Shield and in Somalia during Operation Restore Hope. The primary mission of the five privately owned T-5 tankers is point-to-point delivery of refined petroleum products to Department of Defense (DOD) users throughout the world. In addition, two of the tankers are equipped with modular fuel delivery systems, which allow them to refuel combatant ships at sea. At 30,000 tons displacement, the T-5 tankers are 3,000 tons larger than the contractor-operated sealift tankers that we reported on last year. In addition, the T-5s have ice-strengthened hulls and are approximately 10 years newer than the sealift tankers. During Operations Desert Shield and Desert Storm, MSC tankers provided fuel to naval fleet units operating in the Red Sea, the Persian Gulf, and the Gulf of Oman. The mission of the eight government-owned fast sealift ships is to provide rapid surge capability to U.S. armed forces throughout the world. They are the fastest roll-on/roll-off cargo ships in the world and are designed to carry bulky Army equipment such as tanks and helicopters. Combined, the eight ships can carry almost a full Army mechanized division. The fast sealift ships are normally maintained in a reduced operating status, with skeleton crews who perform preventive and corrective maintenance and basic operational checks. All eight ships are assigned to Fast Sealift Squadron One, in New Orleans, Louisiana, and they can be activated and underway from ports on the U.S. East and Gulf Coasts in 96 hours. Each of the fast sealift ships made up to seven trips to Saudi Arabia during Operations Desert Shield and Desert Storm. They were also involved in Operation Restore Hope. The mission of 7 of the 10 government-owned T-AGOS ships is to locate and track submarines. The remaining three have been converted to perform counterdrug missions. These ships are homeported in Little Creek, Virginia, and Pearl Harbor, Hawaii, and are monitored by MSC field organizations located at these homeports.
The T-AGOS ships operate towed array sensor systems to gather submarine acoustical data, especially to locate new and quieter submarines. The mission of the four government-owned oceanographic ships is to support worldwide oceanographic survey programs with acoustical, biological, physical, and geophysical research. Their precision sonar systems permit continuous charting of a broad strip of ocean floor. The research conducted by these ships helps to improve the Navy’s undersea warfare and enemy ship detection capabilities. MSC’s contract operators are tasked with providing personnel, equipment, tools, and supplies to maintain MSC’s ships. They use three different levels of maintenance and repair to keep MSC’s ships operational. The first level of maintenance and repair is performed by the ship’s crew. It includes preventive maintenance and minor mechanical and electrical repairs. This work may be done during regular or overtime hours, and it may or may not be reimbursable under the terms of the applicable contract. The second level of maintenance and repair is industrial assistance, which is done by subcontractors. This work is beyond the capability of the ship’s crew but does not require an overhaul. The subcontractors may actually maintain or repair the ship’s equipment, or a technical representative may provide expertise to the ship’s crew. Industrial assistance is usually reimbursable, either directly or through a budgeted system of payments. Overhauls are the third level of maintenance and repair. They can be scheduled, as required by Coast Guard regulations, or unscheduled, for example, to repair a damaged propeller. Since none of the MSC contract operators we reviewed function under firm fixed-price contracts, overhauls are directly reimbursable. The Ranking Minority Member of the Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs, asked us to examine the Military Sealift Command’s contractor-operated ship programs. Specifically, we determined whether MSC has adequate management controls (1) to oversee contractors and prevent abuses and (2) to ensure contractual requirements are being met. To determine whether MSC has adequate oversight of the maintenance and repair work done on its contractor-operated ships, we reviewed MSC’s engineering and maintenance and repair instructions, files, and manuals, including the Engineering Operations and Maintenance Manual. We also reviewed maintenance and repair invoices, visited a sample of ships, and interviewed responsible MSC personnel. We used the ships’ operational schedules to visit ships that were about to complete an overhaul. For four of the five programs we were able to visit a ship that was in for overhaul, but this was not possible for the T-5 tankers. Therefore, we visited a tanker that was in its full operational status. (App. I lists the ships that we visited.) During our ship visits, we interviewed crew members, contractor and shipyard officials, MSC field personnel, and Coast Guard and American Bureau of Shipping inspectors. We visited several fast sealift ships because they were all located at the same port. To determine MSC’s effectiveness in establishing and administering contract requirements, we reviewed the contracts for each of the ship programs and compared and contrasted the requirements contained in those contracts. 
We then discussed the contract differences with cognizant MSC officials to determine why the differences existed and what standardized procedures, if any, these officials used to establish and administer program requirements. We also reviewed numerous MSC instructions dealing with funding, billing, and invoice certification. We reviewed the Department of Defense's National Industrial Security Program Operating Manual and MSC's security and crew qualification files to verify the suitability of the crew members on MSC's contractor-operated ships. To determine the effectiveness of MSC's current organizational structure, we met with various MSC officials and discussed their responsibilities with regard to MSC's contractor-operated ship programs. We also reviewed MSC's Standard Operating Manual, the draft proposal "Reinventing MSC," and the MSC Commander's June 1, 1995, update to the reinvention proposal. We then discussed the reorganization initiative with MSC's current program managers. We did not address this area in depth because MSC's reinvention management team and its working groups had not developed the program management organization's structure by the time we completed our audit work. We conducted our work between July 1994 and August 1995 in accordance with generally accepted government auditing standards. An ongoing joint investigation by the Federal Bureau of Investigation and the Naval Criminal Investigative Service has led to guilty pleas by four former employees of MSO, Inc., an MSC contractor that operated 10 oceanographic vessels. The investigation revealed that these employees had fraudulently altered overtime records of other MSO employees (crew members), changing nonreimbursable overtime charges to reimbursable ones. It is estimated that these fraudulent overcharges amounted to millions of dollars during a 3-year period. This case shows that oversight and basic internal controls are fundamental for any entity to ensure that payments are made accurately and correspond to goods and services actually received. During our review of MSC's contractor-operated ship programs, we found that those who approve and pay bills do not verify that MSC has received the goods or services it is paying for. Part of the reason for this practice is a disconnect between headquarters-level invoice reviewers and field-level personnel, whose main concern is keeping the ships operating, not the cost of their repair. In fiscal year 1994 alone, MSC spent $93.8 million to maintain and repair the ships in the five contractor-operated programs we reviewed. Given the large amounts of money spent on maintenance and repairs, it is imperative that MSC have effective controls over these expenditures. MSC lacks controls in three general areas: verification of crew-performed repairs, review of invoices for subcontracts, and oversight of repair work performed during overhauls. Though MSC's Comptroller is responsible for coordinating MSC's internal control program, he does not have the authority to ensure that MSC has a sufficient system of internal controls and that the system is adhered to. For three of MSC's contractor-operated ship programs, MSC has included in its contracts predetermined dollar amounts for crew-performed minor repairs that are to be done as part of the contracts' fixed price. According to the contracts, these predetermined amounts, or "minor repair thresholds," can be met in three ways.
Contractors can apply toward the thresholds (1) overtime and straight time performed by extra crew (beyond those normally required), (2) overtime by the regular crew performing minor repairs, and (3) industrial assistance (work done by subcontractors, not by the ships' crews). Contractors are to report how they meet their thresholds in minor repair reports. After contractors meet these minor repair thresholds, they can be reimbursed by MSC for all minor repairs. According to the contracts, the cleaning of the ship and preventive maintenance are part of the fixed price. They are not to be included in the contractors' minor repair reports.

In our review of minor repair reports, we found that, because of either inadequate supporting documentation, inadequate review, or both, contractors were meeting their thresholds in ways that are not allowed by the contracts or listing the same jobs more than once. Contractors for these three programs were essentially overstating their minor repair reports in the following ways: The contractor for one ship program was including in its minor repair reports the straight time hours of its regular crew. The contractor for a second program was including cleaning jobs in its minor repair reports. The contractor for a third program was listing the same jobs twice in its minor repair reports. For all three programs, the contractors were not submitting supporting documentation that matched their minor repair reports.

According to an MSC instruction, proper knowledge of receipt or disposition of goods/services during the invoice certification process will reduce the chances of fraudulent claims being paid. However, MSC reviews minor repair reports and invoices for over-threshold repairs without adequate supporting documentation to show that work was done. Contractors for two of the three programs had been paid by MSC for over-threshold repairs. As of October 10, 1995, one of the contractors had received $685,946 from MSC for over-threshold repairs for fiscal years 1991 through 1995. MSC paid a second contractor $741,360 for over-threshold repairs for fiscal year 1994 alone. At the end of our review, MSC had not yet calculated whether the contractor for the third program had met or exceeded its minor repair thresholds. MSC had no plans to recover amounts for jobs that should not have been included as minor repairs.

The contract operator for the first of the three programs we discussed above included in its minor repair reports the straight time hours of its regular crew, but at the end of the 5-year contract period, MSC was not aware of this practice. During the 5-year contract period, MSC never requested or reviewed the complete supporting documentation for the contractor's minor repair reports, documentation that would have uncovered this practice. For the life of the contract, the contractor reported nearly $6 million in crew-performed repairs in its minor repair reports. Of this amount, MSC reimbursed the contractor $685,946 for over-threshold minor repairs. MSC's contract allows its contractor to apply toward the minor repair threshold repairs done by the ship's regular crew on overtime, but not repairs done during straight-time work hours. Because MSC does not require the contractor to submit supporting documentation, however, it has no proof that the contractor has not manipulated the reporting of overtime.
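The verification MSC was not performing is simple to describe. The sketch below is a rough illustration of such a check, not a description of any actual MSC system; it assumes the minor repair reports and crew timesheets have been transcribed into records with hypothetical fields (crew_member, date, pay_type, description), and it assumes one timesheet per crew member per day.

    def audit_minor_repair_report(report_jobs, overtime_sheets):
        """Flag claimed jobs with no supporting timesheet, and jobs performed
        by the regular crew on straight time, which may not count toward the
        minor repair threshold."""
        # Index the timesheets by (crew member, date) for quick lookup;
        # assumes one sheet per crew member per day.
        sheets = {(s["crew_member"], s["date"]): s for s in overtime_sheets}
        unsupported, straight_time = [], []
        for job in report_jobs:
            sheet = sheets.get((job["crew_member"], job["date"]))
            if sheet is None:
                unsupported.append(job)       # no timesheet backs the claim
            elif sheet["pay_type"] != "overtime":
                straight_time.append(job)     # claimed on straight time
        return unsupported, straight_time

    # Example: a claimed repair backed only by a straight-time timesheet is flagged.
    jobs = [{"crew_member": "A. Smith", "date": "1995-03-04",
             "description": "repack pump gland"}]
    sheets = [{"crew_member": "A. Smith", "date": "1995-03-04",
               "pay_type": "straight"}]
    print(audit_minor_repair_report(jobs, sheets))

Even a manual version of this check, tracing each claimed job back to a timesheet, is the kind of review that surfaced the problems described below.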
Field staff for this program told us that, at a recent meeting with MSC headquarters personnel, they had recommended that the contractor be required to submit crew overtime sheets as supporting documentation for its minor repair reports. However, MSC headquarters personnel have taken no action in response to this recommendation. For this program, we requested supporting documentation from the contractor for one ship's minor repair report, which totaled $25,859 and covered about 5 months. We reviewed this documentation to verify that the crew had actually listed this overtime work on their timesheets. We found that for this minor repair report, $8,406 of repairs had been performed by the ship's regular crew during straight-time hours. Another $860 was unsupported by crew overtime sheets. When we disclosed our findings to program officials, they stated that they were unaware that the contractor was not complying with the contract and said that they would investigate the matter further.

It is particularly important that MSC fully review supporting documentation for the minor repair reports because the Naval Criminal Investigative Service has found erroneous overtime documentation practices on the part of ship contract operators. These practices involved (1) ship officers' fraudulent rewriting of crew members' overtime sheets, (2) the contractor's application of nonreimbursable work toward the minor repair thresholds, and (3) the double-billing of MSC for the same hours of work. During our review, we also found instances of double-billing and the application of nonreimbursable work toward minor repair thresholds.

Another contractor was including cleaning jobs, which are nonreimbursable, in its minor repair reports. MSC did not require documentation that would have allowed it to verify that the contractor's crew had actually done the work or that the work was in fact minor repairs, rather than cleaning and maintenance work. The contracting officer told us that MSC did not request this documentation because the paperwork was excessive and burdensome for MSC. According to the contract, the cleaning and maintenance of the ship are paid for in the fixed-price portion of the contract. Cleaning and maintenance work is not to be included in the contractor's listing of minor repairs; nor is it to be billed as a reimbursable expense. While the contract contains a list of sample minor repairs, it does not contain a similar list of cleaning jobs. We asked contracting officials whether such lists might clarify which jobs can and cannot be claimed as minor repairs and therefore be reimbursable. They told us that the contract was already too specific and that adding such a list would create an adversarial relationship with the contractor because distinguishing between cleaning and minor repairs is by nature subjective.

During our review, we requested that the contractor for this second ship program provide supporting documentation for its minor repair reports. We reviewed this documentation for three ships over a 3-month period. We traced the contractor-generated lists of minor repairs back to the original timesheets filled out by the crew members. For one ship, we found that of the $15,897 the contractor claimed to meet its minor repair threshold, $3,202 (or 20 percent) was unsupported by crew overtime sheets. In addition to this unsupported work, we found that at least 24 of the 131 jobs listed as minor repairs appeared to be cleaning or preventive maintenance.
That is, 24 jobs—which cost $2,445—were for wiping up oil; defrosting the icebox; cleaning the galley, oven, staterooms, and pantry; lubricating hoses; rotating stores; waxing floors; sweeping the deck; entering timesheet data; and other similar cleaning and preventive work. When MSC’s invoice reviewer approved this list of minor repairs, he deducted only one job, which entailed waxing the decks. This deduction was for $487.65. For the other two ships’ lists of minor repairs, we found that the contractor had similarly claimed cleaning and maintenance jobs as minor repairs. These included sweeping, picking up trash, removing dust and dirt, stripping and waxing decks, and cleaning the galley and a shower, among others. For these two ship reports, the MSC reviewer made no deductions at all. In our review of minor repair reports for a third ship program, we found numerous instances in which the supporting documentation did not match the jobs listed in the minor repair reports. For example, we found instances in which the contractor had listed the same jobs twice. In addition, we found instances in which the contractor had claimed work done by individual crew members, but its minor repair report did not include timesheets as documentation to verify that these crew members were actually aboard the ships and had done the work as claimed. MSC personnel for this ship program review minor repair reports for “engineering content only.” That is, they review these reports only to verify that the costs are reimbursable under the contract, not to verify the accuracy of the reports or to take steps that would detect duplicate listings. Not only is MSC’s oversight of crew repairs inadequate, but its review of invoices for subcontracted work (second-level maintenance) is insufficient to prevent excessive payments by MSC. First, MSC does not uniformly require contractors to provide supporting documentation with their invoices that would indicate that prices are fair and reasonable. Second, MSC headquarters invoice reviewers generally do not rely on available field staff to verify that the subcontracted work was done or that it was reasonably priced. Included in all of MSC’s contracts for the operation of its ships are clauses stating that the government is obligated to pay only the costs it deems are “fair and reasonable.” In only one of its contracts, however, does MSC include requirements for the contractor to submit documentation with its invoices that would allow the invoice reviewer to determine whether the price of the goods or services is fair and reasonable. In this one contract, MSC states that without such documentation, it will not reimburse the contractor. According to MSC, its subcontract review for one contractor was heightened because this contractor’s purchasing system is not reviewed by the Defense Contract Management Command (DCMC), which is part of the Defense Logistics Agency. DCMC declined to review this contractor’s purchasing system because the dollar value of its subcontracts was so low. MSC stated that for all but this one contract, MSC has required the contractors to maintain DCMC-approved purchasing systems. We analyzed the April 1995 DCMC audit of a contractor for two of the ship programs in our review. The DCMC auditors evaluated, among other things, whether the contractor had awarded subcontracts competitively and performed adequate price analysis and negotiations. At the end of its review, DCMC approved the contractor’s purchasing system. 
However, it noted several weaknesses in this system and recommended corrective action. For example, DCMC found that only 54.5 percent of the contractor's purchase orders had been awarded competitively. For purchase orders under $25,000, only 48 percent had been awarded competitively. Finally, DCMC found that for awards without competition, 63 percent of the purchase order files did not include detailed evidence of effective price analysis or negotiation. Among the agency's recommendations was that the contractor "assure that effective price analysis is performed for each applicable single-sole source purchase order over $10,000 and to a lesser degree those under $10,000." The contractor notified MSC that it intended to implement DCMC's recommendations.

Despite the weaknesses revealed in the DCMC audit of this contractor, MSC has not adjusted its oversight of the contractor's awarding of subcontracts under $25,000. On the basis of what the contractor submits to support subcontract invoices, the MSC invoice reviewer has no way of knowing whether a subcontract was awarded competitively. Neither does the supporting documentation show whether or how the contractor determined that prices were fair and reasonable. We asked the invoice reviewer for this program whether he had ever made deductions based on his determination that the price charged was not reasonable. He said that he only remembered questioning the reasonableness of price in two cases, in 1991 and 1992. One involved whether a technical representative had flown first class or coach, and the other involved whether the technical representative had rented the appropriate rental car. In neither case did the invoice reviewer determine that a deduction was necessary. We believe that these cases involved determining the allowability of costs rather than the reasonableness of costs. That is, under the terms of MSC's contracts with its ship operators, government regulations on travel apply. Allowing a technical representative to fly first class and drive a luxury rental car would violate the terms of MSC's contracts.

On the other hand, during our review of invoices for the ship program that does require documentation of fair and reasonable prices, we found that invoices consistently included evidence of competitive bidding or a justification for a sole-source subcontract. We also found several cases in which an MSC field unit had deducted amounts from the contractor's invoices for inadequate documentation. For example, the field unit had deducted amounts for repairs and for repair parts because documentation did not indicate that the charges were fair and reasonable. We also saw a case in which the field unit deducted fax and telephone charges because the contractor had not submitted a statement explaining the nature of the calls to show that they had been made for official government business.

By contrast, for the contractor whose subcontracting weaknesses were cited by DCMC, we saw an invoice for $1,456.73 for telephone calls for a 3-day period. The invoice contained no indication of whether any of these calls were for official government business, yet the invoice was approved for payment. In our review of this same contractor's invoices, we found an invoice whose price appeared excessive. This invoice was for $3,560 to "provide labor, tools and material as necessary to replace twenty (20) lampshades . . .
relamp and repair as necessary." The invoice included no evidence of whether this work had been awarded competitively, why it had not been done by the ship's crew, or how extensive the work was. Before approving this invoice for payment, the MSC invoice reviewer did not seek further information from the contractor. When we asked for an explanation of this invoice, the invoice reviewer said that he did not know whether the lamps had been repaired or whether the lampshades had simply been replaced. After we requested supporting documentation from the contractor on this invoice, we found that MSC had paid $260 per lamp to repair 10 lamps and replace their lampshades, when it could have purchased new lamps for $210 each (excluding the costs of installation). Work on the other 10 lamps was less extensive, ranging from simply replacing the lampshades to replacing the toggle switches and/or modifying the lamp bases. (See fig. 2.1 for an example of the type of lamp repaired.) We also found that the ship's crew includes a qualified electrician whose overtime labor rate is about half that charged by the subcontractor. On another ship in this program, lampshades were replaced by the third assistant engineer, also at an hourly overtime rate about half that charged by the subcontractor. The master and the chief engineer on this ship stated that they could see no reason to use subcontractors to repair lamps because it is such a simple task and fully within the crew's capability.

Contrary to MSC's invoice certification instructions, the MSC headquarters personnel who review invoices do not verify that goods have been delivered or services provided. In their review of invoices, headquarters personnel are ensuring that what is charged by the contractors is allowable under the terms of the contract. However, they are not ensuring that parts were actually delivered or work was actually done. In effect, these reviewers are relying heavily on the integrity of the contractors and are essentially approving all invoices for items or services allowed by the contract. Field personnel, who could be used to personally verify that work has been done at reasonable costs, are primarily concerned with the condition and operation of the ships. A senior-level official from one field unit told us that when he wants something fixed, cost is not his main concern. On one program, MSC field personnel do not see invoices reflecting the cost of work performed as a result of their recommendations. In two of the five contractor-operated ship programs, field staff are located near the ships and visit them regularly. These personnel could be used to verify that work billed to MSC has been done and is reasonably priced. They could easily check work performed on the ships as part of their routine inspections. For one program, field staff are already reviewing invoices.

The MSC contracting officer has no visibility over many large-dollar repair expenditures for one ship program. MSC's contract with its contract operator on this program requires that the contractor first obtain MSC approval before subcontracting for industrial assistance that costs more than $25,000. This requirement is intended to help MSC ensure that it receives fair and reasonable prices for large repair jobs and that the work is needed. Because the contractor for this program breaks large jobs down into multiple smaller ones, it is evading the contractual requirement to obtain the contracting officer's prior approval.
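Job splitting of this kind leaves a recognizable footprint in invoice data. The sketch below is a rough illustration of how such a footprint might be flagged, not an MSC tool: it groups invoices by subcontractor, ship, and type of work and flags clusters that individually stay under the $25,000 approval threshold but collectively exceed it. The record fields are hypothetical.

    from collections import defaultdict

    APPROVAL_THRESHOLD = 25_000  # the contract requires prior MSC approval above this amount

    def flag_possible_job_splitting(invoices):
        """Group invoices by subcontractor, ship, and work category, and flag
        groups whose combined value exceeds the approval threshold even though
        every individual invoice stays below it."""
        groups = defaultdict(list)
        for inv in invoices:
            groups[(inv["subcontractor"], inv["ship"], inv["category"])].append(inv)
        flagged = []
        for key, group in groups.items():
            total = sum(inv["amount"] for inv in group)
            if total > APPROVAL_THRESHOLD and all(
                inv["amount"] < APPROVAL_THRESHOLD for inv in group
            ):
                flagged.append({"group": key, "invoice_count": len(group), "total": total})
        return flagged

A grouping of this kind would flag, for example, a run of invoices from the same subcontractor for the same ship, submitted days apart, that together total several times the threshold.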
Contractor officials told us that they routinely split jobs into segments because these ships needed to be ready to go to sea on 4 days' notice. They said that they split jobs into pieces because obtaining the MSC contracting officer's approval delays payment to the subcontractor. During our review, we found that MSC has known about this practice since 1990. In a 1990 memorandum to MSC's Contracts and Business Management Directorate, the former director of engineering at MSC stated that "although Contractors are required to obtain Contracting Officer approval for subcontracts in excess of $25,000, there are many instances where Contractors have instituted procedures that evade compliance." These procedures, he said, included issuing multiple work orders, each less than $25,000, to a single subcontractor. During our review, we asked MSC officials whether they had taken any action to prevent contractors from issuing multiple work orders and thereby evading the requirement to seek MSC's prior approval. They said they had not.

In one case, the contract operator split a job totaling $143,740 into 18 separate jobs, each under the $25,000 threshold. This work was for ship cleaning that was done by the same subcontractor on the same ship over a 3-month period. After we requested that the contractor provide us with evidence that this work had been competitively awarded, we found that the contractor had obtained quotations from three subcontractors on the price per square foot for cleaning the ship. The contractor awarded the work to the lowest bidder based on a single price quotation. It then split the job into 18 smaller ones involving the cleaning of different parts of the ship.

In another case, this same contractor submitted 71 separate invoices totaling $202,294 for welding-related work done by one subcontractor on one ship over a 4-month period. In many cases, multiple invoices were submitted to MSC on the same day. For example, 9 invoices were submitted on December 2, 1994; 12 were submitted on December 30, 1994; 18 were submitted on January 5, 1995; and 12 were submitted on February 10, 1995. Despite this pattern of billing, the MSC person responsible for reviewing these invoices said that he was not aware of the contractor's practice of splitting large jobs into smaller ones. During our review, we asked the contractor to provide documentation showing which of these 71 jobs had been competitively bid or justified as sole source. The contractor was able to show that only 30 had been awarded competitively and that 7 had been awarded sole source because they were related to competitively bid work. The contractor did not supply documentation on the other 34 jobs.

MSC headquarters personnel review overhaul work packages and discuss them in detail with representatives from the contract operators' engineering staffs before overhaul subcontracts are solicited and awarded. However, even though a ship's overhaul can cost MSC up to $6 million, MSC does not always have an MSC representative on-site during the overhauls to ensure that work contained in these work packages is actually done and that unforeseen repairs not specified in overhaul contracts are completed and reasonably priced. This lack of assurance is due at least in part to the fact that MSC has no agencywide requirement for its representatives to be present during ship overhauls.
This presence during an overhaul enables a representative of MSC to observe the condition of items of equipment when these items are opened and inspected and to determine the extent of needed repairs. In addition, the presence of an MSC representative enables MSC to monitor the extent of the repairs to prevent unneeded work. When an MSC representative is not present during an overhaul, MSC is relying entirely on the integrity and professionalism of the contract operator to protect the government's interest. Even when MSC representatives are present, the degree of involvement among MSC representatives, contract operators' representatives, and shipyard personnel varies because MSC has no written guidelines governing the authority and responsibilities of its representatives. For the three contractor-operated programs whose ships are owned by the government, we found that some MSC representatives significantly contributed to the contracting officer's ability to enforce the terms of MSC's contracts and to ensure that repairs were made in the best interest of the government. Other MSC representatives' contributions were not as significant.

Even though an MSC presence during overhauls helps to protect the government's interest, having an MSC representative on-site did not always ensure that MSC obtained negotiated prices on change orders. During one overhaul, we found that for $271,755 of a total $544,135 (about 50 percent) in change orders, the contract operator's and the shipyard's estimates were identical. For $427,111 of this change order work (about 78 percent), the "negotiated" prices between the shipyard and the contract operator were the shipyard's estimated prices. The lack of clear written guidance on the authority and responsibilities of the MSC representative contributed to MSC's failure to obtain negotiated prices on this overhaul. Because the MSC representative did not independently estimate change orders, MSC had no assurance that it did not pay excessive prices. During this overhaul, the MSC representative was simply providing the administrative contracting officer with a statement that funds were available for the work. He was not preparing independent government estimates. Such independent estimates form the basis on which the government can challenge prices charged by the shipyard.

MSC does not have written guidance to address the oversight of work done by its contract operators' "extra" crew members during overhauls. During overhauls, MSC's ships maintain skeleton crews to monitor alteration, maintenance, and repair work and to provide security for the ships. However, MSC sometimes authorizes its contractors to retain additional crew members during overhauls when the contractors can provide justification for the special work requiring their retention. MSC has no written guidance regarding oversight responsibilities for this work, and it has not established procedures for taking deductions if the authorized work is not completed.

An MSC representative for one ship program told us that he routinely inspects the work of additional crew members during overhauls. However, the benefit of these inspections is questionable for two reasons. First, MSC does not use these inspections as a basis for taking contract payment deductions. The MSC representative who actually inspects the approved work items does not receive or review the bill for this work, and no one at MSC asks for the results of his inspection when the bill for the work is reviewed.
Second, MSC does not require the contractor to obtain prior approval when changing the work items used to justify the extra crew members. The contracting officer for this program told us that she does not see why the contractor cannot deviate from the special work items it submitted as justification for its extra crew members. We visited one ship from this program on the last day of its overhaul. During that visit, we observed, as did an MSC representative, that many of the work items used to justify the ship's extra crew had been only partially completed or not completed at all. According to the MSC representative, this was not an isolated case, since on other overhauls he found that the work used to justify the extra crew had not been completed. Later that day we were told by the ship's master and chief mate that the work items had changed, and we were given a handwritten list of changes that had not been approved by MSC. Until that time, the MSC representative had not known what jobs the extra crew members were actually doing. At the end of our review, MSC had still not received a bill for this work, 10 months after the completion of the overhaul.

As we discuss in this chapter, MSC's internal controls to prevent the possibility of contractor fraud and abuse are weak in many cases. MSC's Comptroller is responsible for the coordination of MSC's internal control program. However, according to the MSC Comptroller, he does not have direct authority to ensure the sufficiency of these controls or their implementation. In 1990, Congress mandated governmentwide financial management reform by enacting the Chief Financial Officers (CFO) Act (P.L. 101-576). This act was based at least in part on the finding of Congress that "billions of dollars are lost each year through fraud, waste, abuse, and mismanagement among the hundreds of programs in the Federal Government." The Secretary of Defense has recognized that the CFO Act is a vehicle for improving DOD's financial operations. He has therefore directed that senior managers throughout DOD play a more active role in identifying, reporting, and correcting poor internal controls. This does not appear to have occurred at MSC.

MSC's oversight of ship repairs for its contractor-operated ships is inadequate to prevent overcharges. MSC lacks basic internal controls that would help to ensure that MSC is paying reasonable prices for work that is actually being done. Specifically, MSC lacks basic internal controls in its supervision of overhaul work, in its verification of crew-performed repairs, and in its review of invoices for subcontracts. Furthermore, though MSC's Comptroller is responsible for coordinating its internal controls, this person has no authority over internal controls throughout the agency.

We recommend that the Secretary of Defense direct the Commander of MSC to take the following actions: Institute MSC-wide procedures to ensure that contractors are (1) accurately reporting how they meet contract-defined thresholds for crew-performed minor repairs, (2) submitting adequate documentation with invoices for MSC to determine that prices are fair and reasonable, and (3) obtaining prior MSC approval for subcontracted work above thresholds required by the contracts. When practical, require that MSC representatives verify, through spot-checks, that minor repairs and industrial assistance paid for by MSC have actually been done and recommend deductions if necessary. These spot-checks could be done by MSC personnel as part of their normal inspections.
When practical, require an MSC representative to verify, based upon physical observation, the satisfactory completion of work performed at various stages of overhauls of MSC contractor-operated ships. Provide written guidance defining the roles, responsibilities, and authority of MSC representatives in protecting the government’s interests during overhauls and other major repair work. Consider expanding the responsibilities of MSC’s Comptroller or creating a new position for a financial management expert to oversee the implementation of the above recommendations. If a new position is created, this person should report directly to the Commander of MSC. In addition to the existing duties of the Comptroller, this person would be responsible for setting minimal internal controls for all aspects of financial management throughout MSC and overseeing the implementation of these controls. The responsibilities of this position would be similar to those of a Chief Financial Officer established under the CFO Act of 1990. In official oral comments, DOD partially concurred with the report and generally agreed with our recommendations. However, DOD generally disagreed with the details of the report and the conclusion that internal controls are weak. DOD did agree that there are opportunities for further improvements in the internal controls applied to contractor operation of MSC ships and said it has already implemented remedial measures. DOD also stated that in view of the unusual procurement situations highlighted in the report, the Commander of MSC is focusing additional attention on risk analysis and design of appropriate internal controls. We continue to believe, based on the findings discussed in this chapter, that MSC does not have an adequate system of internal controls in place. Recent fraudulent practices of a former MSC contractor and the continuing investigation by federal law enforcement agencies into MSC operations support our conclusion that MSC’s internal controls are inadequate. Effectively managed programs have three things in common. First, program requirements are carefully and systematically established based on past experience and input from customers and knowledgeable people throughout the organization. Second, responsibility for monitoring program performance and ensuring that programs meet the established requirements is clearly delineated. Third, program managers are constantly looking for ways to improve program performance and to reduce costs. During our review, however, we found that MSC does not have the organizational structure or the standardized procedures necessary to effectively manage its contractor-operated ship programs. MSC does not have guidelines for systematically establishing personnel requirements such as citizenship and security requirements. Neither does it systematically compare contractual requirements with contractors’ performance in obtaining security clearances and trustworthiness evaluations for crew members. Finally, MSC has no formal system to coordinate ideas to improve the contractors’ performance or reduce the programs’ costs. Because its own management controls are weak, MSC relies heavily on its operating contractors to prevent contract abuses. The dangers of such a heavy reliance on contractors have been demonstrated through MSC’s past experiences. For example, a now defunct ship management company billed and collected payments from MSC for fraudulent overtime aboard MSC’s oceanographic ships. 
In another case, MSC management's poor oversight resulted in the deteriorated and unsafe condition of its sealift tankers and in the crewing of these ships with significant numbers of personnel who had been convicted of felonies. We described the condition of the sealift tankers and their crews in a 1994 report.

MSC's fragmented lines of organizational authority represent a significant impediment to sound management controls. MSC recognized the problems caused by its current organizational structure and planned to begin implementing a new program management structure on October 1, 1995. Under MSC's new structure, accountability that was previously divided among various MSC headquarters departments and field levels will reside with a single individual, the program manager.

Despite the fact that MSC's contract provisions can affect a ship program's operation for 20 years or more, MSC does not have standard procedures to develop personnel requirements in its contracts. The personnel from MSC's Operations Office, who are responsible for coordinating contract requirements with the ship's sponsors, told us they do not follow checklists or standard procedures to ensure that important personnel requirements are not overlooked. Neither do they routinely consult existing contracts for other programs prior to the award of new contracts. As a result of this lack of standard procedures, MSC failed to review the resumes of some ships' crews, and some ships did not have U.S. citizenship, security clearance, or trustworthiness requirements for their crews.

MSC has no guidelines to ensure that crew qualification requirements are consistently established. Qualified crews are critical, especially in situations such as underway refueling, where the chance of a collision at sea is significantly increased. Therefore, it is essential for ship owners, operators, and those who charter ships to take precautions to ensure that the crews are qualified. Although four of the five ship program contracts we reviewed require contractors to submit the resumes of key personnel to MSC for approval before the personnel are assigned to a ship, the fifth ship program's contracts do not. An MSC official in charge of the fifth ship program told us that MSC did not need to review the resumes of crew members. He said that contractors should not crew their ships with improperly licensed crew members because they could be fined by the Coast Guard. However, for one program that required resumes, the contractor did attempt to crew its ships with improperly licensed crew members. After its review of resumes, MSC rejected two of the contractor's nominees for master positions because they did not have the proper licenses and had never served as chief mates on the program's ships.

MSC's lack of standard procedures contributed to a routine citizenship requirement clause being left out of the contracts for one contractor-operated ship program. While contracts for four of the ship programs we reviewed included clauses requiring all crew members to be U.S. citizens, the fifth program did not include this clause. The contracts for this fifth program were signed in October 1982 and April 1983, just months after another program, in August 1982, had signed contracts requiring all crew members to be U.S. citizens. Military and civilian officials in MSC's Pacific and Far East Offices expressed concern that not all personnel aboard T-5 tankers were U.S.
citizens, and following the Persian Gulf War, MSC tried to add citizenship clauses to the T-5 contracts. When the contractor refused, MSC dropped the issue. The contract for this program still does not require all its crew members to be U.S. citizens, and only Coast Guard regulations limit the number of foreign nationals on these ships. While MSC’s contracts for its other four contractor-operated ship programs require all the contractors’ personnel assigned to ships to be U.S. citizens, they do not require the contractors’ shore personnel to be U.S. citizens. MSC field personnel for one program said that MSC’s failure to include this clause for shore personnel was an oversight on MSC’s part. These field personnel said that the contractor, aware of this loophole, had proposed a port engineer who was not a U.S. citizen. However, this person was disapproved because a foreign national cannot hold a security clearance and thus would not have been able to deal with any ship maintenance or repair work that involved classified material. Contracts for all five of the ship programs we reviewed require at least some security clearances for the ships’ crew members. However, no one at MSC has established guidelines for the inclusion of security clearance requirements in contracts. As a result, a key contract requirement was inadvertently left out in one case. Four of the ship programs we reviewed had security clearance requirements in their original contracts. The fifth program added security clearance requirements during the ninth year of its contracts through contract modifications. These modifications required all corporate officers and the master, chief mate, and radio operator of each ship to have secret clearances. Although the contracts for all five ship programs require some crew members to hold security clearances, only the T-AGOS and oceanographic ships’ contracts require noncleared crew members to pass trustworthiness evaluations. Some MSC officials stated that these two ship programs have more stringent requirements for trustworthiness evaluations because of their sensitive missions. However, the program manager for another program stated that security requirements for his ship program were based on the fact that the ships are subject to sabotage. Trustworthiness evaluations determine the loyalty of an individual by checking whether the individual has committed any prior act of sabotage, espionage, treason, or terrorism. For the three ship programs that do not require trustworthiness evaluations for their unlicensed crew members, MSC does not collect or review any background information about these crew members. The Coast Guard does require mariners working aboard U.S. vessels to hold merchant mariner documents that include a criminal record check every 5 years. However, MSC does not spot-check these documents. If MSC ships are subject to sabotage, trustworthiness evaluations should be required of all its ship crew members. No office in MSC is responsible for tracking trustworthiness evaluations and security clearances for MSC’s contractor-operated ship programs to ensure that contractors are complying with contract requirements. MSC’s Office of Security, Operations Office, and Operating Contracts Division are involved with the security clearances and trustworthiness evaluations of ship crews, but communication among these offices is poor. 
As a result, MSC cannot ensure that its crews are trustworthy or appropriately cleared, and untrustworthy individuals may be assigned to ships with sensitive missions for extended periods of time before they are removed. Though we did not document any unauthorized disclosures of classified material by contractor employees, we did find that 300 crew members who were later found to be untrustworthy had been assigned to MSC’s ship programs for the time it took to conduct the trustworthiness evaluations. In one case, it took 23 months to determine that a crew member was untrustworthy. Three separate offices in MSC headquarters have distinct roles in maintaining information on contractor-operated ship crews. The Operating Contracts Division and the Operations Office maintain crew lists. The Office of Security maintains a list of trustworthy contractor personnel. However, no one from any of these three offices compares these lists to ensure that all crew members are trustworthy. In addition, the Office of Security does not track the length of time between the date the contractor submits the crew members’ original paperwork to MSC and the date MSC completes trustworthiness evaluations. As a result, crew members who may sail aboard MSC contractor-operated ships as soon as their trustworthiness paperwork has been submitted may be found much later to be untrustworthy. Over the last 8 years, MSC’s Office of Security has completed trustworthiness evaluations for approximately 2,900 of the crew members on its contractor-operated ships. It has found that 300 of these crew members did not meet the trustworthiness criteria contained in the Navy’s security instruction and thus had to be removed from MSC’s ships. Because the Office of Security destroys its original records after it makes trustworthiness determinations, we could not determine how long these 300 untrustworthy individuals had been assigned to the MSC ships with sensitive missions before they were removed. We were able, however, to determine how long it took to do 29 evaluations. We did this by matching a contractor’s active crew list to MSC’s trustworthiness file. Until MSC makes its trustworthiness evaluation, the contractor’s active crew list contains the dates the crew members’ forms were submitted. Once the evaluation is made, these original dates are lost because they are changed to the date of the completed evaluation. Therefore, we had to match an old crew list (containing the dates the forms had been submitted) to recently completed evaluations in MSC’s trustworthiness file. Eight of the 29 evaluations were completed within 4 months. However, in three of the five cases in which MSC determined that the crew members were untrustworthy, the evaluations took 10 or more months to complete (see table 3.1). During the intervening months, the untrustworthy crew members were eligible to sail on MSC ships with the most sensitive missions. Crew members who require security clearances are not assigned to MSC’s ships until their clearances have been completed. Even though more than 10 percent of the crew members MSC evaluated over the last 8 years were found to be untrustworthy and were removed from its ships, trustworthiness evaluations are still processed slowly. 
For example, when we matched one contractor’s August 1994 crew list to MSC’s trustworthiness evaluation file (updated through March 1995), we found that MSC had completed 255 of the 341 evaluations required for the contractor’s crew members, but it had not completed the remaining 86 evaluations (see table 3.2). The trustworthiness evaluation forms for 21 of the 86 crew members were submitted in 1994. However, the forms for one crew member had been submitted in August 1989, and MSC had still not completed its evaluation in March 1995, almost 6 years later. In addition, four of the contractor’s shore personnel had access to the ships with sensitive missions, even though they did not have security clearances and were not required by the contract to undergo trustworthiness evaluations. MSC’s trustworthiness evaluations for crew members on ships in MSC’s other sensitive program were delayed as well. We reviewed January 1995 crew lists for all four ships in this program and found that MSC had completed only 39 of the 94 required trustworthiness evaluations. While we did not document any unauthorized disclosures of classified material by the employees of MSC’s contract operators, we found that MSC is vulnerable to unauthorized disclosures because it is not consistently enforcing requirements for its security clearances. All of MSC’s contract operators must obtain their required clearances from the Defense Industrial Security Clearance Office, but MSC does not monitor all its contract operators to ensure that they are complying with this requirement. For one program, MSC keeps lists of the contractors’ cleared personnel in three different places—the Office of Security, the Operating Contracts Division, and the Operations Office. However, for another program, no one at MSC keeps track of the contractor’s cleared personnel. There was confusion about who was responsible for this tracking, and when we interviewed personnel from MSC’s Office of Security, Operating Contracts Division, Engineering Directorate, and Operations Office, we found that none of them had documentation showing that the officers on the ships held the proper clearances. In addition, when we visited one of this program’s ships, the master told us that only he and the radio officer had secret clearances. The contract required the chief mate to have a secret clearance as well. Even when MSC does receive clearance letters from the contractors, it does not verify the clearances with the Defense Industrial Security Clearance Office or compare the clearance letters with the contractor’s active crew lists to ensure the clearance lists are complete. Therefore, MSC cannot verify that all its contractor personnel and crew members have appropriate security clearances. When we talked to MSC’s program managers, they told us that MSC does not have a formal system for them to get together, share ideas, and evaluate the costs of different contracting techniques. As a result, MSC may be missing opportunities to implement best practices. For example, the contractor-operated ship programs we reviewed used two different contracting methods to control ship maintenance and repair costs. However, no one at MSC has compared the two contracting methods to determine whether one method is more cost-effective than the other and therefore should be adopted for all of MSC’s contractor-operated ship programs. Under one method, MSC uses a yearly budget to predict the maintenance and repair costs of its T-5 tankers. 
The operating contractor submits a proposed budget to MSC 30 days prior to each annual operating hire period. This proposed budget is based on historical costs and planned maintenance that will be completed in the following year. Personnel from MSC's Engineering and Contracting Directorates review the proposed budget and develop their own estimates. MSC and the contractor then negotiate a final budget through a contract modification. The contractor must submit quarterly reports that separate parts and technical representative services for 24 different maintenance and repair categories. At the end of the year, the Defense Contract Audit Agency audits the contractor's actual maintenance and repair costs based on a stratified statistical sample of invoices. If actual costs exceed budgeted costs, MSC reimburses the contractor. If budgeted costs are higher than actual costs, the contractor credits MSC. When we reviewed one year's records for the T-5 tankers, we found that three ships were under budget, and two were over budget. The actual maintenance and repair cost for all T-5 tankers combined was within 6 percent of the budget. According to the contracting officer, because this process worked so well on the T-5 tankers, he later incorporated it into most of his contracts for the maritime prepositioning ships.

In awarding contracts for three other contractor-operated ship programs, MSC uses a threshold method to control its maintenance and repair costs. This method, however, has not accurately predicted maintenance and repair costs, and it does not attempt to do so. It attempts only to set a fixed price for a portion of the repair costs. Under the threshold method, MSC sets a level of maintenance for the contractor to accomplish each month. This threshold is generally expressed in terms of a number of overtime hours of work to be done by a particular crew member—often the second engineer. The threshold method of controlling costs offers less flexibility than the budget method used on the T-5 tankers and maritime prepositioning ships because, unlike the budget, the threshold remains constant over the life of these short-term contracts.

Contractors do not always submit monthly maintenance reports, as required under the threshold method, and the level of maintenance and repair reported is rarely close to the threshold level. Consolidated maintenance and repair figures vary among programs and contractors, but the fiscal year 1994 figures for one ship program were almost twice the threshold level. The maintenance and repair cost for each ship in that program was 59 to 175 percent more than the ship's threshold level. The second program was 13 percent over threshold for the contract period. MSC awarded a new contract for the third program on May 23, 1995, but as of October 10, 1995, MSC still could not determine whether the operator under the previous contract was over or under the threshold. This was largely due to contractor delays in submitting reports.

While the threshold method controls costs by setting a fixed price for all work up to the threshold level, maintenance and repair work above the threshold is fully reimbursable, and the contractors are not required to obtain prior approval for this work. MSC plans to expand its thresholds in the future by including preventive maintenance, cleaning, and other work that is excluded under the current thresholds.
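The two methods imply very different payment mechanics, which the following sketch makes concrete. It is a simplified illustration of the logic described above, not of any MSC system; real settlements also involve audits, negotiated budgets, and contract modifications, and the figures used are hypothetical.

    def budget_method_settlement(budgeted_cost, actual_cost):
        """Year-end true-up under the budget method: MSC reimburses the
        contractor for costs above the negotiated budget, and the contractor
        credits MSC when actual costs come in under budget."""
        return actual_cost - budgeted_cost  # positive: MSC pays; negative: contractor credits MSC

    def threshold_method_payment(threshold, claimed_repairs):
        """Payment under the threshold method: work up to the threshold is
        covered by the contract's fixed price, and everything claimed above
        the threshold is fully reimbursable."""
        return max(0, claimed_repairs - threshold)

    # Illustrative figures only, not actual contract amounts: a program
    # claiming repairs at almost twice its threshold is reimbursed for the
    # entire excess, with no prior approval required.
    print(threshold_method_payment(threshold=500_000, claimed_repairs=975_000))  # 475000

As the sketch suggests, once a contractor's cumulative claims pass the threshold, every additional dollar of claimed work is reimbursed.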
However, if MSC does not accurately predict the costs of this excluded work and increase the threshold amounts appropriately, the contractors could quickly reach the threshold levels and then be fully reimbursed for all additional work. Until November 28, 1994, MSC had not formally designated program managers for any of its contractor-operated ship programs. However, on that date MSC’s Commander directed the head of the Operations Office to formally appoint program managers for several ship programs. As a result, two individuals from the Operations Office were designated as program managers for the five contractor-operated ship programs we reviewed. One individual was designated as the program manager for the T-5 tankers and the fast sealift ships. The other was designated as program manager for the oceanographic, maritime prepositioning, and T-AGOS ships. Since these program managers are not assigned any staff outside the Operations Office, they rely on MSC’s various headquarters and field organizations to cooperate in developing and administering their program requirements. That is, the legal, contracting, engineering, accounting, and security personnel who administer various parts of the contractor-operated ship programs are all located in different departments in MSC and report to the heads of their individual departments. Also, ship programs that are contractor-operated are not collocated but, rather, are spread out over several departments. Such an organization is not conducive to the uniform administration of contracts or to the dissemination of best practices. Ultimately, it has contributed to MSC’s failure to ensure that its contractors comply with their contracts. Specifically, MSC’s fragmented lines of authority have hindered enforcement of trustworthiness and security provisions. Some MSC personnel we talked to were very frustrated with MSC’s unclear lines of authority, especially with the chain of command for contracting issues. The contracting officer’s representative for one program told us that upper-level management provides minimal leadership and the contracting officer’s representative has little authority to act independently. Until recently, another program did not even have a contracting officer’s representative. The contracting officer for that program designated a person in the Operations Office to serve as his contracting officer’s representative on October 28, 1994. However, this person did not sign his authorization letter until August 29, 1995, the day after we had discussed our completed review with MSC officials. MSC is planning a reorganization to “clarify accountability, responsibility, and authority” for its ship programs. Under the proposed reorganization, six program managers will oversee MSC’s ship programs. Unlike the current program managers, these new program managers will have authority over staff members assigned to their programs from the field and from the Operating Contracts Division and the Engineering Directorate. MSC’s new program management structure was scheduled for implementation beginning in October 1995. MSC’s plan to designate program managers and to establish formal lines of accountability from personnel in the field and from the Operating Contracts Division and the Engineering Directorate directly to the program managers will improve communication within ship programs and should improve MSC’s ability to monitor contractors’ compliance with the terms of their contracts. 
However, MSC still will not have a system in place to systematically establish personnel requirements and to identify and implement best practices. The use of standardized procedures and best contracting practices is important for all ship programs, but it is especially critical for contractor-operated ship programs where a single contract may remain in effect for 20 years or more.

We recommend that the Secretary of Defense direct the Commander of MSC to take the following actions: Develop and require the use of standardized procedures by program managers and their staffs whenever possible to establish personnel requirements in their contracts. As part of MSC's upcoming reorganization, direct program managers to clarify accountability by (1) assigning a specific individual responsibility for each contract requirement and (2) periodically checking that contract provisions, such as those dealing with trustworthiness and security clearances, are correctly administered and met. Instruct program managers and contracting personnel to meet to discuss and evaluate ways to identify and implement best practices in their contractor-operated ship programs.

DOD concurred with the recommendations contained in this chapter. However, it did not concur with our findings that (1) MSC does not have standard procedures to develop personnel requirements and (2) MSC has no systematic approach to identify and implement best practices. In addition, DOD only partially concurred with our findings that (1) MSC does not ensure that contractors comply with requirements for crew trustworthiness and security clearances and (2) fragmented lines of authority impede sound management.

In disagreeing with the finding concerning standard procedures for personnel requirements, DOD stated that MSC evaluates lessons learned from operating contracts before issuing solicitations for new contracts. It also stated that while MSC does not require 100 percent of its tanker crews to be U.S. citizens, currently, all of them are. We maintain that MSC's failure to require 100 percent citizenship on its T-5 tankers indicates that MSC does not always evaluate lessons learned from other ship operating contracts. In contracts signed less than a year before the T-5 tanker contracts, MSC required that 100 percent of the maritime prepositioning ships' crews be U.S. citizens. Furthermore, in contracts signed after the T-5 tanker contracts, MSC required that all crew members be U.S. citizens on T-AGOS, fast sealift, and oceanographic ships. Although all the crew members now on the tankers are U.S. citizens, this was not the case in the past. For example, past crews have included citizens from Romania and Yemen. In addition, there is no guarantee that 100 percent of future crew members will be U.S. citizens, since that is not an MSC requirement.

In disagreeing with the finding concerning best practices, DOD stated that best practices are shared, but the budgeting system used for the maritime prepositioning ship and T-5 tanker programs is not appropriate for other ship programs because the circumstances and contract terms are different. In our report, we acknowledged the differences between the T-5 tankers and maritime prepositioning ships and the rest of the contractor-operated ships we reviewed. However, these differences do not preclude the sharing of best practices between the programs. Furthermore, MSC has not done a cost comparison between the two different methods of controlling maintenance and repair costs.
Although DOD partially concurred with our finding concerning MSC's tracking of crew trustworthiness and clearances, it said that trustworthiness evaluations are done by the Defense Investigative Service and that the reports should be destroyed following final action. As our report points out, trustworthiness determinations are made by MSC, not by the Defense Investigative Service. Although the Defense Investigative Service reports MSC uses during the trustworthiness evaluation process must be destroyed after a final determination is made, MSC can and should track whether crew members have had trustworthiness evaluations.

Although DOD partially concurred with our finding concerning fragmented lines of authority, it stated that lines of authority have always delineated responsibilities for contractor-operated ships. We maintain that the lines of authority and responsibility were not always clearly delineated in the past, particularly regarding contracting officers' representatives.

Major contributors to this report were Sharon A. Cekala, Joan B. Hawkins, Joseph P. Walsh, Michael J. Ferren, Beverly C. Schladt, and Martin E. Scire.
In January 1968, oil was discovered in Prudhoe Bay on Alaska’s North Slope—an 88,000 square-mile frozen landmass extending from the foothills of the Brooks Mountain Range to the Arctic Ocean, as shown in figure 1.1. The Prudhoe Bay area, located about 250 miles north of the Arctic Circle and about 1,200 miles south of the North Pole, had no local road system and was inaccessible by tanker most of the year because extremely cold temperatures freeze the nearby Arctic Ocean. Consequently, oil companies began planning the construction of the Trans-Alaska Pipeline System—an 800-mile pipeline to transport oil from the frozen Alaskan North Slope to Valdez, on Alaska’s Prince William Sound, for shipment to distant refineries. The Congress approved pipeline construction in November 1973, and construction was completed in July 1977. The first commercial tanker carrying Alaskan North Slope oil from Valdez left for the U.S. West Coast on August 1, 1977. Alaska contains huge quantities of crude oil. The Prudhoe Bay discovery was the largest in North America. Oil companies estimate that the state had at least 41 billion barrels of oil in place at the time of the North Slope discovery. According to Alaska Department of Natural Resources data, updated May 1998, an estimated 19.5 billion barrels were extractable using today’s technology and under prevailing economic conditions (commonly referred to as proven reserves). Of the 19.5 billion barrels of proven reserves, about 13.8 billion barrels have already been produced by 22 fields. Thirteen Alaskan North Slope fields that contained an estimated 18.2 billion barrels of proven reserves have produced about 12.5 billion barrels. Prudhoe Bay, the oldest and largest field on the Alaskan North Slope, accounted for about 73 percent of those reserves and about 80 percent of total production. The remaining proven reserves are contained in nine Cook Inlet fields that have already produced about 1.2 billion barrels. Since 1978, the first full year of Alaskan North Slope oil production after the completion of the Trans-Alaska Pipeline, Alaska has accounted for between 14 and 25 percent of U.S. crude oil production and has ranked among the largest U.S. crude oil-producing states every year. The Alaska Department of Natural Resources’ estimates, however, did not include all Alaska oil. The estimates excluded Alaskan North Slope oil fields in various stages of development that had not produced measurable quantities of oil by 1998. They also excluded the Alaska National Petroleum Reserve, the Arctic National Wildlife Refuge, and undeveloped Outer Continental Shelf areas. Oil analysts believe these areas contain billions of barrels of proven reserves. British Petroleum-Amoco Corporation (BP-Amoco), Atlantic Richfield Company (ARCO), and Exxon have controlling interests in most Alaskan North Slope oil production. As shown in figure 1.2, in 1998 these three companies owned production rights for over 90 percent of the Prudhoe Bay field and accounted for over 90 percent of all the oil removed from the Alaskan North Slope. Fourteen other companies also had production interests in the Alaskan North Slope in 1998, including companies owned by native Alaskan groups. The addition of Alaskan North Slope oil production to the oil produced in California and other West Coast states meant that, for the first time, production on the U.S. West Coast was greater than West Coast refiners’ demand for crude oil. Consequently, oil producers in Alaska looked to other markets. 
Figure 1.3 shows the historical shipping routes for Alaskan North Slope oil and the location of potential refining markets. This figure illustrates the principal difference between these potential markets—namely, the distance between these markets and the Port of Valdez. Generally, shorter shipping distances translate into lower transportation costs and higher profits for oil producers, although other factors, such as tanker size, also affect costs. The West Coast is the closest domestic market for Alaskan North Slope oil, and Asia is closer than most other U.S. markets, such as the U.S. Gulf Coast and U.S. Virgin Islands. However, the Congress had banned the export of Alaskan North Slope oil. Therefore, Alaskan North Slope oil producers took oil not sold on the West Coast to more distant domestic markets.

The proximity to Valdez, along with the ban on exports, made the West Coast the preferred destination for the sellers of Alaskan North Slope oil. Because this oil's characteristics (weight and sulfur content) differed from those of foreign oil, refiners had to invest in additional refining equipment to handle the Alaskan North Slope oil. After West Coast refiners retooled to efficiently process that oil, Alaskan North Slope oil took the place of much of the foreign oil that West Coast refiners had imported. In 1998, Alaskan North Slope oil constituted about 43 percent of all crude oil refined on the West Coast.

The discovery of oil on the Alaskan North Slope, along with the export ban, also had an effect on the U.S. oil-shipping industry. U.S. shipyards built over 50 tankers in the 1970s and 1980s to carry crude oil from Valdez to distant refineries. Until the Congress lifted the ban on exporting Alaskan North Slope oil, tankers transported the Alaskan North Slope oil to U.S. ports. As a result, the tankers were required to comply with the Jones Act. The Jones Act, along with several related trade laws, requires that any vessel transporting cargo between U.S. ports be U.S.-built, U.S.-flagged (registered), U.S.-owned, and U.S.-crewed. Under an exception in the Jones Act, foreign-built tankers were allowed to transport oil from Valdez to the U.S. Virgin Islands.

The Congress banned exporting Alaskan North Slope oil when it authorized the construction of the Trans-Alaska Pipeline in 1973. The legislation, which was enacted in the midst of the Arab oil embargo, amended the Mineral Leasing Act of 1920 and restricted the export of U.S. oil transported over a federal right-of-way. Exports were allowed only if the President found that they would not diminish the quantity or quality of oil available to the United States and were in the national interest. The Energy Policy and Conservation Act of 1975, the Export Administration Act of 1979, and various other laws provided additional restrictions on Alaskan North Slope oil exports. These restrictions were intended, in part, to reduce U.S. dependency on foreign oil, ensure that Alaskan North Slope oil would be used to benefit U.S. citizens, and protect the U.S. economy from a drain of scarce resources.

The export ban was controversial from its beginning, and the pros and cons of lifting it were debated in congressional hearings and in other discussions for years. In addition, several studies addressed the likely effects of lifting the ban. At issue was who would, and who would not, benefit from lifting the ban.
Advocates of lifting the export ban argued that it created a surplus of Alaskan North Slope oil on the West Coast, in turn depressing price and production and limiting state governments' revenues. For example, the Department of Energy concluded in 1994 that lifting the ban on exporting Alaskan North Slope oil would (1) increase the price of the oil by expanding its markets, (2) increase Alaska and California revenues through increased royalties and taxes, and (3) generate new economic activity and employment in Alaska and California. Moreover, these benefits were expected to accrue without an increase in gasoline prices.

Opponents argued that lifting the ban would have adverse consequences. For example, in a 1995 report prepared for the Coalition to Keep Alaska Oil, consultants agreed with the Department of Energy's 1994 conclusions that the price and production of Alaskan North Slope oil would increase. But they also predicted that oil companies' export-related revenue and production gains would be small and of short duration because the West Coast would become dependent on foreign imports. The consultants also predicted that refiners that only refine crude oil and do not produce oil (commonly referred to as independent refiners) would become dependent on Alaskan North Slope oil because they would have no practical access to cheaper foreign oil and their profit margins would decrease. Furthermore, the report stated, consumers' prices would increase because crude oil prices would be higher. Finally, allowing companies to export oil on foreign-built tankers instead of more costly U.S.-built tankers was expected to hurt the U.S. shipping industry.

In 1990, we reported that lifting the ban would likely increase the price of Alaskan North Slope oil. We reported that some oil would likely be exported to Asia instead of being shipped to the U.S. East and Gulf Coasts, the U.S. Virgin Islands, Puerto Rico, and possibly some U.S. West Coast ports because transportation costs to Asia were lower. We also reported that lifting the ban would promote economic efficiency by increasing domestic oil production and allowing better use of refinery resources. Finally, we stated that lifting the ban would accelerate the decline in demand for U.S. tankers because Alaskan North Slope oil would be exported on foreign-flagged instead of U.S.-flagged tankers.

In 1995, the Congress lifted the ban on exporting Alaskan North Slope oil (P.L. 104-58, title II). The 1995 act eliminated the export restrictions in the Mineral Leasing Act of 1920 and various other statutes and regulations. The act also requires that oil tankers transporting Alaskan North Slope oil to foreign destinations be U.S. documented (including U.S.-registered and U.S.-crewed) but not necessarily U.S.-built. According to the conference report accompanying the 1995 legislation, the purpose of lifting the export ban was to allow Alaskan North Slope crude oil to compete with other crude oil in the world market under normal market conditions. The first commercial tanker exporting Alaskan North Slope oil left Valdez for Asia on May 31, 1996, about 6 months after the ban was lifted. The 1995 law required us to review Alaska and California energy production and the effects of lifting the ban on independent oil refiners, consumers, and shipbuilding and ship repair yards on the West Coast and Hawaii.
As agreed with the Senate Committee on Energy and Natural Resources and the House Committees on Resources and on Commerce, this report responds to that mandate and addresses the effects of lifting the export ban on (1) Alaskan North Slope and California crude oil prices and production and (2) refiners, consumers, and the oil-shipping industry (including the tanker fleet, the tanker building industry, and the tanker repair industry) on the U.S. West Coast. To put the effects of lifting the ban in context, this report discusses changes in Alaska and California production during the past decade (1989 through 1998). This report also discusses export-related environmental issues resulting from lifting the ban (see app. I). To assess the effect of lifting the export ban on Alaskan North Slope and California crude oil prices and production, we collected and analyzed crude oil price and production data from the Department of Energy, the Alaska Departments of Natural Resources and of Revenue, the California Departments of Conservation and of Revenue, selected oil producers and refiners, the Alyeska Pipeline Service Company—the organization that operates the Trans-Alaska Pipeline System—and Platts Oil Prices Data Base as reported by Standard & Poor’s DRI. We also reviewed previous GAO reports, studies, and other available literature. In addition, we interviewed federal, state, and oil industry officials to obtain their views on the effects of lifting the ban. Furthermore, we conducted statistical analyses using oil-price data before and after the ban was lifted to determine how lifting the export ban had affected the prices of Alaskan North Slope and California oil. A complete discussion of our statistical and economic analyses for determining the effects of lifting the export ban on Alaskan North Slope and California crude oil prices is in appendix II. To assess the effects of lifting the export ban on refiners, consumers, and the oil-shipping industry on the West Coast, we interviewed West Coast crude oil-refining officials, consumer groups, and oil-shipping industry officials to obtain their views on the effects of lifting the ban. We also conducted statistical analyses of the effects of lifting the export ban on the prices of key petroleum products used by West Coast consumers. These analyses were similar to those used to determine the effects of lifting the ban on oil prices. Furthermore, to review the effects of oil exports on the U.S. oil-shipping industry, we talked to Alaskan North Slope oil industry officials, tanker fleet operators, shipbuilding and ship repair industry officials, maritime union representatives, state environmental groups, and state and federal officials. We contacted federal agencies, including the U.S. Maritime Administration and U.S. Coast Guard within the Department of Transportation and the U.S. Customs Service. We also interviewed state officials in Alaska, California, Oregon, and Washington State, and industry officials in these states (including officials with oil companies that refine oil in Hawaii) and in Washington, D.C. From these officials, we obtained and analyzed selected data and records to understand trends in the Alaskan North Slope shipping, shipbuilding, and ship repair industries and to identify the impact of oil exports on these industries. In addition, where applicable, we applied established economic concepts and theories to predict the likely effects on Alaskan North Slope and California crude oil production in the future. 
When important price, production, refining, and shipping data were unavailable because they were proprietary, we attempted, to the extent possible, to obtain such information from alternative sources. However, because of proprietary data limitations, we were unable to determine the full effects of lifting the export ban on cost increases for refiners using Alaskan North Slope or comparable California oil or on the U.S. West Coast market in general.

We provided a draft of this report to the Department of Energy, including its Energy Information Administration and Office of Policy, for review and comment. We discussed the report with Energy Information Administration officials, including the Director, Petroleum Division, and Office of Policy staff. While the Department did not take a position on the findings presented in the report, it provided clarifying comments that we incorporated, where appropriate. We conducted our work from July 1998 through June 1999 in accordance with generally accepted government auditing standards.

Lifting the export ban raised the prices of Alaskan North Slope and some California oils by $0.98 to $1.30 per barrel over what they would have been had the ban not been lifted. To date, these price increases have not had an observable effect on Alaskan North Slope and California crude oil production. Nevertheless, future oil production should be higher than it would have been had the ban not been lifted because higher crude oil prices have given producers an incentive to produce more oil. According to oil industry officials, new oil fields developed in Alaska since the ban was lifted are expected to increase Alaskan North Slope oil production by an average of 115,000 barrels per day for the next two decades. However, we could not separate the effects of lifting the ban on expected production from the effects of broader oil market changes occurring at the same time. For example, relatively high world oil prices in 1996 and 1997 encouraged oil producers to expand exploration and development, while low prices in 1998 caused producers to close wells and reduce development. Moreover, this expected production increase will not reverse the decade-long decline of Alaska and California oil production, which is expected to continue as aging oil fields become depleted.

While world oil prices have been volatile since the export ban was lifted, the price of Alaskan North Slope and some California oil sold in the West Coast market is higher than it would have been had the export ban not been removed. Allowing exports to Asia meant increased demand for Alaskan North Slope oil and higher prices. To determine the effect of lifting the ban on oil prices, we developed a time-series model. Because oil prices are influenced by many factors other than removing the ban, we had to control for these other factors. We did this by modeling the differences between the prices of West Coast oils and the prices of similar oils in other markets. Our analysis indicates that the market price of Alaskan North Slope oil rose compared with the prices of three oils—Brent Blend, Nigerian Forcados, and West Texas Intermediate. The price increase for Alaskan North Slope oil relative to these three oils ranged from $0.98 to $1.30 per barrel.
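The price-differential approach just described can be sketched as a simple interrupted time-series regression: difference the Alaskan North Slope price against a benchmark oil to absorb marketwide movements, then test whether the spread shifted after the ban was lifted. The sketch below is a minimal illustration under assumed inputs, not the model estimated for this report (that model is documented in appendix II); the data file, column names, and cutoff month are hypothetical, and a full analysis would also correct for serial correlation in the monthly spread.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly price series in dollars per barrel; the file and
# column names are assumptions for this sketch.
prices = pd.read_csv("oil_prices.csv", parse_dates=["month"])

# Differencing against a benchmark oil controls for marketwide movements.
prices["spread"] = prices["ans_price"] - prices["wti_price"]

# Indicator for observations after the export ban was lifted
# (P.L. 104-58, enacted November 1995); the cutoff month is an assumption.
prices["post_ban"] = (prices["month"] >= "1995-12-01").astype(int)

# The post_ban coefficient estimates the shift in the spread, that is,
# the per-barrel price effect attributed to lifting the ban.
model = smf.ols("spread ~ post_ban", data=prices).fit()
print(model.params["post_ban"])
```

In this setup, the coefficient on the post-ban indicator is the estimated change in the Alaskan North Slope price relative to the benchmark, which is how a per-barrel effect in the $0.98 to $1.30 range would be read off.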
The effect of lifting the ban on California oil prices depends on the type of oil examined. Light-weight oil with a low sulfur content is higher quality and more valuable than heavy oil with a high sulfur content because high-quality oil costs less to refine into gasoline and other light petroleum products. Alaskan North Slope oil is lighter weight and has a lower sulfur content than most California oils. Our analysis indicates that the price of "Line 63" oil in California, which is similar in quality to Alaskan North Slope oil, rose by $1.28 per barrel compared with the price of West Texas Intermediate oil as a result of lifting the ban. However, the effect of lifting the ban on the prices of two other California oils we examined (Kern River and THUMS) was insignificant. These two oils are heavy, in contrast with Alaskan North Slope and Line 63 oil, which may explain why their prices did not respond to the removal of the export ban in the same way. Appendix II explains the methodology we used to estimate these price increases as well as the economic explanation for why oil prices were expected to increase when the ban was lifted.

Lifting the export ban also resulted in lower shipping costs for oil exported to Asia. For example, the total transportation cost in 1996 for oil sold in Asia was about $4.51 less per barrel than for oil sold on the U.S. Gulf Coast. Overall, shipping costs fell by at least $15 million in 1996, $28 million in 1997, and $22 million in 1998 from what they would have been had oil not sold in the West Coast market continued to go to other domestic markets. Like higher oil prices, lower shipping costs improve oil companies' incentives to produce more oil.

Table 2.1 shows the differences in length of tanker voyages, pipeline tariffs, and total transportation costs per barrel for oil shipped from Valdez, Alaska, to Asia and the U.S. Gulf Coast, the U.S. Virgin Islands, and the Mid-Continent in 1996. As the table shows, an average tanker trip to Asia took 30 days, while the average trip to the Gulf Coast took 41 days. In the case of oil sold in the Gulf Coast and the Mid-Continent, shippers paid pipeline tariffs in addition to tanker costs. The additional pipeline tariff was approximately $0.82 per barrel for Gulf Coast shipments and $2.17 per barrel for Mid-Continent shipments. U.S. Virgin Islands shipments went by tanker from Valdez around Cape Horn. This route took an average of 84 days, or about twice as long as the next-longest route. However, the shipping costs to the U.S. Virgin Islands were slightly lower than for the much shorter journey to Asia because the oil companies used larger foreign tankers with foreign crews to transport the oil to the U.S. Virgin Islands. Foreign tankers are much less costly to build, and operating costs for foreign-crewed vessels are lower than for U.S.-crewed vessels. Although the 1995 law does not prohibit exports on foreign-built tankers, all shipments of Alaskan North Slope oil other than to the U.S. Virgin Islands have gone on U.S.-crewed tankers. Table 2.1 also shows the average costs for West Coast shipments in 1996. As the table shows, the West Coast is the lowest-cost destination for Alaskan North Slope oil.
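The per-barrel comparison in table 2.1 reduces to adding any pipeline tariff to the tanker cost for each destination. The sketch below illustrates that arithmetic; the tariff figures come from the discussion above, while the tanker costs are invented placeholders chosen only so that the Asia-versus-Gulf Coast difference matches the roughly $4.51 per barrel cited for 1996.

```python
# Per-barrel transportation cost from Valdez: tanker cost plus any pipeline
# tariff at the destination. Tariffs are from the report; the tanker costs
# below are hypothetical placeholders.
PIPELINE_TARIFF = {"Asia": 0.00, "Gulf Coast": 0.82, "Mid-Continent": 2.17}

def total_cost_per_barrel(destination: str, tanker_cost: float) -> float:
    """Total per-barrel transportation cost for a given destination."""
    return tanker_cost + PIPELINE_TARIFF[destination]

# Illustrative tanker costs, chosen so the difference reproduces the
# roughly $4.51-per-barrel Asia-versus-Gulf Coast gap cited for 1996.
asia = total_cost_per_barrel("Asia", tanker_cost=2.50)
gulf = total_cost_per_barrel("Gulf Coast", tanker_cost=6.19)
print(f"Asia saves ${gulf - asia:.2f} per barrel relative to the Gulf Coast")
```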
Higher market prices for Alaskan North Slope oil and lower shipping costs for exported oil have given oil producers an incentive to produce more crude oil. To date, however, this incentive has not had an observable effect on Alaskan North Slope or California crude oil production. Oil industry officials told us that any effects on production would not occur immediately. There is a lag between the time producers begin to receive higher prices for Alaskan North Slope oil and the time it takes for additional development activities to produce more oil.

Oil companies began developing several new fields after the export ban was lifted, and production from these fields is projected to add significantly to future Alaskan North Slope production. Figure 2.1 shows the expected impact on production levels—starting in 1999—of the new fields developed since the export ban was lifted. The bottom line in the figure shows the current projected production of fields that existed prior to the lifting of the export ban. The top line shows the current projected production of all fields—including those that were developed and those for which development has been planned and approved—since the export ban was lifted. The additional projected production between 1999 and 2020 from these new fields is about 115,000 barrels per day, on average. Some oil industry officials told us that some of these new developments were in response to the removal of the export ban, while others said it was difficult to point to one factor to explain the change.

We found no evidence of a similar increase in oil production in California. Overall oil production in California has continued to decline in the years since the ban was lifted, and we did not observe an expansion of development activity. While an increase in the market price of some California oils would be expected to lead to increased levels of production, none of the oil producers we contacted said they had increased their production as a result of lifting the ban.

We could not separate the effects of lifting the export ban on expected production increases from the effects of broader oil market changes occurring at the same time. Among the other factors positively affecting production decisions were generally high oil prices in 1996 and 1997 and improvements in oil exploration and recovery technology. Higher oil prices encourage greater investment in production and exploration. Average market prices for Alaskan North Slope oil in 1996 and 1997 were $17.74 and $20.90 per barrel, respectively, compared with $15.86 in 1998. Similarly, improved production and exploration technology has lowered production costs, providing greater incentive to produce more oil. More recently, low oil prices in 1998 caused California oil producers to close some oil wells to avoid maintenance costs. The low prices also caused Alaska oil producers to delay planned investments and development. Oil company officials, government analysts, and industry experts told us that separating the effects of lifting the export ban from such other factors is difficult, if not impossible.

The expected increase in Alaskan North Slope oil production from lifting the ban will not reverse the long-term decline in oil production in Alaska and California as aging oil fields in these states become depleted. As shown in figure 2.2, crude oil production in both Alaska and California decreased almost every year from 1989 through 1998. During that period, Alaska production decreased by about 35 percent, or about 696,000 barrels per day, primarily because increased production in new, relatively small oil fields did not offset decreased production in large aging fields.
New fields and fields that had been closed but were reopened during that period added about 236,000 barrels per day in 1998, which was less than the production decrease in the Prudhoe Bay field, the oldest and largest oil field on the Alaskan North Slope. By 1998, the Prudhoe Bay field was about 74-percent depleted, and production was about half the 1989 level—about 713,000 barrels per day versus about 1.43 million barrels per day. California production also decreased by about 9 percent during that period, or about 94,000 barrels per day, because production in new fields did not offset decreased production from aging fields. Low oil prices in 1998 also discouraged California production.

Alaska revenue rose because of the higher market prices and lower shipping costs that resulted from lifting the export ban. Alaska's petroleum revenue comes from severance taxes, royalties, corporate income tax, property tax, and petroleum rent and lease bonuses. Royalty, severance tax, and income tax revenue are based on the value of oil after excluding pipeline tariff and transportation costs. In April 1998, the Alaska Department of Revenue estimated that the annual increase in revenue resulting from higher West Coast market prices for Alaskan North Slope oil was $40 million. Department officials also estimated that the annual increase in revenue from lower shipping costs to Asia was $10 million. These effects were the direct result of lifting the export ban. California revenue comes from a share of federal royalties, income taxes, and property taxes. California officials told us that they receive relatively little revenue from these sources. Consequently, there was no significant change in revenue as a result of lifting the export ban.

Lifting the oil export ban has had limited effects on refiners, consumers, and the oil-shipping industry—including Alaskan North Slope fleet operators, shipbuilders, and tanker repair yards. Higher market prices for Alaskan North Slope and some California oil increased some refiners' costs but had no effect, or an unclear one, on other refiners' costs. Despite higher crude oil costs for some refiners, West Coast consumers appear to have been unaffected by lifting the ban because the prices of important petroleum products they use have not increased. There have also been minimal effects on the shipping industry to date, although shipbuilding and repair industry officials are concerned that business may shift in the future to low-cost foreign shipyards.

While higher prices for Alaskan North Slope and comparable California oil increased the costs of some individual refiners, we could not determine the extent of the cost increase for these refiners or for the West Coast market in general. Proprietary data needed to make the determination were not available. The impact of rising costs on refiners depends on their ability to pass these costs on to consumers by raising the prices of the petroleum products they sell. Higher market prices for Alaskan North Slope and comparable California oil translate directly into higher costs for refiners buying this oil on the market. However, not all refiners are affected equally. We looked at three hypothetical cases. First, a refiner buying large volumes of Alaskan North Slope and comparable California oil would experience cost increases when the prices of such oil rise.
In the case in which a refiner buys nothing but this oil and always at the market price, costs would rise by exactly the amount the price increased as a result of lifting the ban—about $0.98 to $1.30 per barrel on the basis of our analysis. Second, the costs for a refiner buying little or no Alaskan North Slope or comparable California oil would be largely unaffected by increases in the market prices of this oil. Finally, for some refiners that refine mostly oil that comes from their own companies' wells, the effect of the increase in the market price of the oil they produce and refine is unclear because their oil is not sold in the market.

Data on refiners' crude oil purchases and the prices paid are unavailable because they are proprietary. Therefore, we could not determine the increase in refiners' costs caused by the higher Alaskan North Slope and California oil prices that resulted from lifting the ban. Some refiners we contacted said they pay higher prices for this oil, some said they were unaffected, and others said it was analytically impossible to determine the effect. However, none of the refiners shared specific cost data with us.

The extent to which refiners can pass higher costs on to consumers determines how their profits are affected by increased crude oil prices. The ability of West Coast refiners to pass rising crude oil costs on to consumers may be constrained by competitive oil market conditions. Because not all refiners were affected equally by increasing oil costs, those refiners whose costs increased the most may not be able to increase their product prices to fully recoup the costs without losing sales to those refiners whose costs did not rise by as much. Increases in crude oil costs not passed on to consumers in the form of higher prices will reduce profit margins for refiners. West Coast refiners we contacted did not reveal the extent to which they passed on increased acquisition costs for crude oil to consumers.

We analyzed the differences between the prices of West Coast petroleum products and the prices of the same products in other U.S. markets. Our analysis indicates no significant changes in the prices of regular unleaded gasoline, diesel, and jet fuel as a result of lifting the export ban. In 1998, these three products accounted for more than 80 percent of the total output of West Coast refineries, as well as the bulk of consumers' expenditures on petroleum products. These products were chosen because they are good indicators of any potential change.
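The three refiner cases above reduce to a simple proportionality: a refiner's per-barrel cost increase is roughly the market price increase scaled by the share of its crude slate bought on the market as Alaskan North Slope or comparable California oil. The sketch below works through that assumption; the price range comes from the analysis above, and the example shares are invented for illustration.

```python
# Per-barrel cost impact on a refiner, assuming the impact scales with the
# share of its crude slate bought at market prices as Alaskan North Slope
# or comparable California oil. The price range is from the analysis above;
# the example shares are invented.
PRICE_INCREASE_RANGE = (0.98, 1.30)  # dollars per barrel

def cost_increase_range(market_share: float) -> tuple:
    """Estimated per-barrel cost increase for a given market-purchase share."""
    low, high = PRICE_INCREASE_RANGE
    return market_share * low, market_share * high

for share in (1.0, 0.5, 0.0):  # buys only this oil, half its slate, or none
    low, high = cost_increase_range(share)
    print(f"market share {share:.0%}: ${low:.2f} to ${high:.2f} per barrel")
```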
Lifting the oil export ban has had a limited effect on the Alaskan North Slope oil tanker fleet, the U.S. shipbuilding industry, and the West Coast tanker repair industry. Overall, most tankers carrying Alaskan North Slope oil continue to take the oil to the U.S. West Coast, and the demand for U.S. tankers to transport Alaskan North Slope oil has continued to decline, although exports have slightly offset the decline. Foreign-built tankers have not been used to export Alaskan North Slope oil, and U.S. shipbuilders have not lost orders for new tankers to foreign shipyards. Furthermore, there has not been a trend toward more foreign repairs of Alaskan North Slope tankers since exports began. Nevertheless, U.S. shipbuilding and West Coast repair yard officials are concerned that they may lose future business to foreign shipyards in part because of oil exports.

Lifting the oil export ban has not greatly altered the number and routes of tankers used to transport Alaskan North Slope oil to date. While the 1995 law that lifted the ban does not require companies to use U.S.-built tankers for export shipments, the fleet serving the Alaskan North Slope remains basically domestic, both in vessel registration and shipment destinations. Moreover, this fleet is almost entirely owned by, or under long-term charter to, the major Alaskan North Slope oil producers. The number of tankers used to transport Alaskan North Slope oil from Valdez has been decreasing steadily in the 1990s as a result of the downward trend in Alaska oil production. In 1998, the Valdez fleet had 30 tankers, compared with over 50 in 1990.

Lifting the ban has not significantly altered Alaskan North Slope shipping operations. Most of the oil produced continues to be shipped to West Coast refineries. A small percentage—about 5 percent—of the oil has been exported since the export ban was lifted. The major oil producers in Alaska ship most of their oil to West Coast states, particularly Washington and California—to refineries around Puget Sound, San Francisco, and Los Angeles. In 1998, the average volume shipped to West Coast refineries was a little over one million barrels per day, carried by 30 tankers in 465 shipments. In comparison, only one major producer—BP-Amoco—has been a significant exporter. Since exports began in May 1996, it has exported an average of about 60,000 barrels per day. For example, in 1998, five different tankers chartered to BP-Amoco took a total of 20 shipments to Korea, China, Japan, and Taiwan. An Exxon tanker also took one shipment to Japan in 1997 and one in 1998. Recent trends in major destinations and volumes shipped are shown in table 3.1.

As shown in table 3.1, the volume of oil shipped to Washington/California and Hawaii has decreased gradually in recent years, while the volume shipped to Alaska increased from 1994 through 1997, then decreased in 1998. At the same time, the volume shipped to the U.S. Gulf Coast via Panama and to the U.S. Virgin Islands around Cape Horn fell to zero after the export ban was lifted. According to federal maritime and industry officials, both the U.S. Gulf Coast and U.S. Virgin Islands destinations were declining even without the influence of exports because, compared with U.S. West Coast destinations, they involve high shipping costs, especially the shipments to the U.S. Gulf Coast. Some officials said that export shipments in effect replaced the trade with the U.S. Virgin Islands and accelerated its end.

Exports have affected some tanker operators more than others. Officials of ARCO and Exxon, which have subsidiaries that own and operate tankers in the Alaskan North Slope trade, said that because they have made few, if any, export shipments, lifting the export ban has had little or no effect on their Alaskan North Slope tanker fleets. However, officials of BP-Amoco (which is not a U.S.-owned corporation and therefore is not permitted to own tankers engaged in the U.S. domestic trade) said that exports to Asia allow the company to lower its transportation costs and thus provide an important new market. In addition, officials of the charter shipping companies that carried exports for BP-Amoco said that the export legislation benefited their business. These officials said that exports have slightly increased the demand for U.S. tankers to carry Alaskan North Slope oil. According to officials of two companies, because of exports, a few of their tankers that might otherwise have been unused were active in the Alaskan North Slope fleet.
Our analysis confirmed that while overall fleet size continues to decrease, exports may have slightly increased the demand for U.S. tankers in the Alaskan North Slope trade in 1996 and 1997. Exports have led to the disappearance of foreign-registered tankers from the Alaskan North Slope fleet and may therefore have caused an increase in jobs for U.S. tanker crews. Foreign tankers with foreign crews carried Alaskan North Slope oil from Valdez to the U.S. Virgin Islands under a long-standing exception in the Jones Act. As shown in table 3.1, before the ban was lifted, oil was shipped from Valdez around Cape Horn to refineries in the U.S. Virgin Islands. Several foreign-registered, foreign-crewed tankers made these trips. According to our analysis, lifting the ban caused these foreign tankers and crews to be replaced by U.S.-crewed tankers going to Asia. Tankers carrying Alaskan North Slope oil from Valdez to Asia to date have been U.S.-documented (including U.S.-registered and U.S.-crewed) and U.S.-owned, as required by the 1995 legislation that lifted the export ban. As a result of this change in destinations, the equivalent of one or two additional U.S. tankers was used to carry Alaskan North Slope oil in 1996 and 1997, creating an estimated 58 to 115 U.S. tanker crew jobs. These jobs partially offset the overall decrease in U.S. tanker crew jobs in the Alaskan North Slope trade during the past decade caused by declining crude oil production and fleet size.

To date, lifting the oil export ban has also had a limited effect on the U.S. shipbuilding industry. Demand for new tankers for the Alaskan North Slope trade—either U.S.- or foreign-built—appears to be minimal at present and driven primarily by factors other than exports. Since the export ban was lifted, Alaskan North Slope tanker operators have had the option of exporting oil in foreign-built tankers, but to date they have not done so. Likewise, U.S. shipyards have not lost orders for new Alaskan North Slope export tankers to foreign shipyards. Although several U.S. shipyards are equipped to build Alaskan North Slope tankers, no U.S. shipyard has delivered one since 1987. According to industry officials, U.S. shipbuilders have been at a price disadvantage in the world commercial shipbuilding market because of, among other reasons, higher costs and less-modern production methods.

U.S. shipbuilders and other industry officials expected 10 or more new orders in the 1990s for tankers to serve the Alaskan North Slope. These expectations resulted in part from the enactment of the Oil Pollution Act of 1990, in response to the Exxon Valdez accident. The act mandated, among other things, the phaseout of single-hulled tankers and the transition to double-hulled tankers by 2015, in order to reduce the effects of oil spills in the event of accidents. However, only three orders have materialized so far. All three orders were from ARCO for tankers to be built by Avondale, Inc., of Louisiana, and to be delivered between 2000 and 2002. Additionally, a proposed order from BP-Amoco for three tankers to be built by the National Steel and Shipbuilding Company, of San Diego, was deferred indefinitely in October 1998. According to industry officials, factors in the lack of orders to date include falling oil prices in 1998 and their effect on Alaskan North Slope planning and development, as well as the price of new tankers, which in some cases is up to three times higher in U.S. shipyards than in overseas yards.
Despite the lack of tanker demand to date, there could be some demand for new Alaskan North Slope tankers in the next decade, according to shipbuilding and oil company officials. As shown in figure 3.1, under Oil Pollution Act of 1990 requirements, 26 Alaskan North Slope tankers are due to be phased out of the fleet by 2015, including 19 that are to be phased out by the end of 2006. Some of these tankers, but not all, would need to be replaced, assuming that Alaskan North Slope production continues to decline. Oil companies would have replacement alternatives to new U.S.-built tankers, including (1) extending the life of existing tankers by converting the hulls and (2) using existing or new foreign-built tankers for exports. Oil company officials told us that their needs for future U.S. tankers will depend on various oil industry and market factors.

Although introducing foreign-built tankers into the Alaskan North Slope trade to carry exports is an option, oil company officials told us they have no plans to do so in the foreseeable future. Nevertheless, officials in the U.S. shipbuilding industry said they are concerned about losing future Alaskan North Slope tanker orders to overseas shipyards, in part because of exports. They contend that the export option gives oil companies an added incentive to further postpone orders for new U.S.-built tankers. According to these shipbuilding officials, foreign-built tankers to export Alaskan North Slope oil are a possibility within a few years, if not immediately. If so, jobs in U.S. shipyards could be affected. According to company officials, each tanker order postponed or lost to a foreign competitor costs about 1,000 U.S. shipyard jobs for the 18 months it takes to construct a tanker. In addition, postponed tanker orders contribute to the aging of the Alaskan North Slope fleet, with a potential impact on fleet safety. Because no new tankers have entered the fleet since 1987, half of the fleet consists of single-hulled tankers built in the 1970s or before. Even though the oldest tankers have been phased out of service, the phaseout has been so gradual that, on average, the remaining fleet has gotten older. The average age of the fleet has increased since the Oil Pollution Act of 1990 was passed—from about 16 years old in 1990 to 21 years old in 1998.

The ability to export Alaskan North Slope oil has given tanker operators an added incentive to repair tankers overseas rather than on the West Coast because they can reduce costs by combining oil shipments to Asia with less expensive Asian repairs. However, since the export ban was lifted, there has not been a trend toward more overseas repairs. Tankers serving the Alaskan North Slope undergo major, scheduled "drydock" repairs about twice every 5 years at a cost of $1 million to over $10 million each. A drydock repair can take a tanker out of service for several weeks. Exact information on the number of Alaskan North Slope tanker repairs for recent years was unavailable. However, according to data supplied by industry officials, and on the basis of recent fleet size, we estimate that about 10 to 15 such repairs have occurred annually for tankers serving the Alaskan North Slope in recent years. On average, repairs have been decreasing in the 1990s at a rate that is commensurate with the decline in Alaskan North Slope production and fleet size.
Three West Coast repair yards, in California, Oregon, and Washington State, compete with several Asian yards for the Alaskan North Slope tanker repair business. These West Coast yards are situated near Alaskan North Slope shipping lanes and destinations. However, according to industry officials, the U.S. repair yards are at a competitive disadvantage because Asian yards may charge less than half of what a U.S. yard would charge for a comparable tanker repair. Combining an oil shipment to Asia with a less expensive Asian repair allows tanker operators to avoid the extra cost of going without oil cargo to Asia for a repair. Overseas repairs of U.S. ships are subject to U.S. Customs duties of 50 percent of certain repair costs, levied on the vessel operator.

According to U.S. Customs and shipping industry data, overseas repairs of Alaskan North Slope tankers have not increased significantly since the ban was lifted. As shown in figure 3.2, overseas repairs of Alaskan North Slope tankers have averaged between three and four a year, and no significant trend toward more overseas repairs has developed since exports began. Of the nine total overseas repairs since 1996, seven involved the tankers of one oil company that has historically repaired its tankers overseas and has not been an exporter of Alaskan North Slope oil.

Officials of the West Coast tanker repair industry said that their recent experience raised concerns that a trend toward more foreign repairs of Alaskan North Slope tankers could be beginning to develop, with exports as a contributing factor. They cited two foreign repairs of Alaskan North Slope tankers in Asia in 1998. In one of these cases, a tanker that transported crude oil to Korea underwent a scheduled drydock repair in a Korean shipyard before returning to the United States. According to West Coast repair industry officials, this case illustrates how exports may be starting to harm the West Coast ship repair industry. In the other case, the tanker went without cargo to Singapore for a scheduled drydock repair. According to operators involved in the two cases, a major factor in having repairs done overseas was the significantly lower cost in Asian repair yards compared with U.S. West Coast yards, even when U.S. Customs duties are added and even without carrying cargo, as in the latter case. According to West Coast repair industry officials, the two lost repairs represented several million dollars in business and lost employment for over 500 workers a day for each repair.
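The repair-location tradeoff described above is straightforward arithmetic: an Asian repair remains cheaper as long as its cost, plus the 50 percent Customs duty on the dutiable portion, stays below a West Coast quote. The sketch below illustrates this; the duty rate comes from the discussion above, while the dollar figures are invented placeholders.

```python
# Why an Asian drydock repair can undercut a West Coast repair even after
# the 50 percent U.S. Customs duty on certain overseas repair costs. The
# duty rate is from the report; the dollar figures are invented.
CUSTOMS_DUTY_RATE = 0.50

def overseas_total(asian_repair_cost: float, dutiable_share: float = 1.0) -> float:
    """Asian repair cost plus duty on the dutiable portion of that cost."""
    return asian_repair_cost * (1 + CUSTOMS_DUTY_RATE * dutiable_share)

us_yard_cost = 4_000_000     # hypothetical West Coast drydock repair price
asian_yard_cost = 1_800_000  # hypothetical Asian quote, under half the U.S. price
total = overseas_total(asian_yard_cost)
print(f"Overseas total with duty: ${total:,.0f} vs. U.S. yard: ${us_yard_cost:,.0f}")
```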
Pursuant to a legislative requirement, GAO reviewed Alaska and California energy production, focusing on the effects of lifting the export ban on: (1) Alaskan North Slope and California crude oil prices and production; and (2) refiners, consumers, and the oil-shipping industry on the West Coast.

GAO noted that: (1) lifting the export ban raised the relative prices of Alaskan North Slope and comparable California oils by $0.98 to $1.30 per barrel over what they would have been had the ban not been lifted; (2) these price increases have not had an observable effect on Alaskan North Slope and California crude oil production; (3) nevertheless, future oil production should be higher than it would have been because higher crude oil prices have given producers an incentive to produce more oil; (4) according to projections by the Alaska Department of Revenue and to oil industry officials, new oil fields developed in Alaska since the ban was lifted are expected to increase Alaskan North Slope oil production by an average of 115,000 barrels per day for the next two decades; (5) however, it was not possible for GAO to separate the effects of lifting the ban on expected production from the effects of broader oil market changes occurring at the same time; (6) relatively high world oil prices in 1996 and 1997 encouraged oil producers to expand exploration and development activities, while low prices in 1998 caused producers to close wells and reduce development activities; (7) moreover, this expected production increase will not reverse the decade-long decline of Alaska and California oil production, which is expected to continue as aging oil fields become depleted; (8) lifting the export ban increased some refiners' costs but had limited effects on consumers and the oil-shipping industry on the West Coast; (9) while higher prices for Alaskan North Slope and comparable California oil increased the costs of some individual refiners using that oil, it was not possible to determine the extent of cost increases for those refiners or the West Coast market in general; (10) despite higher crude oil prices for some refiners, no observed increases occurred in the prices of three important petroleum products used by consumers on the West Coast—gasoline, diesel, and jet fuel; (11) lifting the ban has also had a minimal effect to date on most oil tanker operators that transport Alaskan North Slope oil, the U.S. shipbuilding industry, and the West Coast ship repair industry; and (12) however, shipbuilding and ship repair industry officials on the West Coast are concerned that Alaskan North Slope oil tanker business may shift in the future to low-cost foreign shipyards.
WHTI implements Section 7209 of the Intelligence Reform and Terrorism Prevention Act of 2004, as amended, which requires DHS, in consultation with State, to develop and implement a plan to require U.S. citizens and other individuals for whom documentation had previously been waived to show a passport or other document, or combination of documents, sufficient to denote identity and citizenship when entering the United States. DHS implemented WHTI documentation requirements at air ports of entry on January 23, 2007, and at land and sea ports of entry on June 1, 2009. The final land and sea rule provides that:

U.S. citizens entering at sea or land POEs must present a valid U.S. passport, U.S. passport card, trusted traveler card, Merchant Mariner Document when traveling on official maritime business, or U.S. military ID when traveling on official orders; and

Mexican nationals applying for admission as temporary visitors for business or pleasure may present a BCC in lieu of a passport to enter the United States when arriving from Mexico at land POEs or when arriving by pleasure vessel or ferry.

State, in cooperation with DHS, is responsible for the development of passport cards and BCCs. The Bureau of Consular Affairs is responsible for the issuance of passport cards and BCCs, and CBP inspects the documents at ports of entry to the United States. On December 31, 2007, State issued a final rule establishing the passport card as a lower-cost alternative to passport books—$45 for a passport card versus $100 for a passport book—for departure from and entry to the United States through land and sea ports of entry between the United States and Mexico, Canada, the Caribbean, and Bermuda. The passport card cannot be used for international air travel. In February 2008, State began accepting applications for passport cards, and in March 2008, it awarded a contract to L-1 Identity Solutions (L-1) for passport card stock, personalization equipment, and related technical services. State began issuing the first generation passport card on July 14, 2008, and the updated second generation passport card in mid-April 2010. The passport card is valid for up to 10 years and is issued only to U.S. nationals, using the same application form and evidence of citizenship or nationality as required for passport books.

On October 1, 2008, State assumed responsibility for the production of BCCs, issuing a redesigned, second generation BCC. All first generation BCCs will expire before October 2018. The design of the second generation BCC is based on the construction and security features of the passport card. State uses the same contract to procure BCC cardstock, and the personalization equipment can be used to personalize both types of cards. The BCC is valid for up to 10 years and is issued only to Mexican citizens. The passport card and second generation BCC use vicinity radio frequency (RF) technology to store and transmit a unique number that can be used by CBP to retrieve information about the cardholder.

As amended, the Intelligence Reform and Terrorism Prevention Act of 2004 required DHS and State to certify that they had met certain criteria prior to implementing WHTI documentation requirements at sea and land borders, including:

NIST certification that the passport card architecture meets or exceeds International Organization for Standardization (ISO) security standards and best practices for protection of personal information;

making the passport card available to U.S. citizens; and

installing the infrastructure to process the passport cards and training employees to use the new technology at ports of entry.
State and DHS certified that they met these conditions on February 24, 2009.

The security of passport cards and BCCs and the ability to prevent and detect their fraudulent use depend on a combination of well-designed security features and inspection procedures that utilize the available security features of the document. A well-designed document has limited utility if inspectors do not inspect the security features to verify the authenticity of the document. In 2007, we reported on the security of passports and visas, including first generation BCCs. In our report, we made several recommendations to State and DHS regarding the planning and design process for their travel documents, ensuring that needed technology is available at ports of entry, and improving training for CBP officers at the ports of entry.

Threats to the security of travel documents include counterfeiting of a complete travel document, construction of a fraudulent document, photo substitution, deletion or alteration of text, removal and substitution of pages, theft of genuine blank documents, and assumed identity by imposters. Features of travel documents are assessed by their capacity to secure a travel document against the following:

Counterfeiting—unauthorized construction or reproduction of a travel document.

Forgery—fraudulent alteration of a travel document, including attacks such as photo substitution and deletion or alteration of text.

Imposters—use of a legitimate travel document by people falsely representing themselves as legitimate document holders.

Most reported passport card and BCC fraud is imposter fraud. In fiscal year 2009, CBP detected 13,530 passport cards and BCCs presented by travelers attempting to enter the United States through all U.S. POEs that were either fraudulent or were valid documents used by imposters (see table 1). Over 90 percent of these documents were genuine documents presented by imposters. The most frequent fraudulent attempts were by imposters attempting to use a legitimate BCC. Fraudulent use of passport cards and second generation BCCs is much lower than that of first generation BCCs, mainly because many fewer have been issued: over 8 million valid first generation BCCs were in circulation, but only about 2.3 million passport cards and 435,000 second generation BCCs had been issued by the end of November 2009.

To combat document fraud, security features are used in a wide variety of documents, including currency, identification documents, and bank checks. Security features are used to prevent or deter fraudulent alteration or counterfeiting of such documents. In some cases, an altered or counterfeit document can be detected because it does not have the look and feel of a genuine document. For instance, in U.S. passport cards and second generation BCCs, detailed designs and figures with specific fonts and colors can often be used by inspectors to identify nongenuine documents. While security features can be assessed by their individual ability to help prevent the fraudulent use of the document, it is more useful to consider the entire document design and how all of the security features combine to help secure the document. Layered security features tend to provide better security by minimizing the risk that the compromise of any individual feature of the document will allow for unfettered fraudulent use of the document.
An individual security feature may provide protection against more than one type of threat, but no feature can protect against them all, and no single feature is 100 percent effective at eliminating a type of threat. Designing secure documents requires the use of a range of security features combined in an appropriate way within the document. The best protection is obtained from a balanced set of features and techniques providing multiple layers of security in the document that combine to deter or defeat fraudulent attack.

The application and issuance process for the passport card is the same as for passports, using the same application form. After an application is successfully adjudicated by passport examiners at State Department passport agencies, the passport card will be produced. State personalizes each passport card by printing the photo, biographical data, and other needed information on the card. The card is then mailed to the traveler. In general, passport cards are personalized at State's Arkansas Passport Center, but the Tucson Passport Center also has the capacity for high-volume personalization of the cards, and most passport agencies have the capability of personalizing limited volumes of cards.

The application and issuance process for the BCC is unchanged for the second generation BCC and is managed through the U.S. consulates in Mexico. After visa officers in Mexico approve an application for a BCC, the BCC will typically be produced at the Tucson Passport Center. Using blank BCC cardstock, State personalizes each BCC by printing the photo, biographical data, and other needed information on the card. The card is then delivered to the appropriate consulate in Mexico for issuance to the traveler. In each case, the cardstock is produced by one of L-1's subcontractors with the background art and some of the security features already incorporated. As will be explained later in this report, some security features are added to the card during the personalization process.

In general, travelers seeking admission to the United States must present themselves and a valid travel document for inspection to a CBP officer. The inspection process requires officers to determine the admissibility of the traveler by questioning the individual and inspecting the presented travel documents. In the first part of the inspection process—primary inspection—CBP officers inspect travelers and their travel documents. The officer can then compare the information on the travel documents with information retrieved from CBP border inspection systems to determine whether the traveler may be admitted or should be referred to secondary inspection for further questioning and document examination. If additional review is necessary, the traveler is referred to secondary inspection—an area away from the primary inspection area—where another officer makes a final determination to admit the traveler or deny admission for reasons such as the presentation of a fraudulent or counterfeit travel document.
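To make the primary inspection flow concrete, the sketch below mimics the sequence described above: the card's vicinity RF chip yields only a unique number, that number keys a lookup in a backend system, and the officer either compares the retrieved record and photo against the traveler or refers the traveler to secondary inspection. Everything in the sketch is hypothetical; it does not depict any actual CBP system, database, or interface.

```python
# Illustrative sketch of the primary-inspection lookup described above.
# The card's RF chip stores only a unique number; all cardholder data
# lives in a backend keyed by that number. Names and data are invented.
from dataclasses import dataclass

@dataclass
class CardholderRecord:
    name: str
    citizenship: str
    photo_ref: str  # pointer to the stored photo displayed to the officer

# Stand-in for the secure backend keyed by the card's unique RF number.
BACKEND = {"0412-7789": CardholderRecord("DOE, JANE", "USA", "photo_0412.jpg")}

def primary_inspection(rf_number: str) -> str:
    record = BACKEND.get(rf_number)
    if record is None:
        # No record for this number: the traveler gets additional review.
        return "refer to secondary inspection"
    # The officer compares the retrieved record and photo against the
    # traveler and the printed card before deciding to admit.
    return f"compare traveler against {record.name} ({record.citizenship})"

print(primary_inspection("0412-7789"))
```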
However, while including all security features recommended by guidance and standards for international travel cards can help ensure the security of passport cards and BCCs, security assessments and testing of the cards are necessary to identify any vulnerabilities and to modify the security features to address them. During its development process, State addressed most of the issues raised and recommendations made during evaluation and testing of the prototype passport card, but it either did not address some of the issues and recommendations or did not fully document its decisions for not doing so. Moreover, State tested and evaluated the security and durability of only prototypes of the passport card, which did not include the personalization printing or background artwork. Without fully evaluating the impact of the issues and recommendations on the security and performance of the cards and without testing and evaluating the final designs for the first and second generation passport card and second generation BCC, State does not have a complete understanding of the cards' overall security and performance. The passport card and second generation BCC generally meet International Civil Aviation Organization and Security and Prosperity Partnership standards, as well as the DHS Policy for Physical Security Features, for international travel documents. These documents provide guidance on security features and data elements to include on travel documents to prevent fraudulent use. Document 9303 on machine-readable travel documents, issued by the International Civil Aviation Organization (ICAO)—the United Nations specialized agency for civil aviation—provides standards for passports and other travel documents that can be used for international travel, including recommended security standards and data elements for travel documents. The recommended security features are divided into two categories: basic security features that are considered essential and additional features recommended for enhanced security. The passport card includes 8 of the approximately 11 ICAO-recommended basic security features, and the BCC includes 7 of the 11. However, the security that would be offered by the missing features is either provided by other security features or would not significantly improve the security of the cards. Both cards contain many of the recommended additional features. Table 2 provides further details about the missing ICAO basic security features and the factors on the cards that mitigate their omission. The ICAO standards also provide data element requirements for the personalization of travel documents. The passport card contains 10 of the 11 required data elements, and the second generation BCC contains 9 of the 11 required data elements. Neither card contains the signature of the cardholder, which does not significantly impact the security of the cards because signatures are easy to forge and thus provide little protection against document fraud. In addition, the second generation BCC lacks a document number on its biographical face, which is both a security feature and a data element. There is, however, a unique inventory control stock number on the back of the card. While the presence of a unique identifier is important, its location does not play a major role in the overall card security.
The Security and Prosperity Partnership (SPP)—an effort among the United States, Canada, and Mexico to develop a common security strategy—developed Recommended Standards for Secure Proof of Status and Nationality Documents to Facilitate Cross-Border Travel, which align with ICAO document 9303 and provide recommended nonbinding minimum standards and, for additional measures of security, best practices for documents used for travel between the United States and Canada. Both the passport card and BCC generally meet SPP recommended standards. Both cards include all 6 of the security features required to meet the minimum standard. The passport card contains all 9 of the data elements required to meet the minimum standard, and the second generation BCC contains 8 of the 9. In addition, the cards include many security features recommended as a best practice. The second generation BCC does not have the document version data element, which indicates to inspectors the version of the document they are inspecting so that they know what the card should look like and what security features it should have. However, this is not a concern because the second generation BCC looks completely different from the first generation BCC. The DHS Screening Coordination Office created the DHS Policy for Physical Security Features as a result of its efforts to identify how DHS can improve its credentialing programs. The policy addresses physical security features that prevent counterfeiting, alteration, and fraud of credentials and provides a minimum standard for physical security features for DHS credentialing programs, including requiring a minimum of two security features. The policy also includes requirements for data elements for travel documents to enable border officers to assess the identity and admissibility of travelers. The passport card and BCC contain all required security features, the passport card contains 10 of the 11 required data elements, and the BCC contains 9 of the 11 required data elements for the travel environment specified in the policy. Neither card contains height information, and the second generation BCC does not include the cardholder's place of birth. Not including these data elements does not significantly affect the security of the cards because the cards contain layers of security to protect against fraudulent use. DHS plans to remove both height and place of birth as minimum requirements in the next version of its policy. The designs of the passport card and BCC contain numerous, layered features that provide protection against fraudulent use (see figs. 1 and 2). For example, the optically variable device (OVD) can help protect against counterfeiting because it is difficult to copy and recreate, and it helps protect against forgery because it overlaps the photograph and biographical data, making it difficult to alter them without causing visible damage to the OVD. In addition, the complex symbolic codes and pseudocodes provide protection against counterfeiting and forgery because they are based on cardholder characteristics and cannot be accurately created for counterfeit cards or altered for forged cards unless the counterfeiter has broken the codes. Laser engraving is used to print the cardholder's image as well as the personalization information, combining flat and tactile printing.
Laser engraving permanently blackens the plastic below the surface of the card to protect against counterfeiting and forgery by making it difficult to alter without causing damage. In meetings with GAO on the security of the final passport card and second generation BCC designs, held after State had begun issuing the cards, FDL officials indicated that they believed the security of the final cards against fraud is adequate. However, they continue to recommend that State use a solid polycarbonate body with laser engraving at or below the layer of background artwork to provide stronger protection against layer separation, photo substitution, and data alteration, as they had recommended when they performed the counterfeit deterrence study on the prototype passport cards during procurement. FDL also recommended to State, based on reviewing an intermediate printing of the passport card, that it add rainbow printing on the front of the card, which would make the card more difficult to copy and counterfeit. Regarding the second generation BCC, which they had not formally assessed, FDL officials suggested using a more easily recognizable, finite design for the background of the BCC, like the eagle on the passport card. It is easier to detect a poor reproduction of a well-known, finite design than of an abstract one, like the butte on the BCC. State officials said that they respond to recommendations based on whether the cost justifies the security benefit gained as well as potential program delays that may result from implementation. They indicated that they did not change to a solid polycarbonate body because there are problems using polycarbonate in the radio frequency identification (RFID) chip layer and it would increase the cost of the cards. In addition, at the time, the card manufacturer thought that the technology for security printing on polycarbonate was too new, and State did not believe that using layers of polycarbonate over layers of polyvinyl chloride posed any significant problems. Since procurement, the technology for laser engraving and printing the background artwork on polycarbonate has improved, but there continue to be technical issues that affect the feasibility of its use. State also does not believe that laser engraving below the layer of the background artwork significantly improves the security of the cards because any attempt to alter the data or photo would visibly damage the card. In addition, State officials believe the recommendation to add rainbow printing on the front of the cards is more a preference than a requirement and are satisfied with having it just on the back of the cards. State officials have indicated that they will consider FDL's suggestion for a finite design for the background of the BCC when they design new documents or redesign the existing ones. At the beginning of the development process for the passport card, State investigated available security technologies and worked with DHS, including CBP and FDL, to determine which physical security technologies and features to require for passport cards. These included laser engraving printers for personalization, tactile element(s) over the photo area, a logo with color shifting ink, and an optically variable device either provided by State or proposed by the vendor. In addition, State, based on input from DHS, included a vicinity-read RFID chip to facilitate faster processing at ports of entry. The RFID chip stores a unique number that references cardholder information in State's issuance databases.
State also determined that the cards must comply with ICAO recommendations for card format official travel documents. These requirements were incorporated into the procurement solicitation issued in May 2007. The source selection and procurement process began when State developed the request for proposal (RFP), which was released in May 2007. The contract for passport cards was awarded to L-1 in March 2008. During the source selection and procurement process for passport cards, prototype passport cards from prospective contractors underwent evaluation and testing related to durability, RFID performance, and security requirements. Sandia National Laboratory (Sandia) evaluated the durability and radio frequency (RF) effectiveness against national and international standards; CBP tested the RFID performance in mock CBP vehicle lanes; and FDL performed counterfeit deterrence studies. State implemented most of the recommendations made and addressed most of the issues raised during evaluation and testing. For example, in response to FDL recommendations, State embedded the OVD below the surface of the card and included microline printing in the background artwork. In addition, State either amended the RFP based on recommendations from the National Institute of Standards and Technology (NIST) or provided a written reason why a recommended change was not made. While State addressed most of the issues raised and recommendations made during evaluation and testing of the prototype passport card, it either did not address some of the issues and recommendations or did not document its reasons for not doing so. For example, State did not assess the risk of not following FDL's recommendation that it submit the final passport card for analysis of the security features (State did not do so because it was in the final stages of procurement when the design was finalized and it wanted to meet schedule) or FDL's recommendation that it add rainbow printing to the front of the card. State also did not assess the potential risk posed by the card's failure to meet peel strength and ultraviolet light exposure test requirements found during Sandia's tests prior to the issuance of the cards. While State officials do not believe that the problems identified by the failed tests will affect the operational use of the cards, they were not able to explain why these failures were not assessed prior to decisions to proceed with card production. Moreover, State assessed, but did not document its reasons for not addressing, FDL's concern that the shallow depth of the laser engraving left the cards susceptible to alteration and FDL's recommendation to use a solid polycarbonate body to mitigate this. State officials believed that the depth of the laser engraving was sufficient and decided against using a solid polycarbonate body based on cost and technical issues. Without performing and documenting a full assessment of recommendations made and problems found during testing and evaluation, including the potential effect of not addressing them on the performance of the card, State does not fully understand the security and durability of the card. After the contract for passport cards was awarded, the contractor manufactured cards according to State's final design; these were made into exemplars—genuine documents used for training purposes.
These cards were inspected for problems with the security features and printing, and any problems found were recorded. Some of the cards were also sent to CBP to test the RFID performance. State indicated that it encountered a small percentage of manufacturing problems and that the cards met CBP RFID performance requirements. The second generation BCC underwent similar inspection of the security features and printing after it was added to the passport card contract and manufacturing began. State designed the background artwork as well as codes that are embedded into both the passport card and BCC during personalization. These codes vary between the passport card and BCC, with the BCC containing more codes with greater depth and complexity because it was produced later, providing State with more time to develop them. The codes are based on the holder's personal information. The simplest codes can be used for document authentication by primary inspectors, and the most complex codes can be used for forensic analysis. While testing and evaluation were performed on prototype passport cards during the source selection process, these activities did not assess security features designed by State, including the background artwork or embedded personalization codes. The focus of the test and evaluation activities was to evaluate offerings from prospective contractors. Security features that were added or changed from the prototype passport cards and incorporated into the final passport card were also not evaluated, and durability testing was not performed on the final design, despite failures encountered during testing. Further, because the second generation BCC was added to the passport card contract, it did not undergo any formal security testing and evaluation activities. No security or durability testing was done on the second generation passport card, which includes changes to the card construction due to the inclusion of a different RFID chip. The background artwork and the security features added during the personalization process are key components of the layered security of the passport card and second generation BCC. However, without tests or evaluations that demonstrate the ability of these features to effectively contribute to the security of the cards, State does not have the needed assurance that its cards have been designed with adequate security. State has completed a redesign of the passport card with the primary purpose of incorporating a new RFID chip that has a unique tag identifier. The use of the unique tag identifier is intended to prevent cloning of the RFID chip. State took the opportunity to incorporate changes to improve the physical security features of the card, including using more robust layers of pseudocodes that bring them to the depth and complexity of those used on the BCC and a more complex OVD. The updated card also contains additional physical security features, including a secondary image of the cardholder, steganography in the primary image, and microprinting in the secondary image. State began issuing the second generation passport card in mid-April 2010. The redesigned card has not undergone formal security or durability testing and evaluation. State officials believe that evaluation activities were not necessary because the appearance of the card is so similar to the one currently issued, the changes improved the security of the card, and they did not consider the durability failures encountered during prototype passport card testing to be significant.
In 2007, we recommended that State periodically reassess the security features when planning the redesign of its travel documents. State agreed with the recommendation and has taken steps to address it. However, there was no assessment of the final passport card or second generation BCC prior to issuance, and there is no plan to formally assess the second generation passport card prior to issuance. Such an assessment could identify potential vulnerabilities in the security of these cards before they could be exploited. There have been no reports of successful fraudulent use of the cards, and the addition of more security features to the passport card, while not in response to any threats or vulnerabilities, should further strengthen the card against fraud. State and FDL inspected counterfeit second generation BCCs that were intercepted and found that none of the security features or personalization codes had been compromised. However, by not following a structured process for assessing the security features of the passport card prior to issuing the second generation passport card, State missed an opportunity to identify and address any potential vulnerabilities in the design's ability to resist fraudulent use. In response to our 2007 recommendation, State created a new position in the Bureau of Consular Affairs—the Forensic Document Design and Integrity Coordinator—responsible for coordinating the efforts of the various State organizations involved in designing and ensuring the security of documents issued by Consular Affairs. Because this position was created in September 2009, the coordinator was not involved in the development process of the first generation passport card or the second generation BCC and was only minimally involved in the development process of the second generation passport card, providing input only to the post-production processes. The inspection of passport cards and BCCs at POEs is a key element in preventing the fraudulent use of these documents. Inspection officers rely on interviews and observations of travelers and the examination and verification of documents using CBP border inspection systems to detect fraud. To aid in the inspection of passport cards and second generation BCCs, CBP deployed RFID readers and new software in vehicle lanes at land ports of entry. However, the limited amount of time officers have to conduct inspections restricts the use of security features on passport cards and BCCs to just a few visual and tactile features. Greater use of biometrics of travelers presenting BCCs could provide additional verification that the BCCs are valid and belong to the travelers presenting the documents, helping to address imposter fraud. Further, while CBP officer training on the passport card and BCC was timely, the provision of exemplars to the ports of entry for training purposes is still lacking. The CBP port director—the official responsible for supervising and directing all work activities at a POE—for the POEs we visited along the Northern border indicated that the POEs there did not have exemplars of either card. Without exemplars available during training, these officers were unable to fully familiarize themselves with the look and feel of the security features in these documents before inspecting them.
CBP officers in primary inspection rely on interviewing and observing travelers, visually and manually examining documents, and accessing cardholder information, such as the traveler's name and photo, in CBP border inspection systems to detect fraudulent passport cards and BCCs. CBP officers observe travelers' demeanor, question them about their travel, and compare travelers with the biographic data and photos on travel documents and in CBP inspection systems to help them detect fraud. Officers inspect only a limited number of security features on travel documents due to time constraints, particularly along the southern land border, where there is high traveler volume through many land border POEs. When inspecting documents, officers look for signs of alteration, compare the photo with the traveler, examine the biographic data, and assess the look and feel of the document to determine whether it is valid. If officers suspect fraud, they can send travelers to secondary inspection for further screening and, in the case of BCC holders, a comparison of traveler fingerprints with those stored in the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT), one of the CBP border inspection systems, to verify their identity. To aid in the inspection of passport cards, second generation BCCs, and other travel documents with vicinity RFID chips, CBP made two related technology deployments to its ports of entry. First, it upgraded the client software to its border inspection systems at vehicle and pedestrian lanes at land border ports of entry. The vehicle primary client software provides a graphical user interface for CBP officers to access U.S. visa and passport information, including the traveler's photograph. State provides the information to CBP border inspection systems from its issuance databases: the Consular Consolidated Database for visas, including BCCs, and the Passport Information Electronic Retrieval System for passports and passport cards. Access to this information allows for better identification of fraudulent photos, biographical data alteration, or counterfeit cards. The vehicle primary client software is operational in most vehicle lanes at all but two land border ports of entry. CBP upgraded the pedestrian client software, which already provided access to visa information, to display passport information. Second, CBP deployed RFID readers in vehicle lanes at land border ports of entry that can read the RFID chips in the passport card, second generation BCC, and other WHTI-approved documents. WHTI has deployed RFID to 420 lanes at the top 46 land border POEs, which handle more than 95 percent of land border traffic. Travelers can hold up their passport card or second generation BCC when entering vehicle lanes at these POEs to allow RFID readers to read the RFID tag in the cards. The RFID system then automatically looks up the traveler's information from CBP border inspection systems and presents it to inspecting officers on the Vehicle Primary Client. CBP has installed signage in RFID reader-equipped vehicle lanes and provides WHTI tear sheets, available in English, Spanish, and French, that instruct cardholders on how to use RFID-enabled documents, including passport cards and BCCs (see fig. 3). In addition, State includes a letter in Spanish with BCCs containing instructions on how to use the cards at POEs.
When a vehicle enters a vehicle lane at a port of entry, the occupants can see signs instructing them on how to hold RFID-enabled documents to allow them to be read (see fig. 4). The RFID reader attempts to read any RFID-enabled documents in the vehicle. The vehicle then approaches the booth, where the CBP officer inspects the occupants' travel documents. If one or more of the documents were not read, whether because of a read failure or because the documents are not RFID-enabled, the CBP officer can read the RFID tags of any RFID-enabled document with an RFID reader at the booth, read the machine readable zone of any valid travel document with a document reader in the booth, or manually look up travelers' information using the data printed on the documents. In pedestrian lanes, a traveler presents his or her travel document to the CBP officer, who can inspect it and look up the traveler's information by either electronically reading the machine readable zone of the travel document with a document reader or manually entering the data printed on the document. The officer can then compare the information on the travel documents with information retrieved from CBP border inspection systems and with the traveler being inspected to determine whether the traveler may be admitted or should be referred to secondary inspection for further questioning and document examination. Officers in primary inspection—the first and most critical opportunity at U.S. ports of entry to identify individuals seeking to enter the United States with fraudulent travel documents—are unable to take full advantage of the security features in passport cards and BCCs due to the limited use of technology in primary inspection. In our prior work examining the inspection of travel documents at POEs, we found that, due to time constraints and the large volume of travelers, primary officers inspect only a limited number of security features on travel documents and only electronically read travel documents to query records in CBP border inspection systems when deemed appropriate for the inspection situation, given the local traffic flow and traveler wait times. CBP officers often rely on a few visual and tactile security features of the passport cards and BCCs—such as raised printing and the embossed seal—in addition to their interviews to identify fraudulent use of the documents. When visiting POEs along the Northern and Southern borders, CBP port directors told us that they are able to authorize less than 100 percent handling of travel documents. The port director of the POEs we visited on the Southern border told us he can authorize less than 100 percent electronic reading or manual lookup of travel documents during times of heavy traffic to mitigate long waits, although this happens only rarely in the POEs we visited on the Northern border. During our visits to POEs on the Northern and Southern borders, we observed 100 percent handling and electronic reading of travel documents. However, in 2008, only about 49 percent of travel documents were machine read in vehicle primary inspections, while in 2009 about 63 percent were read. Part of this increase may be attributed to the decrease in vehicle traffic during that period: according to CBP crossing estimates for vehicle lanes, there was about a 10 percent decrease in vehicle traffic across the border between 2008 and 2009.
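The document lookup just described is essentially a fallback chain. The sketch below restates that sequence as illustrative code; it is not CBP software, and the reader functions, record store, and matching step are hypothetical stand-ins (in practice the officer, not software, makes the comparison and the admissibility decision).

```python
# Illustrative sketch of the vehicle-lane lookup fallback described above.
# Not CBP software: readers, records, and the match step are hypothetical.

ISSUANCE_RECORDS = {
    "REF-0312456": {"name": "DOE, JANE", "photo": "photo_0312456"},
}

def rfid_read(document):
    """Lane or booth RFID read: vicinity chips store only a reference number."""
    return document.get("rfid_reference")

def mrz_read(document):
    """Booth document reader: machine readable zone of the document."""
    return document.get("mrz_number")

def retrieve_record(document):
    # Fallback order per the process above: RFID read in the lane (or retried
    # at the booth), then the machine readable zone, then manual keying of
    # the data printed on the document.
    reference = (rfid_read(document)
                 or mrz_read(document)
                 or document.get("printed_number"))
    return ISSUANCE_RECORDS.get(reference)

def primary_inspection(document, observed_photo):
    record = retrieve_record(document)
    # The officer compares the retrieved photo and data with the traveler;
    # any mismatch or suspicion leads to referral to secondary inspection.
    if record is None or record["photo"] != observed_photo:
        return "refer to secondary inspection"
    return "admit"

print(primary_inspection({"rfid_reference": "REF-0312456"}, "photo_0312456"))  # admit
print(primary_inspection({"printed_number": "REF-9999999"}, "photo_x"))        # refer
```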
In our prior work examining the security of BCCs, we found that DHS was not fully utilizing the biometric features of the BCCs—that is, fingerprint data—and recommended that DHS develop a strategy for better utilizing these features. At the time, we found that only a small percentage of travelers with BCCs were referred to secondary inspection, where their fingerprints can be compared to those in US-VISIT. These checks are usually performed only if a primary officer determines travelers are traveling beyond the geographic limits or exceeding the number of travel days allowed for use of the BCC, or if there are concerns about the traveler. The use of biometric checks of travelers presenting BCCs provides additional verification that the travel documents are valid and belong to the travelers presenting the documents, helping to address imposter fraud—the most significant type of fraud associated with BCCs. In fiscal year 2009, CBP officers intercepted over 12,000 BCCs used by imposters. Even with the second generation BCC, imposter fraud is much more common than fraud cases where the card has been counterfeited or altered. In fiscal year 2009, 170 cases of imposter fraud were detected with the second generation BCC, while only 12 cases of altered or counterfeit second generation BCCs were detected. While the deployment of the Vehicle Primary Client to CBP land POEs provides officers more information on BCC holders, imposter fraud remains a significant risk. In 2008, CBP developed a Mission Need Statement for U.S. Pedestrian Biometric Deployment to provide an additional security check at land border POEs. Under this plan, existing single-print readers in secondary inspection—which scan one fingerprint for comparison with the cardholder's fingerprint information stored in the CBP border inspection systems and which are being replaced with 10-print readers that scan all 10 fingerprints—would be reallocated to pedestrian primary lanes to enable inspecting officers with suspicions about a BCC holder's identity to verify the individual against fingerprint records. As of March 2010, these systems have been deployed to all 136 pedestrian lanes at POEs across the southwest border. However, CBP plans to install them at only select vehicle lanes at remote POEs that have both vehicle and pedestrian lanes. CBP indicated that there are operational challenges to implementing biometric verification at busy POEs, which make secondary inspection the most efficient place to perform it. Previously, we recommended that State and DHS collaborate to provide CBP inspection officers with better training for the inspection of documents issued by State, including training materials that reflect changes to State-issued travel documents and the provision of exemplars prior to issuance. State and DHS agreed with the recommendation and have taken steps to address it. For example, CBP provided training to inspection officers on the passport card and second generation BCC prior to their issuance and provides continuing information to officers on document fraud. This training is done during musters that include materials such as Fraudulent Document Analysis Unit bulletins on document security features and counterfeit documents and exemplars of the documents; as part of other training done by CBP for inspecting officers; through conferences; and through access to online information on the documents.
CBP officials also indicated that they provided exemplars of the passport card and second generation BCC to all POEs to train CBP officers prior to the cards' appearance at the POEs. However, while CBP officials at POEs we visited along the Northern and Southern borders indicated they had received training on the passport card and second generation BCC, officials at POEs along the Northern border indicated that they did not receive exemplars of either card and hence were unable to include them in the training of their officers. In our prior work, we found that the use of alerts and bulletins alone does not provide officers with an understanding of the look and feel of the actual documents. While State and DHS have taken positive steps in response to our recommendation to improve their training of officers on travel documents, the lack of exemplars at the POEs along the Northern border indicates that improvements are still needed. As State continues to update its travel documents, we continue to believe that State and DHS need to fully implement our prior recommendation to improve training of officers on new documents prior to their issuance, which includes the provision of exemplars so that they can be used during training to better familiarize officers with the look and feel of the cards. Ensuring the integrity of passport cards and BCCs is an essential part of border security, requiring continual vigilance to facilitate the travel of those entitled to enter the United States and prevent the entry of those who are not. Preventing the fraudulent use of travel documents requires a combination of well-designed documents with layered security features and an inspection process that utilizes these security features. A well-designed document has limited utility if inspection officers do not utilize the available security features to detect attempts to falsely enter the United States. Although designs for the passport card and the second generation BCC generally meet or exceed standards and guidelines for international travel documents, inclusion of all security features recommended by guidance and standards for international travel documents does not guarantee that the security features are of sufficient quality and are designed to ensure the overall security of the cards. State's development process could be improved to better assess the security of its cards and to fully address problems and issues found during the testing and evaluation of its cards, which could provide greater assurance that State has secure, well-performing documents. We have previously recommended that State periodically assess the security features when redesigning its travel documents. It did not do so when redesigning the passport card. By conducting such an assessment, State potentially could have identified and addressed any vulnerabilities in the passport card's design that reduce its ability to resist fraudulent use. State has taken actions to conduct such assessments in future redesigns, which, if effectively implemented, should better position State to identify vulnerabilities in its travel documents' abilities to resist fraud before they can be exploited. Security assessments and testing can provide the added assurance that the cards meet security requirements. However, State did not fully assess or test the security features incorporated on the passport card or the second generation BCC.
Although State performed testing and evaluation on prototype passport cards, it did not test and evaluate the final designs for the passport card or second generation BCC, nor did it test and evaluate its recent redesign of the passport card. Further, while State addressed most problems found during its testing, it either did not address some of the issues and recommendations or did not fully document its decisions for not doing so. More fully testing the passport card and BCC and addressing identified problems would provide State with a fuller understanding of the overall security and performance of the cards and greater assurance that its cards have been produced with adequate security. CBP officers at many U.S. ports of entry face time constraints in processing large volumes of people and therefore rely on a few visual and tactile security features of passport cards and BCCs—such as raised printing and the tactile Great Seal—in addition to their interviews, to identify fraudulent use of these documents. To assist officers in the inspection of passport cards and BCCs, CBP deployed systems to its POEs that enable the reading of the RFID chips in the cards and display information about the cardholders to the officers during inspection. Further, CBP has deployed fingerprint readers in primary inspection in some of its pedestrian lanes, which could help officers identify imposters fraudulently using BCCs. State and DHS have taken steps in response to our prior recommendation to improve their training of officers on travel documents. However, the conduct of training without passport card or BCC exemplars at the POEs we visited along the Northern border indicates that improvements are still needed. As State continues to update its travel documents, we continue to believe that State and DHS need to fully implement our prior recommendation to improve training of officers on new documents prior to their issuance, which includes the provision of exemplars so that they can be used during training to better familiarize officers with the look and feel of the cards. To ensure the designs for the passport card and BCC physical security features adequately mitigate the risk of fraudulent use, we recommend that the Secretary of State take the following two actions to improve the development process when conducting future redesigns or updates to the passport card or BCC:

Fully address any issues or problems encountered during testing, including documenting the reasons for not addressing any of them.

Fully test or evaluate the security features on the cards as they will be issued, including any significant changes made to the cards' physical construction, security features, or appearance during the development process.

We provided draft copies of this report to the Secretaries of State and Homeland Security for review and comment. We received written comments from State and DHS, which are reprinted in appendices II and III, respectively. We also received technical comments from State and DHS, which we incorporated into the report, as appropriate. In its comments, State concurred with our recommendations and described actions it is taking to address them. State acknowledges the importance of addressing and documenting issues encountered during testing and agrees that complete testing should be performed on cards whenever significant changes to the physical construction and security features are made.
In its comments, DHS concurred with our finding that sufficient exemplars of new documents should be available for training officers prior to new document issuance. However, DHS commented that, while the report addresses the importance and rate of physically handling travel documents, handling the passport card and BCC is not necessarily the most efficient means of verifying their validity, and the cards can be verified without handling by utilizing RFID technology, the Vehicle Primary Client, and other primary systems. We agree that the ability to access cardholder information automatically for the passport card and BCC can help confirm the validity of the cards. Nevertheless, primary inspection is the first and most critical opportunity to detect fraudulent travel documents, and combating this fraud requires inspecting the physical security features as well as using electronic systems. Both State and DHS's FDL have indicated that physical inspection of the documents is an important part of document verification. DHS also commented that, while the use of biometric verification can help identify imposters, operational challenges at busy ports of entry make secondary inspection, where it is currently available, the most efficient location to perform biometric verification. We agree that the use of biometric verification in secondary inspection and in pedestrian lanes enables inspectors to use fingerprint biometrics to verify the identity of the cardholder. However, at vehicle lanes in land border POEs this capability is not available in primary inspection. Furthermore, travelers with BCCs at southern land border ports—the ports where BCC imposter fraud is most significant—are not routinely referred to secondary inspection, where the capability to utilize fingerprint records for comparison does exist; thus, inspectors are not making full use of the biometric information available for BCCs. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies to interested congressional committees and the Secretaries of State and Homeland Security. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4499 or [email protected]. Contributors to this report include Richard Hung and Maria Stattel. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. To determine how effectively State's development process for the passport card and second generation BCC mitigates the risk of fraudulent use, we interviewed officials from State's Bureau of Consular Affairs, U.S. Customs and Border Protection (CBP), and the Forensic Document Laboratory (FDL) in DHS's U.S. Immigration and Customs Enforcement (ICE). We identified applicable standards and guidelines for international travel cards. We interviewed State and DHS officials on the designs for the security features of the passport card and BCC and assessed them against the applicable standards and guidelines that we identified, including standards and guidelines from DHS, the International Civil Aviation Organization (ICAO), and the Security and Prosperity Partnership (SPP).
We also reviewed the results of testing and evaluation of the prototype passport cards and how State and DHS used these results, because including all security features recommended by guidance and standards for international travel documents does not guarantee that the security features are of sufficient quality and are designed to work together well enough to ensure the overall security of the cards. Testing and evaluation were conducted by the National Institute of Standards and Technology (NIST), FDL, CBP, the Bank of Denmark, and Sandia National Laboratory. Finally, we interviewed officials at the Tucson Passport Center to understand and observe how second generation BCCs are personalized. To determine how CBP officers use the security features of passport cards and second generation BCCs to prevent fraudulent use at land ports of entry, we interviewed officials from CBP and reviewed CBP policies, procedures, guidance, and training documents regarding the inspection of travelers presenting passport cards and second generation BCCs for the purpose of entry to the United States, including the use of the cards' physical security features and cardholder information retrieved from CBP border inspection systems. We conducted site visits to two POEs along the Southern border and three POEs along the Northern border to interview CBP officials about training and inspection procedures, as well as observe the inspection process of travel documents to understand how CBP officers use the physical security features and DHS database information to verify the eligibility of a traveler presenting a passport card or BCC to enter the United States. To assist in selecting these locations, we devised the following selection criteria:

RFID Reader in Primary Inspection – First, we identified the 41 POEs where CBP planned to install RF readers by June 30, 2009.

Volume of Passport Cards and Border Crossing Cards – We considered POEs inspecting higher volumes of passport cards and BCCs than other POEs.

Nearby Ports without RFID Readers – We considered POEs that had nearby POEs without RFID readers within a 2-hour drive for northern POEs and a 3-hour drive for southern POEs.

Geographic Location – We considered geographic locations, ensuring that we include one POE along the border with Mexico and one along the border with Canada.

Pedestrian Crossing – We considered POEs on the southern border that had pedestrian crossings, as well as vehicle crossings.

In determining potential locations to visit, we considered all of the criteria categories together in making our selections. While the information gathered during these site visits is not generalizable across all land POEs, the visits did provide insight into the inspection policies and procedures, as well as CBP officer training, for passport cards and second generation BCCs. We conducted this performance audit from January 2009 to June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In July 2008, the Department of State (State) began issuing passport cards as a lower-cost alternative to passports for U.S. citizens to meet Western Hemisphere Travel Initiative requirements.
In October 2008, State began issuing the second generation border crossing card (BCC) based on the architecture of the passport card. GAO was asked to examine the effectiveness of the physical and electronic security features of the passport card and second generation BCC. This report addresses (1) how effectively State's development process, including testing and evaluation, for the passport card and second generation BCC mitigates the risk of fraudulent use and (2) how U.S. Customs and Border Protection (CBP) officers are using the cards' security features to prevent fraudulent use at land ports of entry. To conduct this work, GAO evaluated the security features of passport cards and second generation BCCs against international standards and guidance and results from testing and evaluation and observed the inspection of these cards at five land ports of entry (POE). State developed a passport card and second generation BCC that generally meet standards and guidance for international travel documents and include numerous, layered security features that, according to document security experts in the Department of Homeland Security, provide adequate security against fraudulent use. While following standards and guidance helps to ensure the security of these documents, State's development process could be improved. State addressed most problems identified during evaluation and testing; however, it did not address some of the resulting issues and recommendations or did not document its reasons for not doing so. In addition, State tested and evaluated the security of only prototypes of the passport card, which did not include key features such as the background artwork, personalization features, and other security features that were added or changed for the final passport card. Moreover, State did not test the security of the second generation BCC or the updated passport card expected to be issued in the second quarter of 2010. Fully testing the passport card and BCC and addressing identified problems would provide State a more complete understanding of the overall security and performance of its cards and a greater assurance that its cards are adequately secure. CBP officers in primary inspection—the first and most critical opportunity to identify individuals seeking to enter the United States with fraudulent travel documents—use a variety of methods to identify fraudulent documents, but are unable to take full advantage of the security features in passport cards and BCCs because of time constraints, limited use of technology in primary inspection, and the lack of sample documents for training. While CBP has deployed technology tools for primary inspectors to use when inspecting passport cards and BCCs, it could still make better use of fingerprint data to mitigate the risk of imposter fraud with BCCs, the most common type of fraud. In addition, although CBP provided training on security features of the passport card and second generation BCC to inspecting officers prior to their issuance, the conduct of training without sample passport cards or second generation BCCs at the Vermont POEs visited by GAO indicates that improvements are still needed. State and DHS need to fully implement GAO's prior recommendation to improve training on new documents prior to their issuance, including the provision of exemplars to be used during training to better familiarize officers with the look and feel of the actual documents.
GAO recommends that State fully address any problems found during testing and evaluation, including documenting the reasons for not addressing any of them, and test and evaluate the security features on the cards as they will be issued. State agreed with the recommendations.
For over 35 years, Medicaid has operated as a joint federal-state entitlement program to finance health care coverage for certain categories of low-income individuals. Medicaid eligibility is based in part on a family's income in relation to the federal poverty level. Federal law requires states to extend Medicaid eligibility to children aged 5 and under if their family income is at or below 133 percent of the federal poverty level and to children aged 6 to 16 in families with incomes at or below the federal poverty level. At their discretion, most states have set income eligibility thresholds that expand their Medicaid programs beyond the minimum federal statutory levels. For most populations, state Medicaid programs must offer certain benefits, such as physician services, inpatient and outpatient hospital services, and nursing facility and home health services. In addition to the benefits that are federally mandated, states may offer optional services, such as dental, physical and occupational therapy, prescription drugs, and case management services. For most children, states must provide Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) services. These services are intended to provide comprehensive, periodic evaluations of health and developmental history, as well as vision, hearing, and dental screening services, to most Medicaid-eligible children. States are required to cover any service or item that is necessary to correct or ameliorate a condition detected through an EPSDT screening, regardless of whether the service is otherwise covered under a state's Medicaid program. Across the nation, 48 states and the District of Columbia have Medicaid managed care programs, which require approval from CMS. These managed care programs can be targeted to specific geographic areas within a state or can be statewide. As of June 2000, 36 states and the District of Columbia had Medicaid mandatory managed care programs. In such programs, certain beneficiaries may choose among at least two capitated managed care plans, and states pay prospectively for each enrolled beneficiary on a PMPM basis. As a part of their managed care programs, states can provide beneficiaries a FFS-based alternative, such as primary care case management (PCCM). Under PCCM, primary care providers are paid a nominal fee to manage the care of beneficiaries, and all services received are paid on a FFS basis. The remaining 12 states have managed care programs that are voluntary for beneficiaries, offer a FFS-based alternative (such as PCCM), or both. The Congress created the State Children's Health Insurance Program (SCHIP) in 1997 as a means of providing health benefits coverage to children living in families whose incomes exceed the eligibility limits for Medicaid. Although SCHIP is generally targeted to families with incomes at or below 200 percent of the federal poverty level, each state may set its own income eligibility limits, within certain guidelines. Using the flexibility built into the statute, states' upper income eligibility limits for SCHIP ranged from 133 percent to 350 percent of the federal poverty level for separate SCHIP programs as of October 2000. States have three options in designing SCHIP: they may expand their Medicaid programs, develop a separate child health program that functions independently of the Medicaid program, or do a combination of both.
Fifteen states and the District of Columbia have created Medicaid expansion programs, 16 states have separate child health programs, and 19 states have a combination Medicaid expansion and separate child health component. (See app. II for a summary of states' SCHIP design choices and app. III for states' income eligibility levels in SCHIP and Medicaid.) While Medicaid expansion programs under SCHIP must use Medicaid's enrollment structures, benefit packages, and provider networks, SCHIP separate child health programs may depart from Medicaid requirements, particularly with regard to benefits and the plans, providers, and delivery systems available to enrollees. SCHIP separate child health programs generally cover basic benefits, such as physician services, inpatient and outpatient hospital services, and laboratory and radiological services, and may provide other benefits at the state's discretion, such as prescription drugs and hearing, mental health, dental, and vision services. In contrast to Medicaid, SCHIP does not require that beneficiaries have freedom to choose among providers or plans and permits states to implement mandatory managed care; thus, states may place SCHIP enrollees in a single managed care plan without an alternative. Medicaid and SCHIP separate child health programs may differ in other respects, particularly in terms of their application requirements, eligibility determination processes, cost-sharing requirements, and periods of eligibility. Some of these differences are based in federal statute, while others are the result of federal regulations. For example, federal law has been interpreted to require that public employees determine Medicaid eligibility, while SCHIP contains no such requirement; consequently, states are currently permitted to use private contractors to determine SCHIP eligibility. Also, while federal Medicaid regulations generally do not permit cost-sharing for children, the SCHIP statute allows states to require beneficiary cost-sharing, which some states have implemented as a way to mirror private insurance and encourage appropriate use of services. (See table 1.) Medicaid and SCHIP also differ in terms of the proportion of their program expenditures that come from federal funds and in whether eligible individuals are considered entitled to the program benefits and services. State expenditures for Medicaid are matched by the federal government using a formula that results in federal shares ranging from 50 to 77 percent of expenditures, depending on a state's per capita income in relationship to the national average. The national average federal share of Medicaid expenditures is about 57 percent. The SCHIP statute provides for an "enhanced" federal matching rate, with each state's SCHIP rate exceeding its Medicaid rate. Federal shares of SCHIP expenditures range from 65 to 84 percent, with the national average federal share equaling about 72 percent. In the Medicaid program, all eligible individuals are entitled to program benefits. No overall federal budget limit exists for the Medicaid program. In contrast, for SCHIP, federal matching for each state is limited. The Congress appropriated $40 billion over 10 years (from fiscal year 1998 to 2007), with a specified amount allocated annually to each of the 50 states, the District of Columbia, Puerto Rico, and the U.S. territories.
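The two matching-rate ranges are linked by the SCHIP statute's enhanced-match formula, under which a state's SCHIP rate equals its regular Medicaid matching rate plus 30 percent of the difference between that rate and 100 percent, subject to a statutory cap. As a worked check against the figures above (a sketch of the arithmetic, not an official computation), with F the regular Medicaid federal share and E the enhanced SCHIP share, both in percentage points:

```latex
% Enhanced SCHIP match as a function of the regular Medicaid match
\[
E = F + 0.3\,(100 - F)
\]
\[
F = 50 \;\Rightarrow\; E = 50 + 0.3(50) = 65, \qquad
F = 77 \;\Rightarrow\; E = 77 + 0.3(23) = 83.9 \approx 84
\]
```

Both endpoints agree with the 65 to 84 percent range of federal shares cited above.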
States’ choices to operate a Medicaid expansion or separate child health program determine whether eligible individuals are entitled to receive the benefits and services offered. States opting for a Medicaid expansion under SCHIP must provide Medicaid benefits to all eligible children. The state must continue to serve those children even if its allocated federal funds are exhausted. In contrast, SCHIP separate child health programs are not entitlements to coverage or services; once federal funds are exhausted, states have the option to discontinue providing services or cover the services with other funds. Both statutory and regulatory requirements for coordination between Medicaid and SCHIP exist at the federal level. The SCHIP statute requires the program to coordinate with Medicaid, including first screening SCHIP applicants for Medicaid eligibility. On the basis of this initial screen, applications (which in most states are the joint Medicaid/SCHIP applications) are directed to either Medicaid or SCHIP, where each program is responsible for final eligibility determination and enrollment. (See fig. 1.) In addition, as of August 24, 2001, SCHIP regulations also require that state Medicaid agencies adopt a process that facilitates enrollment in a state child health program when a child is determined ineligible for Medicaid. In part because Medicaid and SCHIP eligibility represent a continuum of income levels, coordination between the programs is important. Several states have found that many families applying for SCHIP actually have incomes that qualify them for Medicaid. In addition, families may need to apply to both Medicaid and SCHIP to obtain health care coverage for all of their children because Medicaid eligibility standards can vary according to the age of the child. Table 2 illustrates for two states (Florida and Vermont), how income eligibility can—but does not always—vary by age. (App. III shows the eligibility standards for SCHIP and Medicaid in the 35 states with SCHIP separate child health programs.) Differences in Medicaid and SCHIP enrollment requirements—particularly application requirements and eligibility determination practices—can affect beneficiaries’ ability to obtain and keep coverage. To help simplify the process for applicants, 8 of the 10 states we reviewed used joint applications that had similar—but not always identical—requirements for Medicaid and SCHIP applicants. When application requirements differed, Medicaid applicants had to provide additional information or documentation. The extent and effectiveness of coordination between the programs affected applicants’ ability to obtain coverage because joint applications often were transferred between the Medicaid and SCHIP offices to ensure that applicants were enrolled in the appropriate program. Poor coordination meant that applications that were transferred or incomplete risked being delayed or denied. In two of the four states we reviewed, Medicaid applications generally took more time to process. However, different processing times could not be attributed solely to lack of coordination efforts because other factors may affect processing times as well. Once enrolled, Medicaid and SCHIP families faced different requirements for maintaining coverage, such as a more complex redetermination process for Medicaid, and premium and annual fee requirements in SCHIP. 
Joint Medicaid/SCHIP applications are used widely—31 of the 35 states with SCHIP separate child health programs (including 8 of the 10 states we reviewed) have them. In most states, joint applications are the primary method for applying for SCHIP; however, families applying for Medicaid and other public programs may be required to use a separate, different application form. Joint application forms have helped simplify application and eligibility determination for both programs. When an applicant is found ineligible for one program, the joint form can minimize or eliminate the follow-up needed to determine eligibility for the other program. While the 10 states we reviewed generally had similar information and documentation requirements for both programs, some differences remained with regard to income deductions, asset information, and interview requirements. On income reporting, 9 of the 10 states we reviewed established identical requirements for both Medicaid and SCHIP. Some of the 10 states in our sample have taken other steps to make application requirements consistent between the programs. For example, most of the states we reviewed did not ask for information about assets or require the applicant to complete an interview. California eliminated its former requirement for an in-person interview as part of the Medicaid application process and allowed Medicaid applications to be mailed in like SCHIP applications. (See table 3.)

Documenting income and income-related information has been cited as a barrier to program eligibility—but also as a means of ensuring that only eligible individuals are enrolled in the appropriate program. In particular, the need to offer documentation, such as pay stubs or proof of child care expenses, can be problematic for families. For instance, families that do not receive regular paychecks can have difficulty showing several months of pay stubs. Similarly, child care expenses can be difficult to document, particularly if an individual pays in cash or with a money order. Seven of the 10 states we reviewed were consistent in requiring applicants to document their income for both programs. However, individuals in four states could report income deduction information for both programs, such as child support or day care expenses, without supplying proof. Only one state required both Medicaid and SCHIP applicants to document income deductions, while three states required documentation for Medicaid applicants but not for SCHIP. Of the 10 states we reviewed, 2 states—Florida and Michigan—had no income documentation requirements for Medicaid or SCHIP. Medicaid and SCHIP officials in Michigan told us that they eliminated documentation requirements because the requirements were a barrier to application and enrollment; before the change, 75 percent of the applications received were incomplete because individuals failed to provide adequate documentation. After Michigan eliminated income documentation for both programs, the proportion of incomplete applications received dropped to below 20 percent.

While application requirements for both Medicaid and SCHIP in the 10 states we reviewed were generally similar, they were not always identical. Where differences existed, Medicaid required more information or documentation, particularly with regard to income deductions, assets, or the need to participate in an in-person interview.
For example, Colorado required applicants to report income deductions and assets for Medicaid but not for SCHIP, and New York required in-person interviews and proof of income deductions for Medicaid applicants but not for SCHIP. (See table 3.) New York's Medicaid interview requirement was part of its facilitated enrollment strategy intended to assist applicants in completing the enrollment process. This strategy uses community-based organizations (such as hospitals, clinics, schools, and libraries) as sites where such interviews can be conducted. SCHIP applicants can also use the facilitated enrollment process for assistance in applying for the program, but they are not subject to the in-person interview requirement.

The states we visited had various strategies for addressing the differences between Medicaid and SCHIP requirements on their joint applications. In California, application questions that were Medicaid-specific—such as the need for a Social Security number—were clearly marked as not required for a SCHIP applicant. Colorado joint applications, on the other hand, asked for information without differentiating between items required by one program versus another. For example, its joint application asked all applicants for information about assets, which was required only for Medicaid, to lessen the need for additional information if an applicant appeared Medicaid-eligible. Both policies have implications for applicants: either the applicant submits information that may not be necessary or risks having to provide additional information later, which could prolong the approval process.

Delayed or denied coverage often was associated with a lack of coordination between the programs and other processing issues. In particular, applications risked delay or denial when they were transferred between the Medicaid and SCHIP programs or when they were deemed incomplete. The amount of risk depended on how closely the programs coordinated. Generally, states that had identical Medicaid and SCHIP application requirements and that maintained geographically close or colocated eligibility determination offices for both programs reduced the risk of delayed or denied coverage. However, different application requirements for Medicaid and SCHIP, as well as poor coordination between the programs, could delay coverage for families. In two of the four states we visited where we could obtain comparable data, Medicaid applications took longer to process than SCHIP applications; however, longer processing times could be due to a variety of factors besides differences in application requirements and insufficient coordination.

Increased coordination between Medicaid and SCHIP was important in part because joint applications were often transferred between programs. Application and eligibility determination processes for Medicaid and SCHIP include an initial eligibility screen for Medicaid and a final eligibility determination in the appropriate program. Across the four states we visited, the initial eligibility screening generally took place when an applicant submitted a joint application to a SCHIP processing location. SCHIP eligibility determination officials were responsible for performing the initial screen; applications deemed potentially Medicaid-eligible were typically sent to the Medicaid office in the county where the applicant resided, while those deemed potentially SCHIP-eligible remained at the SCHIP office for final eligibility determination.
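In pseudocode terms, the initial screen and routing just described amount to a simple rule. The sketch below uses illustrative income limits; actual Medicaid limits vary by state and by a child's age, and the thresholds and names here are hypothetical.

    # Sketch of the initial Medicaid screen applied to a joint application.
    # Income limits (percent of the federal poverty level) are illustrative.
    MEDICAID_LIMITS_PCT_FPL = {range(0, 6): 133, range(6, 19): 100}

    def route_application(child_age, family_income_pct_fpl):
        """Screen a child on a joint application for Medicaid first; route
        the application to the program making the final determination."""
        for ages, limit in MEDICAID_LIMITS_PCT_FPL.items():
            if child_age in ages and family_income_pct_fpl <= limit:
                return "transfer to county Medicaid office"
        return "retain at SCHIP processing location"

    # A family at 125 percent of the federal poverty level could see its
    # children routed to different programs, as discussed in this report:
    print(route_application(2, 125))   # potentially Medicaid-eligible
    print(route_application(7, 125))   # potentially SCHIP-eligible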
In the four states we visited, the proportion of joint applications transferred between the programs was substantial. For example, Michigan officials reported that one-half of the applications submitted to SCHIP were determined to be potentially Medicaid-eligible and were forwarded to Medicaid, and California SCHIP officials estimated that about 30 percent of the applications received by mail were eligible for Medicaid and thus required transfer. Applications could also flow in the opposite direction. For example, SCHIP application processing sites for Colorado and Michigan each reported that about 20 percent of their applications were transferred from county offices that determine Medicaid eligibility to the SCHIP processing location. Colorado officials estimated that, although average times were not available, such transfers could take anywhere from 2 weeks to 6 months. Application transfers took less time if the program offices were geographically close or colocated. For example, Michigan established a state-operated Medicaid eligibility determination office in the same building as the SCHIP enrollment contractor responsible for processing joint applications. At this SCHIP processing center, joint applications that appeared Medicaid-eligible were to be transferred immediately to this Medicaid office instead of being sent to various county Medicaid offices for eligibility determination.

When joint applications ask for different information for Medicaid and SCHIP, applications transferred between programs can be considered incomplete, which will delay processing until the needed information is supplied. For instance, if required Medicaid information, such as a Social Security number, is missing from a joint application, Medicaid processing can be delayed because the application is incomplete. Incomplete applications, whatever the reason, increased the likelihood of follow-up and often prolonged the eligibility determination process. For example, community assistance workers in California told us that families who were required to supply additional information or documentation did not always return to complete the application process, and many applications were ultimately denied because they remained incomplete. California officials noted that it is unknown whether these families were deterred by the requirements or whether they did not follow through because they believed they were not eligible for the program.

While incomplete information on applications resulted in some denials, states varied in the extent to which they could provide data on denials. For example, California and Colorado were able to provide data on SCHIP denials that resulted from incomplete information. California indicated that 27 percent of applications received were denied; of these, almost half were denied because of incomplete information. In 2000, Colorado reported that 31 percent of all applications received were denied because they were incomplete. Beginning in January 2001, however, the state changed its application, which an official told us reduced the percentage of denials due to incomplete applications to about 24 percent. In contrast, Michigan indicated that less than 3 percent of applications were denied for incomplete information. A Michigan official attributed this low denial rate to the state's policy of minimizing the amount of required documentation for both Medicaid and SCHIP, which has reduced the number of applications that are incomplete and require follow-up.
Officials gave us examples of poor coordination between the programs that resulted in delayed coverage or inconvenience to families. In California, application assistants reported that SCHIP coverage could be denied if the family had not been promptly taken off Medicaid's rolls after becoming ineligible. For example, when a Medicaid family's income rose enough to make the family ineligible for Medicaid but eligible for SCHIP, as long as the family was still recorded as enrolled in Medicaid, its SCHIP application would be denied. Other difficulties could occur if program eligibility information was not provided to the family. For example, some Colorado families that were denied Medicaid were not informed that their applications had been sent to SCHIP and only discovered they were eligible for SCHIP when they received a notice that SCHIP premiums were due.

Michigan has made efforts to improve coordination between the programs by avoiding repeated transfers of the same application, which occurred when Medicaid and SCHIP eligibility workers disagreed on an applicant's eligibility. To address this, the state developed a policy in which Medicaid and SCHIP eligibility workers accept each other's calculations for purposes of determining program eligibility. To ensure that only eligible individuals are enrolled in the appropriate program, the state checks applications for calculation errors. If any problems consistently occur with workers from either program, the state conducts eligibility worker training to minimize the incidence of errors.

Differences in application requirements and processes could affect how long it took children to obtain coverage in the two programs. However, only SCHIP offices were able to provide information on application and eligibility determination processing times in all four states; for Medicaid in these states, comparable processing times were available only in Colorado and Michigan. In these two states, Medicaid application and eligibility determination processing generally took longer than SCHIP processing. For example, Colorado reported statewide average processing times that were longer for Medicaid (38 days) than for SCHIP (14 days for a completed application and 30 days for those requiring follow-up). Michigan reported that average processing times were 19 days for Medicaid and 8 days for SCHIP. While poor coordination could contribute to differences in processing times, other factors can also lengthen Medicaid's average processing times. For example, the Medicaid-eligible population includes adults and individuals with special needs in addition to children, which can affect how quickly applications are processed.

States may allow families to receive covered services while applications are being processed by adopting a presumptive eligibility policy, an option available to states under both programs. Presumptive eligibility allows a child to receive coverage immediately while eligibility determination is in process. Nationally, however, few states have opted for presumptive eligibility in their Medicaid and SCHIP programs. As of July 2000, five states (Massachusetts, Nebraska, New Hampshire, New Jersey, and New Mexico) had adopted and implemented presumptive eligibility in their Medicaid programs, while three states (Massachusetts, New Jersey, and New York) had done so for SCHIP.
A Michigan official told us that although the state has allowed health plans to adopt presumptive eligibility, none of the plans had done so as of May 2001.

Once enrolled, Medicaid and SCHIP families faced different requirements for maintaining coverage. SCHIP children were generally guaranteed a longer period of eligibility regardless of changes in income or family size, while Medicaid children could lose coverage sooner because of requirements to report such changes. Also, Medicaid enrollees faced a more complex redetermination process than SCHIP children did. In contrast, SCHIP children risked losing coverage because of their families' failure to pay required premiums or enrollment fees, while Medicaid generally did not have such cost-sharing requirements. The four states we visited generally required redetermination of eligibility after 12 months for both programs. To maintain coverage during the eligibility period, two states—Michigan and Colorado—required Medicaid families to report any significant changes, such as in income or family status, and the families could lose coverage if changes made them ineligible. In contrast, SCHIP families in these two states had "continuous eligibility," meaning they remained covered for the full 12 months regardless of changes in income or family status. New York did the opposite: Medicaid families had continuous eligibility, while SCHIP families did not.

At the end of the coverage period, the programs redetermine enrollees' eligibility for coverage. Medicaid families in California, Michigan, and New York faced a more complex redetermination process than SCHIP families. For example, to begin Medicaid redetermination, Michigan mailed families a new Medicaid application, but it was a 10-page form, not the 4-page joint application. In contrast, the state's SCHIP beneficiaries were mailed a summary of the information on their last application and asked to update information that had changed. In New York, families completed redetermination forms for both Medicaid and SCHIP, but Medicaid again required an in-person interview. In contrast, Medicaid redetermination in Colorado may be less burdensome than SCHIP redetermination, depending on the information the state is able to collect before contacting the family. The state first searches other program files, such as those for Food Stamps and Temporary Assistance for Needy Families (TANF), to determine whether it already has the necessary application information. If the state does not find the information with this process, it sends families a redetermination form that has essentially the same information requirements as the joint application. For SCHIP redetermination, families must submit another joint application.

While Medicaid, under federal law, generally does not allow premiums or fees for children under age 18, the SCHIP legislation permits states to require limited cost-sharing. SCHIP families in the four states we visited faced varying degrees of risk of losing coverage for failure to pay monthly premiums or annual enrollment fees. The percentage of children who lost SCHIP coverage because of their families' failure to pay premiums ranged from 0 percent in Colorado to 10 percent in Michigan. (See table 4.)

Differences in the plans and physicians that participate in Medicaid and SCHIP, and in the payments the programs make to these plans and physicians, have implications for beneficiaries' choices and access to care.
In the 10 states we reviewed, SCHIP often required enrollees to join a managed care plan and sometimes did not provide a choice of plans. In contrast, Medicaid beneficiaries had a choice of at least two capitated plans in locations offering managed care or could receive care on a fee-for-service (FFS) basis, including through primary care case management (PCCM). However, having such choices did not necessarily mean greater access to providers. For example, FFS options do not necessarily provide greater access to physicians than managed care plans do, since physicians may choose to limit their participation or not participate in Medicaid. Similarly, one program may have a number of smaller plans, while larger plans with more extensive provider networks may not participate in the program.

Payment disparities between Medicaid and SCHIP also had the potential to affect access to care. In two states where comparable data were available, Medicaid FFS payments to physicians for children's preventive services were lower than the rates physicians were paid for the same services in SCHIP. We also compared Medicaid and SCHIP physician fees with those of Medicare and found Medicaid fees consistently lower in all four states, while our comparison of SCHIP and Medicare fees showed a less consistent relationship. Comparisons of capitation rates were difficult because of differences in the benefits included within these rates. In one state with comparable benefits covered by the capitation rate, SCHIP paid more than Medicaid. In the remaining three states, capitation rate comparisons were not feasible because of differences in the benefits or populations covered, or both.

In terms of the broad choices available—obtaining health care through FFS or enrollment in a managed care plan—families with children in Medicaid generally had more choice than SCHIP families. In the 10 states we reviewed, Medicaid generally offered families a choice of receiving services on an FFS basis; selecting between a capitated managed care plan and FFS, including PCCM; or choosing from at least two capitated plans. Across the nine states with capitated managed care, enrollment of Medicaid beneficiaries in capitated plans ranged from 4 percent to 75 percent. In contrast, virtually all children enrolled in SCHIP in 8 of the 10 states were enrolled in capitated managed care plans; the remaining two states offered only FFS care. (See table 5.)

Medicaid beneficiaries' choices within a state depended on where they lived. The Medicaid programs in seven states—California, Colorado, Michigan, New York, North Carolina, Pennsylvania, and Utah—mandated that certain Medicaid beneficiaries enroll in capitated health plans, but the extent of mandatory enrollment within a state varied greatly. In certain areas of these states, enrollment in a capitated Medicaid plan was mandatory for most children: in 22 of 58 counties in California, in urban areas of Colorado, in 73 of 83 counties in Michigan, in 16 of 57 counties in New York and in parts of New York City, in one county in North Carolina, in the Pittsburgh and Philadelphia areas of Pennsylvania, and in 4 urban counties in Utah. While enrollment was mandatory in these locations, Medicaid beneficiaries could still choose among two or more capitated plans. For example, Medicaid beneficiaries could choose among 9 capitated plans in Wayne County, Michigan, and among 13 to 16 plans in New York City, depending on the area in which they lived.
SCHIP beneficiaries generally had less choice between managed care plans and FFS than Medicaid beneficiaries, and these choices also depended on where the beneficiaries lived. Four of the states we reviewed with capitated managed care plans in SCHIP—Colorado, Florida, New York, and Pennsylvania—had geographic regions in which the SCHIP program offered a single managed care plan and no FFS option. In addition, SCHIP children throughout Kansas were enrolled in the single available plan in their area and did not have an FFS option, while Medicaid children were enrolled in either a PCCM or a capitated plan. While SCHIP children did not always have a choice of FFS, this did not mean that choices were necessarily limited. For example, California SCHIP officials noted that in the five counties with the largest enrollment (over 60 percent of the SCHIP enrollment statewide), SCHIP beneficiaries have between 7 and 9 health plan choices. Similarly, in New York City, SCHIP beneficiaries have between 10 and 15 plan choices, depending on where they live.

The degree to which health plans and physicians participated in both Medicaid and SCHIP varied among the 10 states we reviewed. Several states, such as Colorado, Kansas, New York, and Utah, reported that generally the same health plans participated in both programs, but in Florida, Michigan, and Pennsylvania, there was limited overlap between the health plans participating in Medicaid and those participating in SCHIP. (See table 6.) This difference was especially pronounced in Michigan, where 80 percent of SCHIP beneficiaries were enrolled in a single capitated plan that did not participate in Medicaid and that contracted with over 95 percent of the physicians in the state. Michigan officials told us that in one quarter, 27 percent of children who reapplied for SCHIP were eligible for Medicaid; to the extent that these children were enrolled in the plan that did not participate in Medicaid, the transfer to Medicaid would require that they select a new health plan.

When plans do not participate in both programs, continuity-of-care problems can arise as beneficiaries shift between programs because of changes in family income or children's ages. For example, because Medicaid eligibility changes with a child's age in all 10 of the states we reviewed, a child may have to move from Medicaid to SCHIP at certain ages even when family income remains constant. (See app. III.) Losing eligibility in one program and becoming eligible for the other can therefore mean joining a new plan and possibly seeing a new physician. In addition, a family with more than one child could have children enrolled in each program, so having the same providers in both programs would make obtaining health care easier for the family as a whole. To facilitate continuity of care, a few states reported taking action to ensure that plans and physicians participated in both Medicaid and SCHIP. For example, in 1998, New York began requiring that new plans participate in both programs and that existing plans serve both Medicaid and SCHIP in any new service areas. Similarly, Colorado required managed care plans contracting with SCHIP to be willing to contract with Medicaid, and it has allowed only one exception to this requirement. Colorado state officials reported that they also intend to request that health plans submit their Medicaid and SCHIP physician networks for review so that the state can independently determine the degree of participation in both programs.
The remaining six states we reviewed with Medicaid and SCHIP capitated programs did not require health plans to participate in both programs. However, officials in one of the six states—Kansas—said that in the future they intend to require plans' participation in both Medicaid and SCHIP. Neither requiring health plan participation in both programs nor having FFS options can guarantee, however, that the two programs will have the same physicians, since physicians may choose not to participate in one or the other program, or plans may establish different physician networks for each program. Medicaid and SCHIP officials in the 10 states seldom were able to report whether physicians participated in both programs—and the extent of their participation. A few states, such as Michigan and New York, noted that their state insurance departments were responsible for reviewing network adequacy. In most cases, however, states did not have the data needed to compare physician participation in both Medicaid and SCHIP, particularly where a significant portion of care was provided by capitated plans. Colorado officials noted that provider data systems in Medicaid and SCHIP were not comparable and that comparisons also would be difficult because provider participation changes frequently within and between networks.

Payment rates—whether they are physician fees or capitation rates to health plans—can affect the degree to which physicians and health plans participate in Medicaid and SCHIP, and thereby affect beneficiaries' choices and access to care. The relative fees paid by different insurers—Medicare, Medicaid, SCHIP, and private health plans—can also affect providers' willingness to participate. Nationally, low Medicaid physician fees and low physician participation have been long-standing areas of concern. In a recent national survey, pediatricians cited low fees as one of the most important factors in their decision to limit participation in Medicaid. In three of the four states we visited—California, Colorado, and Michigan—the percentage of pediatricians accepting Medicaid patients was below the national average of 67 percent. Some plans and physicians have demonstrated their dissatisfaction with Medicaid's fees by taking legal action. In New York, for example, two provider groups recently initiated lawsuits that resulted in increases in Medicaid dental fees and in physician fees for office visits, which rose from $7 to $30. In both cases, these were the first Medicaid fee increases in more than 30 years.

Across the four states we visited, Medicaid fees were consistently lower than Medicare fees for the same preventive services for children, while SCHIP and Medicare fees had a less consistent relationship in the two states where comparable data were available. Medicaid fees ranged from 29 percent to 61 percent of what Medicare would pay for selected preventive medical services for children. SCHIP fees as a percentage of Medicare fees varied, with two large health plans in California paying 44 to 72 percent of what Medicare would pay and one large health plan in Michigan paying 103 to 124 percent of what Medicare would pay. (See table 7.) In comparing Medicaid and SCHIP fees for the same children's preventive medical services, we found that Medicaid fees in two states—California and Michigan—were consistently lower than what physicians were paid in SCHIP.
Medicaid fees were 46 percent to 58 percent of what one dominant health plan in Michigan paid SCHIP physicians and 83 percent of what a large plan in California paid SCHIP physicians. (See table 8.)

Just as physician fees can affect physician participation, capitation rates can affect plan participation. Capitation rates can be difficult to compare, however, because the per member per month (PMPM) rates do not always encompass the same benefits. In Michigan, Medicaid capitation rates were lower than SCHIP rates by $26 PMPM, even though the two programs contracted for essentially the same services. In California, differences in benefits and in the populations included in the rates complicated rate comparisons. Medicaid's capitation rate included both adults and children, while SCHIP's rate was limited to children. In the remaining two states, the benefits were not comparable between the two programs, which precluded any conclusions regarding the comparability of capitation rates. (See table 9.)

Although states have a significant amount of flexibility to design their Medicaid and SCHIP programs, differences in enrollment policies have a bearing on how easily children gain and retain access to health care. Differing application requirements and processing times can lead to delayed coverage—and in some cases, to no coverage—if families find the application process too difficult to complete. Well-coordinated programs, however, can minimize the effect of such differences and facilitate enrollment and continuity of care for children. Differences in provider participation and in relative payment rates also have implications for children's access to health care. Few states, however, could assess the degree to which the same physicians were available to both Medicaid and SCHIP children. Since physicians decide whether to participate in Medicaid and SCHIP partly on the basis of payment rates, lower Medicaid payments relative to other payers continue to be a source of concern, although some states have recently increased Medicaid provider payments. While comparing payment rates in a managed care environment is often complicated by differences in covered benefits, differential rates between the two programs can affect plans' and physicians' willingness to participate and, in turn, beneficiaries' access to care.

We provided the Secretary of Health and Human Services an opportunity to comment on a draft of this report. In its comments, HHS generally agreed with our concluding observations that differences in enrollment policies, provider participation, and relative payment rates in Medicaid and SCHIP can have implications for program enrollment as well as access to care. HHS expressed uncertainty, however, about the degree to which our concluding observations provide a national assessment of enrollment and payment policies. The report notes throughout that our findings on enrollment policies and provider participation were based on the experience of 10 states and that our comparisons of payment rates were limited to 4 states. Our intent was not to generalize nationwide, but to illustrate how selected states are addressing challenges that other states might also face in administering their Medicaid and SCHIP programs. HHS noted that the influence of relative reimbursement levels on physician and dentist participation in the Medicaid program is an important policy consideration.
It expressed concern, however, about comparing Medicare physician fees to Medicaid fees for selected pediatric preventive medical services because of differences in the populations eligible for these programs. We made this comparison for several reasons. First, while Medicare is a federal health insurance program primarily for the elderly and persons with disabilities, its fee schedule includes fees for pediatric medical services. Second, both public and private health care insurers often base their payments to physicians on the Medicare fee schedule. For example, in California, Medicare payment levels were used as a benchmark for revisions to the Medicaid fee schedule in August 2000. Finally, research on Medicaid payment frequently considers Medicare fee schedules as a point of comparison for Medicaid rates. While HHS also suggested that a comparative analysis of payment data from commercial plans would be helpful, such an analysis was beyond the scope of this review. Finally, HHS commented that findings from the American Academy of Pediatrics on physician participation and payments—noted in this report—might warrant further investigation by GAO. We agree that additional analysis of children's access to care and payments to physicians in both Medicaid and SCHIP is warranted, and we are continuing to address these issues in other work.

We also provided a copy of our draft report to Medicaid and SCHIP officials in the 10 states included in our analysis. We received comments from the Medicaid and SCHIP programs in California, Colorado, Kansas, Michigan, and Pennsylvania. We also received comments from the Medicaid program in Alabama and the SCHIP programs in North Carolina and New York. Several states, including Michigan and Pennsylvania, commented that differences in health plan participation in their Medicaid and SCHIP programs did not necessarily mean that the same physicians do not participate in both programs. We agree that physician participation can be similar even when health plans differ; however, states generally could not provide documentation of the extent of physician participation in both programs. California and Colorado also commented on the difficulty of making capitated payment comparisons between the two programs. We agree that it is difficult to compare Medicaid and SCHIP capitated rates, particularly when program benefits or populations differ. As a result, we noted benefit and population differences throughout the report and did not draw conclusions about comparative payment rates where such differences existed. HHS and the states also provided technical and clarifying comments, which we incorporated where appropriate. (HHS' comments are included in app. IV.)

As arranged with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrators of CMS and HRSA, and other interested parties. We will make copies available to others upon request. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or Carolyn Yocom at (202) 512-4931. Key contributors to this report are listed in app. V.
The objectives of this report were to analyze, for Medicaid and SCHIP in selected states, differences in (1) enrollment requirements, particularly application and eligibility determination practices, and (2) health plan and physician participation and payments to plans and physicians. With regard to both objectives, we were also asked to consider whether differences between Medicaid and SCHIP might have implications for children's access to care. To do this, we conducted telephone interviews with state, county, and private sector officials responsible for Medicaid and SCHIP administration in 10 states. We visited four states, and we analyzed data from states' programs as well as federal Medicaid and SCHIP program reports and documents. Because states' Medicaid programs varied considerably, data collected from states did not always represent the same time frames. We asked states to provide their latest available data, which ranged from 1999 to the summer of 2001. In addition, we reviewed published studies and reports on application and eligibility determination practices, plan and physician participation, and provider payment issues in Medicaid and SCHIP. We also relied on information from our previous work.

To analyze the extent of programmatic differences for the two reporting objectives, we selected 10 states that had SCHIP separate child health programs. These states were Alabama, California, Colorado, Florida, Kansas, Michigan, New York, North Carolina, Pennsylvania, and Utah. With one exception, these were the same states included in our previous Medicaid and SCHIP comparison report. In selecting these states, we considered attributes of SCHIP separate child health programs, such as administrative structure and the method of providing services (fee-for-service (FFS) or managed care), compared with the Medicaid program in each state. We also selected states whose SCHIP programs had been in operation since January 1999 and that represented a range of geographic locations. We made site visits to four of these states (California, Colorado, Michigan, and New York) to probe certain issues more deeply and obtain the multiple perspectives needed. We selected these four states primarily because of their geographic distribution, the varying sizes of their Medicaid and SCHIP enrollments, and their different program administration structures. For example, the Medicaid program in California operates at the county level, while SCHIP operates statewide; in contrast, Michigan operates both Medicaid and SCHIP out of the same state agency. During the four site visits, we interviewed representatives of the programs—including state, county, and private sector officials—as well as a wide range of groups, including state Medicaid and SCHIP directors and their staffs; managed care plan officials; local organizations responsible for assisting families with applications; contractors and other staff responsible for determining eligibility and for enrolling children in Medicaid and SCHIP; physician organizations, such as local chapter officials of the American Academy of Pediatrics; and child health advocacy organizations. We also obtained documentation and data from states on application and eligibility determination, plan participation, and plan and physician payments.

To compare application requirements and eligibility determination practices under Medicaid and SCHIP, we analyzed application requirements from the 10 states.
Our site visits to four states allowed us to obtain a more in-depth understanding of how Medicaid and SCHIP programs at the state level determined whether applicants were eligible; how the two programs referred ineligible applicants; and how both programs enrolled beneficiaries into managed care plans, where pertinent. In each state we visited, we obtained data and conducted interviews with state, plan, physician, and community groups on Medicaid and SCHIP procedures and requirements, time frames, and coordination efforts.

To obtain information about health plan arrangements and provider participation in the two programs, we conducted semistructured telephone interviews with Medicaid and SCHIP directors or their key staff in the 10 states. These interviews allowed us to capture variations between Medicaid and SCHIP both within and across states. In the four states we visited, we also obtained more extensive information about the degree of beneficiary choice of health plans and physicians in each program, in urban and rural areas and under Medicaid managed care programs. In addition, we obtained data on the number of plans, the degree of plan participation in each program, enrollment by plan, and provider overlap.

Finally, we collected and analyzed information and data on Medicaid and SCHIP payments to managed care plans and FFS providers in the four states we visited. In analyzing payments, we focused on making comparisons within a state between Medicaid and SCHIP (1) FFS payments to physicians for services to Medicaid and SCHIP beneficiaries and (2) capitation rates to plans. Plans are paid a fixed amount per member per month (PMPM), regardless of the services provided, while under FFS, physicians are paid a specific amount for each service. We performed our work from June 2000 through July 2001 in accordance with generally accepted government auditing standards.

Since many Medicaid beneficiaries are in an FFS arrangement, we compared Medicaid payments with SCHIP payments for the same preventive services for children. While SCHIP programs do not typically use FFS arrangements, we identified three dominant health plans, two in California and one in Michigan, that served significant numbers of SCHIP beneficiaries and that paid their providers on an FFS basis. We compared Medicaid payments in California and Michigan with the payments that each of these SCHIP plans made to their providers. We obtained fee schedules for pediatric medical services using selected codes from the most commonly used procedural coding system in states reporting Medicaid Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) services—the standard Physicians' Current Procedural Terminology, 4th edition (CPT-4). (See table 10.) These CPT-4 codes were the most commonly used procedural codes for reporting Medicaid's EPSDT services under capitated managed care programs and the second most commonly used codes for reporting these services under FFS.

Managed care plans often receive different capitation rates for each risk group or category of eligible populations. For example, plans may be paid separate rates for infants and teenagers, or for Supplemental Security Income (SSI) program beneficiaries in Medicaid, who are often more costly to serve because of their complex health needs. The distribution of such population groups and the benefits offered also can differ between Medicaid and SCHIP.
Because of this, comparing programs' capitation rates to health plans required analyzing any existing differences between Medicaid and SCHIP rates based on a program's enrollment by age, risk groups, and benefits; where possible, we made adjustments to address the differences we identified. In general, to compare capitation rates, we first excluded SSI children from Medicaid's enrollment figures and calculated or obtained weighted average capitation rates for non-SSI children to make the Medicaid rates more comparable to SCHIP. While this approach made the Medicaid and SCHIP populations more similar, the number of beneficiaries in each age group varied by program. To adjust for these differences, we used the SCHIP program's enrollment distribution by age in each state and applied weighted average Medicaid capitation rates, thus calculating a population-adjusted PMPM Medicaid rate that was more comparable to SCHIP. By age-adjusting the two populations, we arrived at more comparable price-to-price evaluations of Medicaid and SCHIP capitation rates.

For example, in Colorado, infants (aged 0 to 1) made up 8,236 (15 percent) of all children in Medicaid, while infants made up 633, or 3 percent, of all children in SCHIP. Capitation rates differ by age grouping, with infant rates higher than rates for older children under both programs. For example, the weighted average Medicaid rate for infants in Colorado was $300 PMPM, while the weighted average rate for ages 1 to 18 was $47 PMPM. Given these differences, the weighted average monthly capitation rate for a program enrolling many infants—such as Medicaid—reflects the higher costs of these beneficiaries. This rate is not comparable to the weighted average rate for a program with fewer infants. To adjust for these differences in populations, we used SCHIP's enrollment distribution by age in Colorado and applied Colorado's Medicaid rates to the SCHIP enrollment distribution in order to calculate a population-adjusted PMPM rate. This gave us comparable price-to-price evaluations of Medicaid and SCHIP capitation rates in Colorado of $54 PMPM in Medicaid and $70 PMPM in SCHIP. We also obtained or calculated price-to-price evaluations of Medicaid and SCHIP capitation rates for children in Michigan and New York. (Figure 2 shows the original capitation rates for Medicaid and SCHIP, as well as the population-adjusted rate for Medicaid.)

In California, the Medicaid program developed its capitation rates by eligibility grouping, not by age range, and so it could not provide rates for children by age. For our capitation rate comparison, we selected a "family" population grouping that best represented families with children because it included a Medicaid category of eligibility that is based on enrollment in the Temporary Assistance for Needy Families (TANF) program. Within this family rate category, capitation rates were the same, regardless of the age of the child or adult. As a result, creating comparable populations between Medicaid and SCHIP was not possible. The California capitation rates cited represent the weighted averages for Medicaid beneficiaries and for SCHIP beneficiaries in 12 counties that enrolled the majority of Medicaid- and SCHIP-eligible individuals in capitated care.
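The population adjustment described above can be reproduced with a short calculation. The sketch below uses the rounded Colorado figures cited in this appendix; the variable names are ours, and small rounding differences from the published $54 figure are expected.

    # Population-adjusted PMPM calculation using the rounded Colorado figures.
    medicaid_rates = {"infant": 300.0, "ages_1_to_18": 47.0}   # weighted averages

    # SCHIP's enrollment mix by age group (infants were about 3 percent)
    schip_age_mix = {"infant": 0.03, "ages_1_to_18": 0.97}

    # Apply Medicaid's rates to SCHIP's age mix to remove the effect of
    # Medicaid's larger share of high-cost infants.
    adjusted_medicaid_pmpm = sum(
        schip_age_mix[group] * medicaid_rates[group] for group in medicaid_rates
    )

    print(f"Population-adjusted Medicaid rate: ${adjusted_medicaid_pmpm:.2f} PMPM")
    # About $54.59 with these rounded inputs, versus the $54 PMPM reported,
    # compared with Colorado's SCHIP rate of $70 PMPM.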
States are allowed three options in designing SCHIP: expand their Medicaid program, develop a separate child health program that functions independently of the Medicaid program, or combine both approaches. (See table 11.) As of June 2001, 35 states have separate child health programs or combination programs that operate, at least in part, separately from Medicaid. Fifteen states and the District of Columbia have chosen to create Medicaid expansion programs under SCHIP, 16 states have separate child health programs, and 19 states have programs that combine Medicaid expansion and separate child health programs.

Because Medicaid and SCHIP income eligibility levels vary by age, children in the same family can qualify for different programs. (See table 12.) Using a family with two children, aged 2 and 7, and an income at 125 percent of the federal poverty level provides an example of how family eligibility can be split between Medicaid and SCHIP. In 21 of the 35 states with separate child health programs, the 2-year-old would be eligible for Medicaid, while the 7-year-old would be eligible for SCHIP. Assuming that the family's income remains at 125 percent of the poverty level, these children would be split between Medicaid and SCHIP for 4 years, until the 2-year-old turned 6 and thus qualified for SCHIP, not Medicaid. Six states have consistent eligibility levels for all ages: four states—Connecticut, Indiana, Maryland, and South Dakota—used SCHIP Medicaid expansions, while two states—Vermont and Washington—already had eligibility levels that were consistent for all children. The remaining eight states have consistent levels for all ages with the exception of infants, who are typically covered at a higher level in Medicaid.

Key contributors to this analysis were Joy L. Kraybill, JoAnn Martinez-Shriver, and Deborah A. Signer. In addition, Yorick F. Uzes contributed to the initial design and data collection, Behn Miller provided legal analysis, and Elizabeth T. Morrison assisted in writing the report.
In 2009, the Federal Reserve centralized coin management across the 12 Reserve Banks and established national inventory targets. Previously, each Reserve Bank office set and managed its own inventory levels, resulting in varying levels of inventory held relative to demand. Under the centralized approach, the Federal Reserve's Cash Product Office (CPO) manages distribution of the coin inventory, orders new coins, and acts on behalf of the Reserve Banks in working with stakeholders, such as depository institutions. From 2008 through 2012, the combined inventory for pennies, nickels, dimes, and quarters decreased 43 percent, due, in part, to the centralized program. (See fig. 1.)

In 2009, CPO also established national upper and lower inventory targets for pennies, nickels, dimes, and quarters to track and measure the coin inventory. CPO officials noted that these targets help meet their primary goal in managing the nation's coin inventory: ensuring a sufficient supply of all coin denominations to meet the public's demand. The upper national inventory target serves as a signal for CPO to reduce future coin orders from the U.S. Mint to avoid the risk of approaching coin-storage capacity limits, and the lower national inventory target serves as a signal to CPO to increase future coin orders to avoid shortages. We analyzed national inventory targets from 2009 to 2012 and found that in most cases these targets were met. In managing the coin inventory, CPO determines whether coins should be transferred from an area with more coins than needed to fulfill demand or whether additional coins should be ordered from the U.S. Mint. If there is an insufficient supply of coins to meet demand and transferring coins from another location would not be cost-effective, CPO orders new coins from the U.S. Mint based on its 2-month rolling forecast of expected demand. After submitting orders to the U.S. Mint, CPO may increase an order or defer shipments to later months based on updated information. In part to respond to these changes, each month the U.S. Mint produces a safety stock of coins.

Our analysis found that in 2012, Reserve Bank costs related to coin management were approximately $62 million. To monitor costs related to currency management, including coins as well as notes, CPO officials said they review these costs at the national level because individual Reserve Banks may vary in their accounting for operational costs related to coins and notes. In October 2013, we found that from 2008 through 2012 total annual Reserve Bank currency-management costs increased by 23 percent at the national level. While cost information for coins and notes is available separately, CPO does not separately monitor the Reserve Banks' coin management costs. Our analysis found that coin management costs, which include direct and support costs, increased by 69 percent from 2008 through 2012. More specifically, Reserve Bank direct costs for coin management, which include personnel and equipment, increased by 45 percent during this period, about $5 million across the 28 offices, and support costs increased by 80 percent, about $19.6 million across these offices. CPO officials attributed the increase in coin management costs mainly to support costs, which include utilities, facilities, and information technology as well as other local and national support services, such as CPO's services.
Although Reserve Bank coin management costs have risen since 2008, we found in October 2013 that CPO had not taken steps to systematically assess the factors influencing direct and support costs related to coin management or to assess whether opportunities exist to identify elements of its coin inventory management that could lead to cost savings or greater efficiencies across the Reserve Banks. We also found that the rate of increase in coin management costs differed across Reserve Banks. Specifically, using data provided by CPO on individual Reserve Banks' costs, from 2008 through 2012, coin management costs increased for all Reserve Banks, with the increases ranging from a low of 36 percent to a high of 116 percent. The Federal Reserve's 2012–2015 strategic plan includes an objective to use financial resources efficiently and effectively. In addition, according to a leading professional association that provides guidance on internal controls, as part of the internal control process, management should ensure that operations, such as managing an inventory, are efficient and cost-effective; this process includes monitoring costs and using this information to make operational adjustments. Without taking steps to identify and share cost-effective coin management practices across Reserve Banks, the Federal Reserve may be missing opportunities to support more efficient and effective use of Reserve Bank resources.

To address this issue, in our October 2013 report we recommended that the Federal Reserve develop a process to assess the factors that have influenced increasing coin-operations costs and the large differences in costs across Reserve Banks and to use this information to identify practices that could lead to cost savings. We concluded that taking these actions may help the Federal Reserve identify ways to improve the cost-effectiveness of its coin management, potentially increasing the revenues that are available for the Federal Reserve System to transfer to the General Fund. The Federal Reserve generally agreed with the recommendations in our report, including the above recommendation as well as the recommendations discussed below, and has developed a plan for addressing them. In response to the recommendations, the Federal Reserve also noted that it would define a new metric that measures the productivity of Reserve Bank coin operations and that will enable it to monitor coin costs and identify cost variations across Reserve Banks. We will continue to monitor the Federal Reserve's progress in addressing our recommendations.

In October 2013, we found that the Federal Reserve, in managing the circulating-coin inventory, follows two of five key inventory management practices we identified and partially follows three. Establishing, documenting, and following these key practices contributes to a more effective inventory-management system. Specifically, the Federal Reserve follows key practices for collaboration and risk management and partially follows key practices for performance metrics, forecasting demand, and system optimization. For example, it follows the key practice of collaboration because it has established multiple mechanisms for sharing information related to coin inventory management with partner entities such as depository institutions. In addition, the Federal Reserve follows the risk management key practice because it has identified sources of potential disruptions, assessed the potential impact of risk, and developed plans to mitigate risk at multiple levels of its operations.
In the key practice area of performance metrics, we found that the Federal Reserve has developed some metrics in the form of upper and lower national coin-inventory targets. However, it has not developed goals or metrics to measure other aspects of its coin supply-chain management—such as costs. Characteristics of this key practice include agencies' identifying goals, establishing performance metrics, and measuring progress toward those goals. We concluded that establishing goals and metrics, such as those related to coin management costs, could aid the Federal Reserve in using information and resources to identify additional efficiencies. To address this issue, we recommended that CPO establish, document, and annually report to the Board performance goals and metrics for managing the circulating coin inventory and measure performance toward those goals and metrics. In its response, as noted previously, the Federal Reserve said that it planned to define a new metric that measures the productivity of the Reserve Banks' coin operations and use this metric to monitor coin costs. In the key practice area of forecasting demand, we found that the Federal Reserve forecasts future coin demand and uses this information to make decisions but does not systematically track the accuracy of its monthly forecasts against the final coin orders. Our analysis of initial monthly CPO coin orders and final orders (actual U.S. Mint coin shipments) from 2009 through 2012 indicated that initial orders were consistently less than the final orders. A leading operations management industry association that offers professional certifications recommends that forecasting results be continuously monitored, that a mechanism be in place to revise forecasting models as needed, and that a forecast exhibiting a consistent bias be adjusted to match actual demand. We concluded that taking additional steps to assess forecast accuracy could help CPO identify the factors influencing accuracy and adjust its forecasts accordingly. To address this issue, we recommended that CPO establish and implement a process to assess the accuracy of forecasts for new coin orders and revise the forecasts as needed. In its response, the Federal Reserve reported that in addition to implementing a more formal program for assessing new coin order forecasts, CPO has begun working to refine the accuracy of its coin forecasts. In the key practice area of system optimization, we found that CPO does not fully use available information and resources to optimize system efficiencies within the supply chain. Specifically, it does not use the range of information available to establish and track performance metrics to measure progress. Better information related to forecast accuracy and costs—such as the types of information we recommended that the Federal Reserve develop—could aid CPO in using its information and resources to identify inefficiencies and further support the interrelated key practice of system optimization. For example, the U.S. Mint's monthly production of new coins could be more efficient with improvements to the accuracy of initial new-coin orders. We concluded that optimizing the U.S. Mint's and individual Reserve Banks' operations, in part by improving this linkage, could contribute to reducing U.S. Mint or Federal Reserve costs related to circulating coins.
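To make the bias-adjustment logic described above concrete, the following is a minimal sketch in Python. The monthly order figures are hypothetical placeholders, not CPO or U.S. Mint data, and the simple mean-error adjustment shown is only one of several ways a forecaster might correct a consistent bias.

# Illustrative bias check on coin-order forecasts. The monthly figures
# below are hypothetical placeholders (millions of coins), NOT actual
# CPO orders or U.S. Mint shipment data.

initial_orders = [310, 295, 330, 340, 315, 325]   # hypothetical initial monthly orders
final_shipments = [335, 320, 350, 370, 345, 355]  # hypothetical final orders (Mint shipments)

# Forecast error each month: positive values mean the initial order
# understated the final order, the one-sided pattern GAO observed in
# the 2009-2012 data.
errors = [final - initial for initial, final in zip(initial_orders, final_shipments)]
bias = sum(errors) / len(errors)

# A consistently one-sided error series signals a correctable bias;
# the industry guidance cited above calls for adjusting the forecast
# to match actual demand.
if all(e > 0 for e in errors):
    print(f"Initial orders ran low by an average of {bias:.1f} million coins per month.")
    next_unadjusted_forecast = 320  # hypothetical next-month forecast
    print(f"Bias-adjusted forecast: {next_unadjusted_forecast + bias:.1f} million coins.")

In practice, CPO's forecasting models are more elaborate; the point of the sketch is simply that continually tracking initial orders against final orders reveals whether a bias exists and how large an adjustment it warrants.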
To collect data and information on potential changes in the demand for currency, the Federal Reserve has conducted studies and outreach with groups such as depository institutions and merchants, and found a general consensus that the use of currency may decline slightly in the near term. According to the Federal Reserve, this expectation is due, in part, to an increase in alternative payment options (e.g., additional forms of electronic payments), but interrelated factors—such as technological change and economic conditions—make it difficult to predict long-term (i.e., 5 to 10 years) currency demand. According to many agency officials, stakeholders, and foreign government officials we spoke to, while there may be changes in the use of various types of payments in the coming years, the effect on currency demand is likely to be a gradual decline. Federal Reserve officials expect that their current procedures and approach to managing the coin and note inventory—including their forecasting and monitoring of the coin inventory targets discussed previously—will allow the agency to accommodate gradual shifts in demand. For example, to respond to increasing or decreasing demand for coins, CPO can decrease or increase coin orders from the U.S. Mint. According to the officials we met with, CPO is continually working to identify ways to streamline its processes to be more flexible and adaptable to changes, and CPO and the Reserve Banks have established plans and procedures, such as risk management plans, to address the effects associated with short-term, unexpected changes in coin and note demand. Experts we interviewed agreed that well-managed currency systems are capable of handling major trend-based changes. According to inventory management experts we consulted, dependable forecasts—ones that take both trends and cyclical demand changes into account—are key to effectively managing a supply chain. Therefore, we concluded in our October 2013 report that combining forecasts with continual tracking of demand and inventory levels should allow the Federal Reserve to adapt to any major trend-based changes in coin and note demand. As discussed earlier, this makes accurate forecasting by the Federal Reserve even more important. While Federal Reserve officials we met with indicated their current processes should enable them to adapt to gradual changes in coin and note demand, a significant and unexpected change could affect the management of the coin and note inventories. CPO officials said that if a large decline in coin usage occurred, they would adapt their management of the inventory in response. For example, if demand for coins were to decrease suddenly, leaving too many coins in circulation, the Federal Reserve would first stop ordering new coins from the U.S. Mint and would then focus on storing the excess coin inventory. Coin attrition would reduce this inventory over time, and CPO officials anticipate that they would have sufficient storage capacity available to accommodate the excess coins. CPO officials told us that inventory levels would need to be well in excess of the existing targets before they would have an effect on storage capacity and related costs. While coin terminal operators did not expect a decrease in coin demand significant enough to exceed their storage capacity, additional storage could be needed to store the coins returned by depository institutions to the Reserve Banks if there were a substantial decrease in public demand for coins.
In 2010, CPO began to develop a long-term strategic framework to consider potential changes to currency demand over the next 5 to 10 years and how these changes could affect CPO's operations. According to Federal Reserve officials, this framework is an internally focused effort to help share information, refine internal operations, and monitor trends. One component of this effort is examining internal operations for distributing coins and processing notes and seeking to increase efficiency in these areas to better position the agency to adapt to future changes in demand. Conducting research is another component of this framework. For example, as part of a broader effort to look at trends in various payment types, one Reserve Bank is examining the detailed spending habits of a selection of consumers, who were asked to document their transactions and payment decisions over a period of time in a shopping "diary." Because determining how much of the currency in circulation is being used for transactions is difficult, this type of study may help officials better understand currency use in the United States. Australian, Austrian, and Canadian officials we interviewed for our 2013 report were also exploring the potential impact of alternative payment technologies and collecting new data to inform research efforts. For example, Austrian and Canadian officials have conducted diary studies to better understand individuals' use of various payment options. Collecting detailed consumer-payment information through these types of studies may help officials better understand consumers' payment and currency management habits. In conclusion, the Federal Reserve has taken steps to standardize its management of the circulating-coin inventory from a national perspective, steps that have led to improvements such as reductions in national coin inventories. The actions that it has planned to address our recommendations could potentially contribute to reducing federal costs related to circulating coins, a reduction that could increase the amount of money returned to the General Fund. While the Federal Reserve has a framework that it believes can adapt to expected gradual changes in coin demand, a significant and unexpected decrease in demand could lead to increased storage needs. Chairman Campbell, Ranking Member Clay, and members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions at this time. For further information on this testimony, please contact Lorelei St. James, at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to the work this testimony is based on include Teresa Spisak and John Shumann (Assistant Directors); Maria Wallace; Amy Abramowitz; Lawrance Evans, Jr.; David Hooper; Delwen Jones; Sara Ann Moessbauer; Colleen Moffatt Kimer; and Josh Ormond. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Efficiently managing the nation's inventory of circulating coins helps to ensure that the coin supply meets the public's demand while avoiding unnecessary production and storage costs. This testimony is based on GAO's October 2013 report on the Federal Reserve's management of the circulating-coin inventory. It addresses (1) how the Federal Reserve manages the circulating coin inventory and the related costs, (2) the extent to which the Federal Reserve follows key practices in managing the circulating-coin inventory, and (3) actions taken to respond to potential changes in demand for currency (coins and notes). In 2009, the Federal Reserve centralized coin management across the 12 Reserve Banks and established national inventory targets to track and measure the coin inventory. However, based on GAO's analysis of Federal Reserve data, from 2008 to 2012, total annual Reserve Bank coin-management costs increased by 69 percent, and more specifically, costs at individual Reserve Banks increased at rates ranging from 36 percent to 116 percent. GAO found in October 2013 that the Federal Reserve did not monitor coin management costs by each Reserve Bank—instead focusing on combined national coin and note costs—thus missing potential opportunities to improve the cost-effectiveness of coin-related operations. Furthermore, the agency had not taken steps to systematically assess factors influencing coin management costs and identify practices that could lead to cost savings. In managing the circulating-coin inventory, the Federal Reserve followed two of five key inventory management practices GAO identified and partially followed three. For example, the agency followed the key practice of collaboration because it has established multiple mechanisms for sharing information related to coin inventory management with partner entities such as depository institutions. The Federal Reserve partially followed the key practice of performance metrics, which involves identifying goals, establishing performance metrics, and measuring progress toward goals. While the Federal Reserve had developed some performance metrics in the form of upper and lower national coin-inventory targets, it had not developed goals or metrics to measure other aspects of its coin supply-chain management, such as costs. Establishing goals and metrics, such as those related to coin management costs, could aid the Federal Reserve in using information and resources to identify additional efficiencies. To collect data and information on potential changes in the demand for currency (coins and notes), the Federal Reserve has conducted studies and outreach with groups such as depository institutions and merchants, and found a general consensus that the use of currency may decline slightly in the near term. This expectation is due, in part, to an increase in alternative payment options (e.g., additional forms of electronic payments), but interrelated factors—such as technological change and economic conditions—make it difficult to predict long-term currency demand. In 2010, the Federal Reserve began to develop a long-term strategic framework to consider potential changes to currency demand over the next 5 to 10 years and how these changes could affect operations. This effort includes, among other things, examining internal operations for distributing coins and processing notes as well as conducting research into the use of payment types to understand currency use in the United States to better position the agency to adapt to future changes in demand.
GAO's October 2013 report included several recommendations to the Federal Reserve to ensure the efficient management of the coin inventory and potentially to reduce costs. These included recommendations (1) to develop a process to assess factors influencing coin operations costs and identify practices that could lead to cost savings and (2) to establish additional performance goals and metrics relevant to coin inventory management. The Federal Reserve generally agreed with the report's recommendations and, in response, has developed a plan for addressing them.
Through rules known as capital requirements, financial regulators set minimum levels for capital that banks and bank holding companies, securities broker-dealers, futures commission merchants (FCM), and life insurance companies hold as a cushion against unexpected losses that can result from risks faced by these firms in their business activities. Regulatory capital requirements are one tool financial regulators use to help protect customers from losses and ensure the stability of financial markets. In addition to serving these general regulatory purposes, capital requirements can affect the way the financial system functions by influencing how market participants allocate capital resources and conduct business. Capital requirements can also have competitive effects within the financial services industry, to the extent that capital requirements differ among competing financial institutions and firms. Today, regulators in all sectors have either adopted or are considering changes in capital requirements that, compared to earlier approaches, respond more quickly and precisely to changes in a firm's actual risk profile. In addition, some regulators are considering more fundamental changes that would simplify capital regulation. Changes in capital regulation are being undertaken or considered in a highly dynamic financial services industry that is itself undergoing change in response to competitive pressures as well as advances in telecommunications, computer technology, and financial analysis—all of which have led to new and innovative financial products and services. This report is provided to help Members of Congress and others understand current regulatory capital requirements, developments in those requirements, issues these developments raise, and financial firms' approaches to risk measurement. Banks, securities broker-dealers, FCMs, and life insurance companies increase the efficiency of the economy by facilitating the flow of savings to investment and providing other financial services. As discussed in chapter 3 of this report, these financial firms use capital to manage the trade-off between risks and returns in order to increase the firms' efficiency and maximize the returns for stockholders. The capital that a financial firm holds serves a number of firm-specific purposes—chiefly to provide long-term funding of operations and to protect the firm by serving as a cushion to absorb unexpected losses. For public purposes, regulators of banks, securities broker-dealers, FCMs, and life insurance companies promulgate capital regulations that set mandatory minimum levels for capital that the firms are to hold as a cushion against unexpected losses. The specific public purposes differ somewhat among the regulators. Generally speaking, however, the financial regulators seek to protect customers of the financial firms from losses and help ensure the stability of financial markets and systems that they regulate. Chapter 2 of this report discusses the capital standards set for banks, securities broker-dealers, FCMs, and life insurance companies and the more specific purposes of each of the financial regulators in setting regulatory capital requirements. Traditionally, banks, securities broker-dealers and FCMs, and life insurance companies were engaged in mostly different businesses and faced different risks. After the stock market crash of 1929, Congress created a regulatory and industry structure that separated banks, investment banks, and other financial institutions.
Banks were restricted to taking deposits, making loans, and other activities closely related to banking. Broker-dealers (the SEC-regulated portion of investment banks) were restricted to brokering securities, underwriting new security issues, and trading securities. Insurance companies continued to be regulated by the states, and their activities were limited to insurance sales and underwriting. As discussed later in this chapter, significant changes have occurred in the financial services industry within the past two decades. As a result, firms that were in traditionally separate sectors are more directly competing with one another; providing similar products; and, hence, facing similar risks in their activities. Capital is most generally defined as the long-term source of funding for a firm that earns a return for investors (debt and equity) and cushions the firm against losses. Such funding is contributed largely by (1) equity stockholders in anticipation of profits and (2) the firm's own returns in the form of retained earnings. In some instances, long-term debt is also considered capital. Losses cushioned by capital arise from risks that firms face in their business activities. In our work, we found no definitive list of risk categories applicable to all firms covered in our review. For example, the Federal Reserve uses a list of six risk categories, and OCC delineates nine. A group of leading individuals from firms and regulators developed what they termed Generally Accepted Risk Principles (GARP), which lists six risk categories. Most of the financial firms we spoke with told us they use four categories of risk; some said they use as few as three. The listings of risks we reviewed covered much the same causes of possible loss, but they varied in how risks were grouped and in the nomenclature used. This report generally focuses on the following six categories, because regulators and the representatives of financial firms we interviewed identified them as the risks of greatest concern. Credit risk is the potential for financial loss resulting from the failure of a borrower or counterparty to perform on an obligation. Credit risk may arise from either an inability or an unwillingness to perform as required by a loan, a bond, an interest rate swap, or any other financial contract. All financial firms face credit risk. For example, banks face credit risks in loans and bonds, insurance companies face credit risks in corporate and municipal bonds, and securities broker-dealers and FCMs face credit risks if other firms that they deal with do not meet their contractual obligations. Market risk is the potential for financial losses due to an increase or decrease in the value or price of an asset resulting from broad movements in prices, such as interest rates, commodity prices, stock prices, or the relative value of currencies (foreign exchange). Because all financial firms hold assets, all financial firms face market risks. However, they may not all face all types of market risks. Liquidity risk is the potential for financial losses due to the inability of a firm to meet its obligations on time because of an inability to liquidate assets or obtain adequate funding, such as might occur if most depositors or other creditors were to withdraw their funds from a firm.
This is referred to as "funding liquidity risk." Liquidity risk also refers to the potential that a firm cannot easily reverse negative financial positions or offset specific exposures without significantly lowering market prices because of inadequate market depth or market disruptions ("market liquidity risk"). Financial firms face liquidity risk inasmuch as the loss of revenues due to interruptions of cash inflows affects a firm's ability to cover its liabilities as they come due. Operational risk is the potential for unexpected financial losses due to inadequate information systems, operational problems, breaches in internal controls, or fraud. Operational risk is associated with problems of accurately processing or settling transactions and taking or making deliveries on trades in exchange for cash, and with breakdowns in controls and risk limits. Individual operating problems are considered small-probability but potentially high-cost events for well-run firms. Operational risk includes many risks that are not easily quantified but whose control is crucial to the firm's successful operation. Operational risk can be addressed through prudent management oversight of firm operations, including the establishment of internal controls. All firms face some type of operational risk. Business/event risk is the potential for financial losses due to events not covered above, such as credit rating downgrades (which affect a firm's access to funding); breaches of law or regulation (which may result in heavy penalties or other costs); or factors beyond the control of the firm, such as major shocks in the firm's markets. Shifts in legal status and changes in regulations are also included in business/event risk. All types of financial firms face business/event risk. Insurance/actuarial risk is the risk of financial losses that an insurance underwriter takes on in exchange for premiums, such as the risk of premature death. Although this risk is most commonly associated with insurance companies, it can exist in other firms. For example, banks are authorized to underwrite credit life insurance, which is subject to actuarial risk. These risks can be discussed on a risk-by-risk basis, but the potential effect on a firm's overall financial condition or risk profile cannot be obtained by summing the risks in each category, because risks interact in various ways. That is, the net potential loss from a combination of risks could be greater or less than the sum of potential losses from each individual risk, depending upon the economic relationship among the risks involved. The economic relationship among a firm's risks depends on the correlation among prices of assets—that is, how the prices move in relation to one another—and the business strategies and holdings of the firm. Because the traditional activities of banks, securities broker-dealers, FCMs, and life insurers differed, each of these types of financial firms once tended to have a correspondingly distinct risk profile. The predominant risk for banks was credit risk; for securities broker-dealers and FCMs, it was market risk; and for life insurance companies, it was insurance/actuarial risk. However, for a variety of reasons discussed later in this chapter, the activities and risks of large, diversified financial firms in the highly competitive financial services industry are becoming increasingly similar. The scope of authority and oversight practices of financial regulatory agencies vary in a number of ways.
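Before turning to regulatory oversight, the correlation point above can be illustrated with a short numerical sketch in Python. The loss volatilities are hypothetical, not drawn from any firm in this review, and the standard two-exposure volatility formula used here captures only the diversification side of the interaction (combined risk at or below the simple sum); the super-additive interactions the report alludes to would require nonlinear effects beyond this simple model.

import math

# Hypothetical potential-loss volatilities for two risk exposures
# (illustrative numbers only; not drawn from any firm in this review).
sigma_credit = 10.0  # stand-in for a credit-risk loss volatility
sigma_market = 8.0   # stand-in for a market-risk loss volatility

def combined_volatility(s1: float, s2: float, rho: float) -> float:
    """Two-exposure volatility: sqrt(s1^2 + s2^2 + 2*rho*s1*s2),
    where rho is the correlation between the two loss sources."""
    return math.sqrt(s1 ** 2 + s2 ** 2 + 2 * rho * s1 * s2)

simple_sum = sigma_credit + sigma_market  # 18.0
for rho in (1.0, 0.5, 0.0, -0.5):
    combined = combined_volatility(sigma_credit, sigma_market, rho)
    print(f"correlation {rho:+.1f}: combined {combined:5.1f} vs. simple sum {simple_sum:.1f}")

# Only at a correlation of +1.0 does the combined figure equal the
# simple sum; at lower correlations it is smaller, which is why adding
# up risk categories can misstate a firm's overall risk profile.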
The activities of banks, bank holding companies, securities broker-dealers, FCMs, and life insurance companies are regulated and overseen by a number of different types of agencies and organizations. Bank holding companies are regulated by the Board of Governors of the Federal Reserve System (Federal Reserve Board), and banks are regulated on an individual institution basis by various federal and state agencies. Securities broker-dealers and FCMs are regulated by SEC and CFTC, respectively. State agencies and self-regulatory organizations (SRO) are also involved in supervising broker-dealers and FCMs. Life insurance companies are regulated and overseen by state regulatory agencies. No formal/statutory holding company oversight currently exists at the federal level for securities firms, futures firms, or insurance companies in the United States. Many of the largest financial legal entities are part of a holding company structure that generally has affiliates conducting business activities in the formerly more separate banking, life insurance, securities trading, and futures trading sectors. In this report, we often refer to these holding company structures as large, diversified firms. The dominant form of banking structure in the United States is the holding company. A number of the larger bank holding companies have established nonbank subsidiaries that engage in securities underwriting and brokerage services, insurance sales, and futures trading, as well as other nonbanking activities permitted because they are deemed to be closely related to the business of banking and to produce a public benefit. Figure 1.1 is a simplified illustration of a hypothetical holding company with wholly owned banking and nonbanking subsidiaries and the regulators that oversee the various entities. Many large U.S. securities broker-dealers, life insurers, and FCMs have also expanded their range of activities by establishing holding companies at the top of their corporate structures. These other types of financial firms are not limited to creating or acquiring affiliates that engage in activities related to their own. Banks are allowed to affiliate only with companies engaging in activities closely related to banking and must demonstrate some public benefit in creating or acquiring an affiliate, but other types of financial firms have no such limitations. Figure 1.2 shows a simplified structure of a hypothetical nonbank financial holding company with affiliates engaged in banking activities (through a thrift institution), securities and futures trading, and life insurance sales, among many other types of activities. As summarized in table 1.1, the regulatory and oversight authorities of financial regulatory agencies differ. The Bank Holding Company Act of 1956 authorized the Federal Reserve Board to regulate bank holding companies on a consolidated basis. This gives the Federal Reserve Board regulatory and examination authority over all activities of the bank holding company. Affiliates that are banks are supervised by one or more of the federal banking agencies listed in table 1.1. Among other things, this means that capital standards apply at both the holding company and bank levels. In addition, FDIC insures bank depositors and has authority to terminate deposit insurance for any FDIC-insured institution.
In contrast to the regulatory authority of the Federal Reserve Board, SEC and CFTC are authorized to regulate only those entities that themselves engage in activities involving securities and futures, respectively, and not the affiliates of those entities. Congress has not passed legislation authorizing SEC or CFTC to supervise the holding companies of securities broker-dealers or FCMs, respectively, as it has for banks. However, SEC and CFTC risk assessment rules promulgated pursuant to the Market Reform Act of 1990 and the Futures Trading Practices Act of 1992, respectively, enable those agencies to collect from the regulated entity information about the activities and financial condition of its affiliates and parent firms to assess the risks they pose to the regulated entity's financial and operational condition, including net capital, liquidity, and the ability to finance operations. These rules do not provide either agency with the legal or regulatory authority to examine or set regulatory capital requirements for the parent or affiliates of the SEC-registered broker-dealer or the CFTC-registered FCM, although the information collected does enable both agencies to monitor the risks those affiliates pose. State insurance departments are authorized to regulate insurance activities and those firms that sell insurance products. They are not authorized to regulate or examine parents or affiliates of the regulated entities. Through capital standards and other regulations, regulators of banks, securities broker-dealers, FCMs, and life insurers seek to help ensure public confidence in financial institutions and markets by protecting customers' funds and limiting losses to the various deposit and guarantee funds that further protect those funds. As for securities broker-dealers and FCMs, regulators seek to ensure that registered entities will have a pool of liquid assets available on a daily basis to meet their obligations to customers and other market participants. Capital regulation—requirements that firms hold minimum amounts of capital—is one of many tools that financial regulators use to help ensure stability and public confidence in the financial system and markets. It is supported by supervision—the monitoring, inspecting, and examining of regulated entities—and enforcement. In some cases, it is also supported by segregation of customer funds or by insurance protection of those funds. The oversight activities of financial regulators are similar in some respects and different in others. Each regulator is to promulgate rules (including regulatory capital requirements), monitor firms' financial condition, perform examinations, and take appropriate actions to enforce relevant regulations and statutes. The oversight activities of SEC and CFTC differ most significantly from those of bank regulators and state insurance regulators because of the differing purposes of the regulation. SEC and CFTC, with the assistance of SROs, protect investors and ensure the integrity of the securities and futures markets; bank regulators and state insurance regulators ensure the safety and soundness of the entities they regulate. Supervision of regulated entities in the banking, securities, futures, and life insurance sectors includes off-site monitoring of financial reports and on-site examination visits. In banking, supervisors are to track the financial condition of their banks on a continuing basis between on-site examinations.
A principal off-site technique banking supervisors use for monitoring the activities and financial condition of their banks is the review of detailed financial statements (Call Reports) that the banks submit quarterly. In addition, the banking regulators use computerized monitoring systems that draw on Call Report data to compute, for example, financial ratios, growth trends, and peer group comparisons. Banking supervisors also meet with bank senior management from time to time to discuss the current condition of the bank and plans the bank has for the future. Monitoring is a complement to on-site examinations, which lie at the heart of the supervisory process. The purpose of bank on-site examinations is for examiners to evaluate the bank's overall risk exposure, with particular emphasis on what is known as its CAMELS rating—the adequacy of its capital, its asset quality, the quality of its management and internal control procedures, the strength of its earnings, the adequacy of its liquidity, and its sensitivity to market risk. Banks are usually examined at least once during each 12-month period and more frequently if they have serious problems. In addition, well-capitalized banks with total assets of less than $250 million can be examined on an 18-month cycle. In contrast to regulation of banks, regulation of the securities and futures markets is a combination of direct regulation and oversight by federal agencies and indirect regulation and oversight by SROs (e.g., the New York Stock Exchange, the National Association of Securities Dealers). Securities broker-dealers and FCMs are required to become members of an SRO and, as SRO members, must comply with SRO rules and regulations. SRO rules and regulations are promulgated under SEC or CFTC standards and requirements. Securities SRO rules and regulations are often more stringent than SEC rules and require SEC's approval. SROs must register with SEC or CFTC and are subject to SEC or CFTC oversight. SROs establish rules to govern member conduct and trading, set qualifications for certain market participants, monitor daily trading activity, examine their members' financial health and compliance with rules, and investigate alleged violations of those rules and securities and futures laws. SEC oversees the regulatory and supervisory activities of the securities industry's SROs. CFTC oversees the compliance activities of the futures industry's SROs, which include the U.S. commodity exchanges and the National Futures Association. Both SEC and CFTC also develop, implement, interpret, and enforce statutes and regulations to protect customer funds, prevent trading and sales practice abuses, and ensure the financial integrity of firms holding customer funds. Additionally, SEC and CFTC conduct direct audits of clearing organizations and firms handling customer money to ensure compliance with the capital and segregation rules. In contrast to banking, securities, and futures regulation, regulation of the insurance industry is primarily a state, not federal, responsibility. In general, state legislatures set the rules under which insurance companies are to operate, including capital standards, and state insurance regulators are to monitor the health and solvency of the regulated insurance companies.
To help coordinate their activities, state insurance regulators have established a central structure—the National Association of Insurance Commissioners (NAIC), an organization whose members are the heads of the insurance departments of the 50 states, the District of Columbia, and 4 U.S. territories and possessions. NAIC's basic purpose is to encourage consistency and cooperation among the various states and territories as they individually regulate the insurance industry. To that end, NAIC promulgates model insurance laws and regulations for state consideration and provides a framework for multistate examinations of insurance companies. State regulators use a number of basic methods to assess the financial strength of insurance companies, including reviewing and analyzing annual financial statements, conducting periodic on-site financial examinations, and monitoring key financial ratios. Supervision of life insurers is the responsibility of insurance departments in each state, with the primary responsibility residing with the "domiciliary" regulator, that is, the regulator in the state where the company is domiciled. The domiciliary regulator is responsible for conducting periodic on-site examinations and for reviewing the required annual and quarterly financial reports. Examiners monitor the financial health of the insurer, along with compliance with rules and regulations, and look for evidence of any unsafe business practices. Regulators in states where the company is licensed and operating, other than the domiciliary state, may participate in on-site examinations with the domiciliary state if they choose. These examinations are called zone examinations. In most states, the typical interval between on-site examinations is 3 to 5 years unless regulators have reason to believe problems exist that could affect the company's viability. Financial regulators may take informal supervisory actions, formal enforcement actions, or both to ensure that regulated entities undertake corrective steps for identified problems. In banking, such informal actions may include a request that a bank adopt a board resolution or agree to the provisions of a memorandum of understanding to address the problems. If necessary, financial regulators may take formal enforcement actions to compel the management and directors of troubled entities to address problems. Formal enforcement actions in banking include written agreements, cease and desist orders, prompt corrective action directives, termination of deposit insurance, revocation of a bank charter, and closing of the bank. Other actions include assessing fines, such as civil money penalties, and removing an officer or director from office and permanently barring him or her from the banking industry. SEC and CFTC have the authority to take supervisory and enforcement actions against the entities they regulate. Their enforcement tools include court injunctions; temporary restraining orders; and various administrative proceedings and sanctions, such as assessment of civil monetary penalties, disgorgement orders, censure, suspension and revocation of registration, and cease and desist orders. Additionally, SEC staff provide informal regulation of broker-dealers through no-action letters. In the no-action process, a broker-dealer requests interpretive relief from SEC staff regarding certain transactions or activities.
In a typical no-action letter, the staff states that it will not recommend that SEC take enforcement action if the requesting party executes transactions or engages in activities in the limited context stated by the staff. In SEC's view, limitations in no-action letters related to risk-management issues balance regulatory flexibility with the need to avoid undue risk. The letters are made available to the public and informally address regulatory concerns that by necessity are not detailed in securities statutes. As with other financial regulators, insurance regulators have an array of informal and formal actions that can be employed to correct problems identified through the supervisory process. These actions often begin with informal discussions of regulatory concerns with company officials. If problems are not resolved promptly, regulators have a number of more formal tools available, including administrative actions and court orders and injunctions, culminating with the power to take regulatory control of a company, remove its officers, and either sell or liquidate it. Many of the authorities held by state insurance regulators are enhanced when the Risk-Based Capital for Insurers Model Act has been adopted in a particular state. When adopted, this act gives the state's chief insurance regulator the explicit authority to take regulatory action based on an insurer's risk-based capital level. Since the late 1970s, significant changes have been occurring in the financial services industry due to a number of market shocks, combined with advances in financial theory and information technology. The interaction of these factors has led to significant expansion of such financial products as derivatives and asset-backed securities, improved methods to measure and manage risks, increased competition in financial services, and mergers of financial firms within and across financial sectors. In addition, these factors have encouraged some firms to offer risk management services to other financial and nonfinancial firms. This risk management has often been based on the use of derivatives and asset-backed securities to repackage risks and returns. The creation and growth of derivatives, huge increases in trading activities, and the development of new secondary markets, along with the creation of asset-backed securities, have fundamentally changed the financial landscape. Derivatives and asset-backed securities have permitted financial market participants to better manage market risk by transferring the risk from entities less willing to bear it to those more willing to do so. Derivatives have stimulated trading generally because they give financial market participants a lower-cost way to hedge investments or to take speculative positions. In addition, derivatives products markets have grown rapidly. For example, the International Swaps and Derivatives Association estimates that as of December 31, 1996, the combined notional amount of globally outstanding interest rate swaps and other over-the-counter (OTC) derivatives had grown to $25.45 trillion, from $3.45 trillion on December 31, 1990. Advances in information technology and financial theory have helped reduce various barriers to competition. The increased speed and lower cost of communicating and transmitting data over large geographical distances have eliminated distance as an obstacle to competition.
Moreover, new financial theories and faster computers have helped financial firms handle large amounts of data at low cost and analyze the risks and returns created by new financial products. Swaps and other derivatives, which have been growing rapidly, are an example of such technology- and theory-dependent products. Because the tools and skills underlying them are not unique to any one sector of the financial services industry, no one sector has a monopoly on their use; thus, the list of major derivatives dealers includes banks, securities firms, and insurance companies. Regulators also have acted in ways to promote greater competition in the financial services industry. For example, the Federal Reserve Board has approved a number of additional activities for banks to offer, including providing investment advice, underwriting insurance related to the extension of credit, offering tax planning and preparation, processing data, and operating a credit bureau or collection agency. The Federal Reserve Board also approved bond and stock underwriting powers for Section 20 subsidiaries of bank holding companies. Effective in March 1997, the Federal Reserve Board enhanced these powers when it increased from 10 to 25 percent the share of total revenues a bank holding company's Section 20 subsidiary may derive from corporate equity and debt underwriting. On the basis of these decisions, banks have increasingly acquired or created securities broker-dealer affiliates or subsidiaries. OCC has amended its regulations to permit subsidiaries of national banks to engage in activities that OCC determines—on a case-by-case application basis—to be "part of or incidental to the business of banking." In addition to banks entering underwriting, an area associated with securities firms, a number of large securities firms have entered a traditional province of banks: commercial loans to corporate borrowers. Recently, securities firms have made and traded such loans, which are commonly linked with securities underwriting. Such services enable a firm to meet the full range of a customer's financing needs. In a number of instances, banks and securities firms have joined together to provide such loan and security facilities for customers. Increasing competition also affects insurance companies and insurance products. During the past several years, life insurance companies increasingly have moved away from traditional whole life and term insurance products and have focused instead on asset growth or investment products such as variable annuities. These products compete with stocks and bonds, retirement vehicles offered by banks, and stock mutual funds, and they are often sold by financial planners and securities brokers. As part of this competition, large, diversified financial firms are increasingly operating in what once were separate banking, insurance, and securities sectors, as discussed earlier. Banks have acquired investment banks, and many types of firms have acquired thrifts, which are similar to banks but can be owned by anyone. For example, a number of insurance companies have applied for thrift licenses. Securities firms have acquired firms that have enabled them to engage in banking activities. For example, in 1997, Merrill Lynch & Company, Inc., and the Travelers Group, Inc., which includes insurance companies and securities firms, both received federal thrift charters. In addition, insurance companies have acquired securities firms. For example, the Travelers Group acquired Salomon Brothers, Inc.
(primarily a securities trading firm) in November 1997, and it already owned Smith Barney and Company (primarily a retail brokerage firm). In addition, in April 1998, the Travelers Group and Citicorp announced their intention to merge and create a new entity that is to be called Citigroup. This would be the biggest corporate merger in history; however, there are questions about the implications of current banking laws for the merger. If the laws are not changed, it is possible the new entity would have to divest itself of certain operations, either in insurance or banking. To help Congress and others better understand current regulatory capital requirements, developments in those requirements, and regulatory issues these developments raise, the objectives of this report are to describe, for the banking, securities, futures, and life insurance sectors of the financial services industry, (1) regulatory views of the purpose of capital and current regulatory requirements; (2) the approaches of some large, diversified financial firms to risk measurement and capital allocation; and (3) issues in capital regulation and initiatives being considered for changes to regulatory capital requirements. To achieve these objectives, we interviewed officials from financial regulators, including OCC, the Federal Reserve Board, the Federal Reserve Bank of New York, FDIC, SEC, CFTC, the Office of Thrift Supervision, and the Departments of Insurance for New York and Illinois; academics and consultants who are considered experts in the financial services industry; rating agencies' analysts, including those of A.M. Best, Standard and Poor's, and Moody's Investors Service; officials of SROs, including the Chicago Board of Trade, the Chicago Board of Trade Clearing Corporation, the Chicago Mercantile Exchange, the National Futures Association, the New York Stock Exchange, and the National Association of Securities Dealers; officials of trade and industry associations, including the American Academy of Actuaries, the American Bankers Association, the American Council on Life Insurance, the Independent Bankers Association of America, the Institute of International Finance, NAIC, the New York Clearing House Association, and the Securities Industry Association; and officials of 16 large, diversified firms in the commercial banking, securities, futures, and insurance industries (see app. V for a listing of these firms). In addition, we reviewed U.S. government, international organization, trade association, academic, industry, and private firm documents, including regulations, annual and other published reports, papers and articles, industry journals, and information available at various sites on the World Wide Web. To determine the development of risk measurement and capital allocation systems in firms, we interviewed and obtained information from a number of large, diversified firms in the commercial banking, securities, futures, and insurance sectors (see app. V for a listing of these firms). We did not test the adequacy of any of the risk measurement and capital allocation systems discussed in this report. In selecting firms for this review, based on recommendations from SEC, we chose securities firms that were part of the Derivatives Policy Group (DPG). We chose commercial banks that appeared likely to be required to meet the market risk capital requirements that took effect on January 1, 1998, and life insurance companies that have been involved in the development of risk-based capital standards for that industry.
The securities firms we visited are large holding companies and include both SEC-registered broker-dealers and CFTC-registered FCMs. We interviewed officials who could speak about risk management and capital allocation systems for the consolidated financial firm. We developed and used a set of common questions in our discussions with these firms. In these interviews, we obtained information about the following: the most important risks faced by these firms, their risk measurement and capital allocation systems and methodologies, their internal risk management structures and uses of internal risk measurement information, the impact of current capital requirements on their operations, and possible future directions in capital regulation. We did our work in Washington, D.C.; New York; and Chicago between November 1996 and April 1998 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from OCC, the Federal Reserve Board, FDIC, SEC, and CFTC. These comments are reprinted in appendixes VI, VII, VIII, IX, and X. The agencies generally believed the report was comprehensive and balanced. In their comments, OCC, FDIC, and SEC expanded on a number of points made in the report pertaining to their industries. On June 2, 1998, the Washington Counsel of NAIC provided us with oral comments in which he characterized the report as reasonable. These organizations also provided technical comments, which have been incorporated where appropriate. Just as the financial regulators serve differing statutory purposes, they differ in their views on the purpose of regulatory capital. Bank capital standards are focused on maintaining the safety and soundness of banks, and capital is calculated on a going-concern basis. Capital standards for securities broker-dealers and FCMs are focused on protecting customers in the event of a broker-dealer or FCM failure and are calculated on a liquidation basis. Capital standards for life insurers are to help limit failures and protect claimants, and capital is calculated on a going-concern basis. In addition to reflecting differences in the regulators' views on the purpose of capital, regulatory capital requirements also reflect differences in what have historically been the dominant risks associated with the regulated entities. The bank capital requirements that apply to all banks have emphasized credit risk, because credit risk has long been the predominant risk for banks, which traditionally invested the largest part of their funds in loans. Recently, regulators added a market risk capital requirement for banks engaged in trading activities that create market risks. Capital requirements for securities broker-dealers and FCMs traditionally focused on liquidity and market risks and the effect of changing market prices on the value of their assets, in keeping with the dominant risk in their activities. Capital requirements for life insurers focus on traditional risks, such as actuarial risk, which is unique to the insurance industry, as well as other risks related to their assets and liabilities. Current capital requirements reflect a variety of efforts to relate required capital to the risks inherent in firms' activities. These include efforts to modify current rules to better reflect actual risks in firm activities as well as efforts to take advantage of new risk measurement techniques that are more sensitive to correlation among prices of assets and can more precisely measure risks.
Industry representatives with whom we spoke generally favor changing regulatory capital requirements to account more precisely for the risks in their activities, and they see progress in the recent changes made to those requirements. However, they also have concerns and see a need for additional improvement. Although the agencies that oversee banks, securities broker-dealers, FCMs, and life insurance companies all seek to protect customers and ensure the smooth functioning of the markets they regulate, their statutory purposes differ in various ways. The differences in regulatory purpose are reflected in the regulators' views of the purpose of regulatory capital. As shown in table 2.1, the regulatory purpose of agencies that oversee banks is to help ensure the safety and soundness of the banking and payments systems and minimize losses to the deposit insurance fund; the Federal Reserve Board also has responsibility for helping ensure the stability of the U.S. financial system. In this regard, regulators view capital as performing several important functions. Capital absorbs losses, thereby allowing banks to continue to operate as going concerns during periods when operating losses or other adverse financial results are being experienced. Capital also helps to promote public confidence, restrict excessive asset growth, and provide protection to depositors and the Bank Insurance Fund administered by FDIC. Depositors who are protected by deposit insurance may be less careful in their choice of banks. This behavior may, in turn, permit insured banks to operate less conservatively than they would without deposit insurance to shield them from depositors' concerns about the banks' safety and soundness. The consequence of both the banks' and the depositors' behavior is called "moral hazard." Regulators use capital requirements to mitigate the moral hazard that arises from deposit insurance protection. In addition, bank regulatory capital requirements are a measure regulators can use as a starting point in regularly assessing the financial condition of banks. A reduction in capital that causes the institution to approach the minimum required ratio is a warning to regulators that the institution's financial health is threatened and that regulatory intervention may be needed to protect depositors and other parties. Under the Prompt Corrective Action guidelines enacted as part of the Federal Deposit Insurance Corporation Improvement Act (FDICIA) of 1991, banking supervisors are required to increase intervention as a bank's capital ratio falls through various predetermined ratios before the bank runs out of capital. This intervention is meant to reduce the likelihood of bank failures, reduce the cost of failures that occur, and thus deter or minimize systemic risk. Minimum capital requirements also help protect the Bank Insurance Fund, which guarantees that depositors will receive par value up to $100,000 per depositor per insured institution if regulators close a bank and FDIC must liquidate it. For deposits exceeding the $100,000 limit, FDIC is to provide reimbursements based on the value of the assets sold when the bank is closed and liquidated. Bank compliance with capital requirements protects the Bank Insurance Fund because higher capital reduces the likelihood of bank failure and thus reduces the losses that FDIC is likely to incur in covering guaranteed deposits at failed banks.
FDICIA also imposed a requirement that a bank whose tangible equity falls to 2 percent or less of assets be deemed "critically undercapitalized" and generally be placed in conservatorship or receivership within 90 days. Although bank regulation, including capital standards, attempts to reduce the likelihood of failures, it is not meant to forestall all failures. Bank capital standards are focused on safety and soundness, and regulatory capital is calculated on a going-concern basis—that is, with the assumption that the bank will continue operating. In this way, bank capital regulation is focused on the continued operation of the banking system and is meant to ensure that payment services and the provision of loans to all customers, both large and small, will not be disrupted. As shown in table 2.1, the primary regulatory purposes of the SEC and CFTC capital standards are to ensure that broker-dealers and FCMs will have a pool of liquid assets available on a daily basis to meet their obligations to customers and other market participants. This protection does not shield customers from investment losses if the market value of an investment is less than its purchase price, and it is consistent with SEC's and CFTC's overall concern with ensuring the integrity of the securities and futures markets, respectively. These agencies' regulatory capital requirements are designed to provide assurance that broker-dealers and FCMs can fulfill their obligations to customers and other market participants in the event a broker-dealer or FCM is closed. The amounts owed to customers are based on credit balances (or cash) in customer accounts and the market value of customers' securities and futures positions at the broker-dealer or FCM. Minimum capital requirements also help protect the Securities Investor Protection Corporation (SIPC), a nonprofit membership corporation created by Congress under the Securities Investor Protection Act of 1970. Within certain limits, SIPC will return to customers cash and securities held at liquidated SIPC member broker-dealers. SIPC protects each customer up to $500,000 for claims for cash and securities, although claims for cash are limited to $100,000 per customer. The cash limit historically has tracked the insured-deposit amount at banks. SIPC does not protect investors from declines in the market value of their securities. Successful functioning of the net capital rule results in the orderly liquidation of a failing firm; prevents the need for federal court intervention; and reduces strains on SIPC's resources, including the SIPC membership assessment fund from which customers are paid. Generally speaking, state insurance regulators are to monitor the health and solvency of regulated life insurers in order to protect claimants. For state insurance regulators, the purposes of capital are similar to those for bank regulators. State insurance regulators impose capital requirements to try to limit life insurance company failures and thus help ensure the long-run viability of these insurance companies so that they can meet policyholders' claims in the future. However, state regulators regulate only insurance companies and not the insurance groups or the often large, diversified financial firms that own insurance companies. Insurance regulators generally do have responsibility for approving mergers or acquisitions of insurance companies.
Current regulatory capital requirements for the banking, securities, futures, and life insurance sectors vary in how they take into account the risks of regulated entities in determining minimum capital standards. The capital requirements differ, although the rules for securities broker-dealers and FCMs are similar. These differences reflect differing regulatory purposes, as discussed earlier, or differences in the types of activities and risks that are, or have been, dominant for the various types of regulated entities. To one degree or another, all of the regulators have adopted some form of "risk-based" capital regulation. However, because of differences in their purposes or in the historic risks faced by the regulated entities, the actual methods for assessing risks and determining capital levels continue to differ across regulators.

Initial bank risk-based capital requirements primarily emphasized credit risk, reflecting the predominance of lending activities by banks. In 1988, regulators in the United States and other countries who were part of the Basle Committee on Banking Supervision agreed to the Basle Accord, an internationally developed capital standards framework for internationally active banks. The accord's requirements were initiated in the United States in March 1990, with a 2-year phase-in period ending in full implementation in 1992. These requirements pertained primarily to credit risk; however, they were amended in 1996 to incorporate market risk requirements for specific types of assets that are often traded by internationally active banks.

In addition to the risk-based requirements, U.S. banking regulators also have minimum leverage capital requirements. These leverage capital standards were established prior to—and have been retained even after the implementation of—the risk-based capital standards. Also, in 1991, FDICIA created a capital-based framework for bank oversight and enforcement based on the use of increasingly stringent forms of prompt corrective action as an institution's leverage and risk-based capital ratios decline. (See app. I for a more detailed discussion of bank risk-based capital requirements.)

The 1988 accord's standards, which bank regulators and others describe as "risk-based," require banks to hold capital to cushion against potential losses arising primarily from credit risk. Although the accord pertains to internationally active banks, U.S. banking regulators have required all U.S. banks and bank holding companies, since 1992, to hold capital equal to at least 8 percent of the total value of their on-balance sheet assets and off-balance sheet items, after adjusting this value by a measure of relative risk (known as risk-weighting). According to regulatory guidelines on capital adequacy, the final supervisory judgment of a bank's capital adequacy may differ from the conclusions that might be drawn solely from the risk-based capital ratio, because the ratio does not incorporate other factors that can affect a bank's financial condition, such as interest rate exposure, liquidity risks, the quality of loans and investments, and management's overall ability to monitor and control financial and operating risks. The guidelines establish minimum ratios of capital to risk-weighted assets; banks are generally expected to operate well above these minimums. Banks are required to meet a total risk-based capital requirement equal to 8 percent of risk-weighted assets.
At a minimum, a bank’s capital must consist of core capital, also called tier 1 capital, of at least 4 percent of risk-weighted assets. Core capital includes common stockholders’ equity, noncumulative perpetual preferred stock, and minority equity investments in consolidated subsidiaries. The remainder of a bank’s total capital can also consist of supplementary capital, known as tier 2 capital. This can include items such as general loan and lease loss allowances, cumulative preferred stock, certain hybrid (debt/equity) instruments, and subordinated debt with a maturity of 5 years or more. The regulation limits the amount of various items included in tier 1 and tier 2 capital. For example, the amount of supplementary (tier 2) capital that is recognized for purposes of the risk-based capital calculation cannot exceed 100 percent of tier 1 capital. These capital standards were developed because regulators in the United States and in other countries wanted to address more adequately the credit risks posed by certain bank activities. By working with various countries to develop an international standard, regulators also attempted to encourage banks to strengthen their capital positions while minimizing any competitive inequality that might arise if requirements differed across countries. According to the original 1987 consultative paper issued by the Basle Committee, the target ratio of 8 percent capital to risk-adjusted assets represented a higher level of capital than banks in various countries were generally holding at the time. Recognizing this, the 1988 Basle Accord allowed 4 years for banks to come into full compliance with the required amount. The risk-weights for credit risk attempt to account for the relative riskiness of a transaction on the basis of its broad characteristics, such as a type of obligor (e.g., government vs. bank vs. a private sector borrower) and whether the transaction is on- or off-balance sheet. Assets with a relatively low likelihood of default are assigned lower risk-weights than assets thought to have a higher likelihood of default. Although the amount at risk is often associated with changing asset prices, the credit risk calculation does not use market price information to evaluate risks, except in the case of derivatives contracts. Because bank loans, which dominate credit risks, generally are not traded, market price information cannot be regularly observed and thus used to evaluate risk. Instead, the risk-weights for credit risk are broad categories arrived at through consensus among members of the Basle Committee. Under the credit risk rules, the adjustments of asset values to account for the relative riskiness of a counterparty involve multiplying the asset values by certain risk weights, which are percentages ranging from 0 to 100 percent. A zero risk-weight reflects little or no credit risk. For example, if a bank holds a claim on the U.S. Treasury, a Federal Reserve Bank, or the central government or central bank of another qualifyingOrganization for Economic Cooperation and Development (OECD) country, this asset is multiplied by a factor of 0 percent, which results in no capital being required against the credit risk from this transaction. For an obligation owed by another commercial bank in an OECD country, a bank must multiply the amount of this obligation by 20 percent, which has the effect of requiring the bank to hold capital equal to 1.6 percent of the value of the claim on the other bank. 
Loans fully secured by a mortgage on a 1-4 family residential property carry a risk-weight of 50 percent, thus requiring the bank to hold capital equal to 4 percent of the value of the mortgage. For an unsecured obligation owed by a private corporation or individual, such as a loan without collateral, a bank must multiply the amount of the unsecured obligation by 100 percent, which requires the bank to hold capital equal to the full 8 percent of the value of the obligation. The U.S. regulations place all credit risks into one of four broad categories and treat each product in a given category as if it carries an equal level of credit risk—that is, the capital requirement for each asset in the category is based on the same percentage risk-weight. Although these risk-weightings are based primarily on the type of obligor, qualifying collateral (such as cash and government securities) and qualifying guarantees (including bank and government guarantees) are also recognized.

To adjust for credit risks created by financial positions not reported on the balance sheet, the regulations provide conversion factors to express off-balance sheet items as equivalent on-balance sheet items, as well as rules for incorporating the credit risk of interest-rate, exchange-rate, and other off-balance sheet derivatives. These positions are converted into a credit equivalent amount, and the standard loan risk-weight for the type of customer is then applied. The risk-weight is applied according to the type of obligor, except that in the case of derivatives the maximum risk-weight is 50 percent.

In September 1996, U.S. bank regulators issued a final rule based on the Basle Committee's January 1996 amendment to the Basle Accord, which was designed to incorporate market risks into the risk-based capital standards. As applied by U.S. bank regulators, the purpose of the amendment was to ensure that banks with significant exposure to market risk maintain adequate capital to support that exposure. Because the market risk rule applies to assets that are commonly traded in public markets and marked to market, the risk calculations are based, in part, on measuring expected movements in prices and the risks in the current financial position of the institution.

The U.S. rules apply to any bank or bank holding company whose trading activity equals 10 percent or more of its total assets or $1 billion or more. In addition, a bank regulator can include an institution that does not meet these criteria if deemed necessary for safety and soundness purposes or can exclude an institution that meets them. At the end of 1996, 17 banks and 17 bank holding companies met the criteria. The new rules became mandatory January 1, 1998, but banks could have begun implementing them as of January 1, 1997.

The final market risk rule requires that institutions adjust their risk-based capital ratio to take into account both the general market risk and the specific risk of all "covered positions," both on- and off-balance sheet. The rule does not cover all market risks faced by banks; for example, interest rate risk on nontrading assets such as commercial loans and mortgages is not included. The rule requires that banks use their own internal models to measure their daily "value-at-risk" (VAR) for covered positions. VAR reflects changes in prices; price volatility or variability; and correlation among the prices of financial assets (that is, the extent to which asset prices move together).
A bank’s internal model may use any generally accepted VAR measurement technique, but the regulation requires the level of sophistication and accuracy of the model to be commensurate with the nature and size of the bank’s covered positions. To adapt banks’ internal models for regulatory purposes, bank regulators developed minimum qualitative and quantitative requirements that all banks subject to the market risk standard are to use in calculating their VAR estimate for determining their risk-based capital ratio. The qualitative requirements reiterate the basic elements of sound risk management. For example, banks subject to the market risk capital requirements are required to have a risk control unit that reports directly to senior management and is independent of business trading units. According to the final rule, the quantitative requirements are designed to ensure that an institution has adequate levels of capital and that capital charges are sufficiently consistent across institutions with similar exposures. These requirements call for each bank to use common parameters when using its internal model for generating its estimate of VAR. These common parameters include, among others: daily calculation; an assumed holding period of 10 days; a 99 percent confidence level; the use of empirically verified correlation between risk types; and the use of at least 1 year of historical data, with the data updated at least once every 3 months. The total market risk charge is the sum of the general market and specific risk charges. The market risk charge starts from the estimate of VAR. Because the VAR models may not capture unusual market events, the general market risk charge is then the higher of the previous day’s VAR, or the average daily VAR over the last 60 business days multiplied by at least 3. The specific risk charge can be determined by a bank’s internal model if the model is approved by the regulator, or by calculations specified in the regulation if the model is not approved. The charge for specific risk is added to the general market risk amount to obtain the total market risk capital charge. For banks subject to the market risk charge, the market risk regulation includes an additional tier of qualifying capital—tier 3. Tier 3 capital is unsecured subordinated debt that is fully paid up, has an original maturity of at least 2 years, and is redeemable before maturity only with approval by the regulator. The final rule also requires banks to conduct periodic backtesting beginning in January 1999. More specifically, banks will be required to compare daily VAR estimates generated by internal models against actual daily trading results to determine how effectively the VAR measure identified the boundaries of losses, consistent with the predetermined statistical confidence level. The regulation will require bank regulators to use the backtesting results to adjust the multiplication factor (multiplier) used to determine the bank capital requirement. In addition to the risk-based capital requirements, U.S. banks are subject to a minimum leverage ratio, which is a requirement that tier 1 capital be equal to a certain percentage of total assets, regardless of the type or riskiness of the assets. Leverage ratios have been part of bank regulatory requirements since the 1980s. 
In addition to the risk-based capital requirements, U.S. banks are subject to a minimum leverage ratio, which requires that tier 1 capital equal a certain percentage of total assets, regardless of the type or riskiness of the assets. Leverage ratios have been part of bank regulatory requirements since the 1980s and were continued after the introduction of risk-based capital requirements as a cushion against risks not explicitly covered by those requirements, such as operational weaknesses in internal policies, systems, and controls. According to FDIC, leverage standards also help to restrict excessive asset growth and minimize potential moral hazard by ensuring that any asset growth is funded by a commensurate amount of owners' equity. Since the early 1990s, banks have been specifically required to hold tier 1 capital equaling between 3 and 5 percent of their total assets, depending on a regulatory assessment of the strength of their management and controls. A bank must hold at least this leverage-based amount of capital; however, if the risk-based capital calculation yields a higher requirement, the higher amount is the minimum level required.

In 1997, the risk-based capital ratios of the six large banks we spoke with all exceeded the minimum 8 percent total requirement, as shown in table 2.2. In addition, the ratios for tier 1 capital, which is considered the strongest form of capital, exceeded the 4 percent minimum requirement at all of the banks. According to regulatory officials, the risk-based capital ratios of almost all U.S. banks exceed the minimum required levels. According to FDIC, fewer than 10 percent of U.S. banks actually report risk-based capital figures by completing the Call Report Risk-Based Capital forms. When calculating their capital ratios, banks are permitted to perform a simple test that, once passed, negates the need to do the more complicated calculations. Over 90 percent of banks pass this de minimis test, and an algorithm approximates their risk-based capital level. Bank regulators told us they believe prompt corrective action has been influential in keeping bank capital levels up. In addition, several years of record-breaking earnings have facilitated financial firms' capital accumulation.

As discussed earlier, regulators of securities broker-dealers and FCMs seek to protect the customers of the firms they oversee as well as the integrity of their markets. The regulatory foundation of customer protection efforts includes capital requirements in the form of net capital rules, together with customer protection and funds segregation rules, which are designed to protect the regulated entity's customers—and thereby other market participants—from the monetary losses and delays that can occur when the regulated entity fails. The objective of protecting investors extends neither to protecting the going concern of broker-dealers or FCMs nor to protecting investors' holdings against market losses.

These rules, respectively, require SEC-registered broker-dealers and CFTC-registered FCMs—the regulated entities—to continually maintain sufficient liquid assets to protect the interests of customers and other market participants if the firm ceases doing business and, as applicable, to keep customer assets segregated from the regulated entity's own assets. The rules focus specifically on the regulated entity's financial condition and activities. As noted above, SEC and CFTC do not have statutory authority to regulate the holding companies of broker-dealers or FCMs, and the financial condition of holding companies or other affiliates of the regulated entity is generally not included in the computation of net capital or in compliance with the customer segregation rules.
SEC and CFTC calculate broker-dealer and FCM liquid capital, respectively, in a similar manner. However, their capital requirements, which are based on ratios of capital either to assets or to liabilities of the firm, are calculated differently. Capital standards for brokers and dealers based upon liquidity have been in effect since 1934, when the Securities Exchange Act was adopted. According to SEC, it adopted the SEC Uniform Net Capital Rule in 1975 in response to congressional concerns arising from the unprecedented financial and operational crisis in the securities industry from 1967 to 1970. It is a conservative, liquidity-based capital standard that requires a broker-dealer to maintain a minimum level of liquid capital sufficient to promptly satisfy all of its obligations to customers and other market participants and to provide a cushion of liquid assets to cover potential market, credit, and other risks. The rule focuses on the registered broker-dealer; therefore, the assets and liabilities of a related entity (e.g., an affiliate or parent) of the broker-dealer are generally not taken into account in the calculation of net capital.

Net Capital Requirements: With certain exceptions, the net capital rule requires a registered broker-dealer to maintain the greater of an absolute minimum dollar amount of net capital, which depends on the nature of the broker-dealer's business, or a specified minimum ratio of net capital to either its liabilities or its customer-related receivables. Under the basic (or aggregate indebtedness) method, generally used by smaller broker-dealers, the capital a broker-dealer must maintain is the greater of $250,000 or 6-2/3 percent of aggregate indebtedness (generally all the liabilities and obligations of the broker-dealer). Under the alternative method, which tends to be used by larger broker-dealers, a broker-dealer is required to maintain capital equal to the greater of $250,000 or 2 percent of its total customer-related receivables (money owed by customers and certain other market participants to the broker-dealer). If the broker-dealer is also registered as an FCM with CFTC under the Commodity Exchange Act (CEA)—that is, dually registered—it must maintain capital equal to the greater of SEC's minimum requirements, as described above, or 4 percent of the customer funds (money owed to customers by the FCM) that the broker-dealer is required to segregate pursuant to the act and the regulations thereunder. The basic and alternative methods are intended to allow a firm to increase its customer business only to the extent that its net capital can support the increase.
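The minimum-requirement tests just described can be sketched as follows; all dollar figures are hypothetical, and the percentages are those of the rule as described above.

```python
# Sketch of the SEC minimum net capital requirement: the greater of a
# fixed dollar minimum or a ratio-based amount, with an additional test
# for dually-registered broker-dealer/FCMs. Figures are hypothetical.

FIXED_MINIMUM = 250_000

def basic_method(aggregate_indebtedness):
    # Greater of $250,000 or 6-2/3 percent (1/15) of aggregate indebtedness.
    return max(FIXED_MINIMUM, aggregate_indebtedness / 15)

def alternative_method(customer_receivables, segregated_funds=0.0):
    # Greater of $250,000 or 2 percent of customer-related receivables;
    # a dually-registered firm must also hold at least 4 percent of the
    # customer funds it is required to segregate.
    sec_minimum = max(FIXED_MINIMUM, 0.02 * customer_receivables)
    return max(sec_minimum, 0.04 * segregated_funds)

print(f"Basic method:       ${basic_method(60_000_000):,.0f}")
print(f"Alternative method: ${alternative_method(500_000_000, 300_000_000):,.0f}")
```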
Computing Net Capital: The process of computing a broker-dealer's regulatory net capital involves separating its liquid and illiquid assets. Liquid assets are assets that can be converted easily into cash with relatively little loss of value. Assets that are considered illiquid are given no value when net capital is computed (a 100 percent capital charge). Only liquid assets count in the calculation of net capital, because a broker-dealer must have sufficient capital to close its business within a short time frame and sufficient liquid assets to meet its liabilities, including those owed to customers.

To begin computing net capital, U.S. Generally Accepted Accounting Principles (GAAP) equity must be determined by subtracting the broker-dealer's GAAP liabilities from its GAAP assets. Certain subordinated liabilities are added back to GAAP equity because the net capital rule allows them to count toward capital, subject to certain conditions. Deductions are then taken from GAAP equity for illiquid assets, such as the value of exchange seats and fixed assets. Unsecured receivables are also deducted.

The net capital rule further requires prescribed percentage deductions from GAAP equity, called "haircuts." Haircuts provide a capital cushion to reflect an expectation about possible losses on the proprietary securities and financial instruments held by a broker-dealer resulting from adverse events. The amount of the haircut on a position is a function of, among other things, the position's market risk and liquidity. A haircut is taken on a broker-dealer's proprietary position because the proceeds received from selling assets during a liquidation depend on the liquidity and market risk of the assets: less liquid assets and assets with greater price volatility are more likely to take longer to sell and to be sold at a loss. Thus, the less liquid the position, the greater the haircut on the position. Haircuts generally recognize limited correlation among prices that can affect the actual values received when assets are liquidated. The final figure, after all adjustments are made, is referred to as net (or liquid) capital and is compared to the minimum requirement to determine capital compliance. (See appendix II for greater discussion of the SEC net capital rule.)
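The computation outlined above can be sketched as follows; the amounts and haircut percentages are hypothetical and greatly simplified relative to the actual rule.

```python
# Simplified sketch of the net capital computation: GAAP equity, plus
# qualifying subordinated debt, less illiquid assets and unsecured
# receivables, less haircuts on proprietary positions. All figures and
# haircut percentages are hypothetical.

gaap_assets = 900_000_000
gaap_liabilities = 820_000_000
qualifying_subordinated_debt = 30_000_000  # added back per the rule
illiquid_assets = 25_000_000               # e.g., exchange seats, fixed assets
unsecured_receivables = 5_000_000

# (market value, haircut percentage); less liquid or more volatile
# positions carry larger haircuts.
proprietary_positions = [
    (100_000_000, 0.02),  # hypothetical haircut on government securities
    (40_000_000, 0.15),   # hypothetical haircut on equity securities
]

gaap_equity = gaap_assets - gaap_liabilities
tentative_net_capital = (gaap_equity + qualifying_subordinated_debt
                         - illiquid_assets - unsecured_receivables)
haircuts = sum(value * pct for value, pct in proprietary_positions)
net_capital = tentative_net_capital - haircuts

print(f"Net capital: ${net_capital:,.0f}")  # compared to the minimum requirement
```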
Under CFTC’s net capital rule (Rule 1.17), FCMs must maintain adjusted net capital in an amount that is no less than the greater of (a) a prescribed minimum fixed-dollar amount of $250,000; (b) a variable minimum amount of 4 percent of customer funds required to be segregated, subject to certain adjustments; (c) the amount of adjusted net capital required by a registered futures association of which it is a member; or (d) if the FCM is also a registered broker-dealer, which is known as being “dually-registered,” the amount required under SEC’s net capital rule. Under CFTC’s capital rule, an FCM calculates adjusted net capital as the amount by which current assets (cash and other assets that are reasonably expected to be realized as cash in a year) exceed its adjusted liabilities (the FCM’s total liabilities minus certain subordinated liabilities) and various regulatory charges or adjustments—such as percentage reductions in the market value of certain proprietary positions and undermargined customer accounts. Adjusted net capital is intended to provide a cushion for market and credit risks and to give a firm with customer accounts time to transfer accounts and liquidate the accounts of the defaulting customers in an orderly manner. Some regulators and firm representatives told us that because a broker-dealer must cease conducting a securities business if its net capital falls below the minimum requirement, broker-dealers generally maintain capital greater than the minimum requirement (a.k.a. excess capital). As shown in table 2.3, the amount of excess net capital held by the five large securities firms in our study, which are all dually-registered as FCMs, ranged from $974 million to $1.845 billion. Some of the firm representatives we interviewed stated that one reason they held such large amounts of excess capital is that their counterparties required them to do so in order to be willing to conduct business with them. In addition to the minimum base requirements, the regulatory net capital rules and the rules of the various SROs establish early warning capital levels that exceed the minimum requirement. These capital triggers allow regulators and SROs to identify at early stages broker-dealers and FCMs that are experiencing financial difficulties and to take corrective actions to protect customers and the marketplace. Broker-dealers and FCMs are required to promptly notify their regulators when early warning violations occur. SROs are required to notify SEC and CFTC and place restrictions on the activities of regulated entities whose net capital falls to the early warning levels. For example, under the SEC net capital rule, a broker-dealer that uses the alternative method of calculating net capital may not withdraw equity capital in any form to pay shareholders if its net capital is less than 5 percent of its customer-related receivables. When an FCM’s adjusted net capital falls below its early warning level, which is generally 150 percent of the minimum net capital amount, it must promptly notify CFTC. In addition, CFTC requires FCMs to report to CFTC when a series of events, on a net basis, causes a 20 percent or greater reduction in their net capital. As soon as a broker-dealer’s or FCM’s net capital amount falls below the minimum net capital level, the firm must immediately cease conducting business and it must either demonstrate that it has come back into compliance with net capital requirements or liquidate its operations. 
Closing a broker-dealer or FCM before insolvency makes the firm a viable merger candidate because of its residual value and generally allows the regulated entity's customers and other market participants to be fully compensated when the firm is liquidated.

After a 2-year test period using the Options Clearing Corporation's Theoretical Intermarket Margining System (TIMS), SEC amended its net capital rule in early 1997 to allow broker-dealers to use theoretical option pricing models (i.e., statistical models) to calculate required capital charges for exchange-traded (i.e., listed) equity, index, and currency options and their related hedged positions. At this time, the Options Clearing Corporation is the only approved vendor, and its TIMS the only approved options pricing model. According to SEC, this methodology will relate capital charges (haircuts) on these instruments more closely to the market risk inherent in broker-dealers' options positions, because it permits the risk calculations for listed options to reflect market prices, price volatility, and correlation among asset prices.

According to the regulations, the methodology is a two-step process. In the first step, third-party source models and vendors approved by a designated examining authority (i.e., an SRO) are used to estimate the potential gain and loss on the individual portfolios of the broker-dealers. In the second step, the approved vendors provide, for a fee, a service by which broker-dealers may download the results generated by the option pricing models and then compute the required haircuts for their individual portfolios. (See app. II for greater discussion of the salient features of the methodology.) Adoption of this methodology is the first time SEC has formally permitted the use of statistical models, which reflect price volatility and correlation, in setting regulatory capital requirements. The effective date of the amendment was September 1, 1997. SEC, CFTC, and some SROs are exploring other possible approaches to relate regulatory capital charges more closely to the actual risks inherent in a firm's operations. These initiatives are discussed in chapter 4.

Both SEC and CFTC have rules that require the segregation of customer funds from firm funds. The SEC rule complements its net capital rule and is designed to prevent the misallocation or misuse of customer funds and securities. The CFTC rule likewise complements its net capital rule and provides for the safeguarding of customer funds by requiring that they be segregated from the FCM's own funds.

The SEC customer protection rule attempts to prevent the misallocation or misuse of customer funds and customer securities by broker-dealers. The rule applies to carrying firms because they hold customer assets. Working in conjunction with SEC's net capital rule, it is designed to protect the regulated entity's customers from the monetary losses and delays that can occur when the regulated entity fails. The customer protection rule has two parts: (1) possession or control of all customers' fully paid and excess margin securities and (2) a special reserve bank account. The first part prevents broker-dealers from using customer securities to finance the firm's proprietary activities, because all customers' fully paid and excess margin securities must be in the possession or control of the broker-dealer. The rule also requires the broker-dealer to maintain a system capable of tracking fully paid and excess margin securities daily.
The broker-dealer is required to keep all customer fully paid and excess margin securities segregated from the broker-dealer's own assets and maintained free of all claims or liens.

The second part of the customer protection rule involves customer cash kept at broker-dealers. When customer cash—the amount the firm owes customers (credits)—exceeds the amount customers owe the firm (debits), the broker-dealer must keep the difference in a special reserve bank account. The broker-dealer is to calculate the amount of the difference weekly, using the reserve formula specified in the rule (a minimal sketch of this computation appears below). If debits exceed credits, no deposit is required. Broker-dealers may not use customer margin securities and cash to finance their operations or proprietary trading activities, except to finance other customers' transactions. Also, creditors of a failed securities broker-dealer cannot claim assets from the broker-dealer's customer property account.

Section 4d(2) of the CEA and CFTC rules 1.20-1.30 provide for the safeguarding of customer funds by requiring such funds to be segregated from funds belonging to the FCM. Similar to the SEC rule, the CFTC segregation rule complements its net capital rule and exists to ensure that FCMs do not mix customer funds with their own. In the event of a firm's insolvency, customer funds would be clearly identified as belonging to customers and would not be available to creditors of the firm. The rule requires that funds belonging to an FCM's customers be separately accounted for; segregated as belonging to commodity futures or option customers; and, when deposited with any bank, trust company, clearing organization, or another FCM, deposited under an account name that clearly identifies them as such and shows that they are segregated as required by the act and regulations. Also, each FCM is required to obtain and retain an acknowledgment from such bank, trust company, clearing organization, or FCM that it was informed that the customer funds deposited therein are those of commodity or option customers and are being held in accordance with the provisions of the act and regulations. On a daily basis, FCMs are to compute the customer funds they are required to segregate on the basis of funds received from customers and the daily mark to market of customer positions. CFTC's segregation rule requires that 100 percent of each customer's funds be segregated from the FCM's funds. Unlike securities broker-dealers, FCMs generally cannot use one customer's funds to finance another customer's transactions. Thus, CFTC's segregation requirements provide protection through the deposit of all customer funds in segregated accounts.

Under SEC requirements, generally the net amount owed to customers is deposited in a bank account, with the assumption that money receivable from the broker-dealer's customers will be collected and paid to the customers having credit balances in their accounts and that any shortfall will be covered by the amount deposited in the bank account set up for customers. In addition, SIPC provides insurance protection for securities customers of broker-dealers in the event there are not enough funds on deposit in the bank account. The commodities industry does not have a government-sponsored insurance program that protects customer accounts against losses due to FCM insolvency.
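A minimal sketch of the weekly reserve computation follows; the actual reserve formula contains many more line items, and the amounts here are hypothetical.

```python
# Sketch of the SEC customer reserve computation: if the cash a firm
# owes customers (credits) exceeds the cash customers owe the firm
# (debits), the excess must be deposited in the special reserve bank
# account. Amounts are hypothetical; the actual formula has many more
# line items.

customer_credits = 750_000_000  # cash the broker-dealer owes customers
customer_debits = 600_000_000   # cash customers owe the broker-dealer

required_reserve_deposit = max(0, customer_credits - customer_debits)
print(f"Required reserve deposit: ${required_reserve_deposit:,.0f}")
# Computed weekly under the rule; if debits exceed credits, no deposit
# is required.
```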
Together, these customer protection rules are designed to protect (1) the customers and other market participants of broker-dealers and FCMs from the monetary losses and delays that can occur when the regulated broker-dealer or FCM fails, by facilitating the orderly unwinding of a failed firm through liquidation, and (2) the integrity of the securities and futures markets.

According to NAIC, capital requirements have been an important tool for limiting insolvency costs throughout the history of insurance regulation. Initially, states enacted statutes that required a specified minimum amount of capital and surplus for an insurance company to enter or remain in the business. In some states, a single dollar amount of minimum capital and surplus was applicable to all insurers, regardless of the lines of insurance they wrote. This requirement was, in effect, an entrance requirement and generally did not vary with the size of the insurer or the risks that a company accepted. Thus, the minimum amount of required regulatory capital was unlikely to bear any relationship to the amount of risk on the books of any particular insurer.

In the latter half of the 20th century, according to NAIC, changes within the insurance industry itself and in the economic environment in which it operated raised questions about the long-term viability of traditional insurance products and led insurers to offer new products, including variable annuities, variable life insurance, universal life insurance, single-premium deferred annuities, and guaranteed investment contracts. In NAIC's view, competition among sellers of these products led life insurers to seek higher returns on their investment portfolios, and some of them sought such returns without sufficient consideration of the accompanying higher investment risks.

According to NAIC, an increase in the number and size of life insurer insolvencies from the 1960s through the 1980s led insurance regulators to believe they needed new tools to deal with changes in the industry resulting from new products and investment strategies. Because most states required a fixed minimum amount of capital regardless of the risks undertaken in a company's insurance and investment operations, regulators believed that the traditional statutory insurance capital requirements were not sufficiently flexible. By 1990, according to NAIC, a number of states were experimenting with risk-based capital formulas for regulatory purposes. NAIC became interested in risk-based capital in 1989. Its working group and advisory committee developed and tested the life risk-based capital formula, which was approved by NAIC in December 1992 for first use with the 1993 annual statement filed in March 1994.

According to NAIC, the risk-based capital formula is intended to determine the minimum amount of capital an insurer needs to avoid triggering regulatory action. The amount of capital required varies with the risks an insurer is assuming in its insurance and investment operations, as well as the normal risks to which all businesses are subject. The formula requires companies to hold minimum percentages of various assets and liabilities as capital, with these percentages based on the historical variability of the value of those assets and liabilities. Companies are free to make their own capitalization decisions, commensurate with their own level of risk tolerance, as long as they remain above the regulatory minimum risk-based capital thresholds.
In NAIC’s view, its formula, in effect, imposes a minimum and uniform degree of risk aversion on all companies, but the formula also allows companies to operate freely at any given level above the minimum threshold. The NAIC life insurance risk-based formula classifies all of the risks into four major categories: asset risk, insurance risk, interest rate risk, and all other business risk. The formula consists of a series of risk factors that are to be applied, usually as multipliers, to selected assets, liabilities, or other specific company financial data to establish the minimum capital needed to bear the risk arising from that item (similar to risk-weights in banking). The asset risks are the risks of asset defaults and decreases in market value. For example, the risk factor for cash in the formula is 0.003, which indicates that an insurer must maintain capital equal to three-tenths of 1 percent of its cash holdings to absorb the risk of loss in cash in a bank failure. At the other end of the range, the multiplier for publicly traded common stocks is 0.300, which indicates a requirement for capital equal to 30 percent of the value of the stocks to protect against downturns in the market. The formula also includes charges for risks arising from the ownership of subsidiaries and affiliates, which vary with the nature of these entities. According to NAIC data, asset risks represent by far the largest proportion of risk among the four categories faced by the life insurance industry as a whole. The insurance risks, which are unique to the insurance industry, are the risk of underpricing or unfavorable developments in mortality or morbidity. NAIC developed a series of risk factors to determine the capital necessary to absorb those risks that are to be applied to the net amount at risk (face amount less reserves) for life insurance. According to NAIC data, insurance risks are second in magnitude among the four categories of risks for the life insurance industry as a whole. However, for a large number of relatively small companies, this component is the dominant risk-based capital risk. NAIC defines interest rate risk as the chance that a change in interest rates will result in an insurer not earning enough return on its investments to meet its interest obligations under its various insurance and annuity contracts. There is also a risk that changes in interest rates will spur disintermediation. The interest rate risk depends on how closely the assets and liabilities are matched in time. The formula is concerned with the risks related to annuity and pension business. Interest rate risk is third in magnitude among the four categories of risk for the life insurance industry as a whole. The all-other-business-risk category encompasses risks not included elsewhere in the formula. In developing the risk-based capital formula, the working group recognized that all companies are subject to some risks, such as litigation, that are not contemplated in the parts of the formula used for other categories. However, the group concluded that the derivation of appropriate risk factors for most of these risks was not possible. Also, these risks vary from one company to another. Initially, NAIC decided that the only risk factor to be included in the risk-based capital formula would be a charge for the risk of guaranty fund assessments. In addition, the risk-based capital formula also requires the performance of sensitivity tests to indicate how sensitive the formula is to changes in certain risk factors. 
The sensitivity tests require the company to recalculate its risk-based capital using revised risk factors for certain specified risks and to report the difference between the basic calculations and the sensitivity tests. The purpose of the tests is to provide additional information for company management and regulators.

In NAIC's view, the true impact of the risk-based capital system lies in the Risk-Based Capital for Insurers Model Act (the Model Act), which NAIC developed and recommended that the states adopt. When adopted by a state, this act gives the state's chief insurance regulator the authority to act on the results generated by the risk-based capital formula. The act requires each insurer to file a report with NAIC; the commissioner of the insurer's domiciliary state; and the commissioner of any state in which the insurer is licensed, if that state's insurance commissioner requests it in writing. In their annual reports, insurers are also required to report their Authorized Control Level Risk-Based Capital, which is the total risk-based capital an insurer needs to hold to avoid being taken into conservatorship. (See app. III for additional information on life risk-based capital regulations.)

The interviews we conducted with representatives of large, diversified firms; industry and rating agency officials; and regulators indicated generally positive views of the revisions made to banking and life insurance capital rules in the past several years to account more precisely for actual risks ("risk-based" capital requirements). Representatives of banks and life insurers said that the changes were a step in the right direction, although some also said that further improvements were needed. Representatives of many of the large financial firms we interviewed generally said that the current requirements of the net capital rule did not correlate well with actual risks, and several said that the net capital rule affected their decisions about where to conduct certain activities, such as derivatives.

Bankers, regulators, and industry and rating agency officials we spoke with generally believe the current risk-based capital standards for banks are an improvement over the former requirements but still have limitations. For example, one regulator and one rating agency commented that although the current credit risk standards are crude, they are much better than the previous leverage ratio requirement, which did not vary with differences in risk levels. In the view of the Chairman of the Federal Reserve Board, the risk-based capital accord of 1988 had shortcomings, but it was a genuine step forward at the time it was developed. In the view of the Comptroller of the Currency, the accord highlighted and ultimately helped reverse the slippage in bank capital levels worldwide; focused attention on the whole concept of risk as a tool for both bank managers and bank supervisors; advanced the effectiveness of bank supervision worldwide; and gave official recognition to the growing importance of off-balance sheet activities in bank operations.

Some bank officials we spoke with commented that the current credit risk standards are nonetheless crude and imprecise, primarily because the risk-weights are not adjusted for asset quality within each broad class of assets.
Institute of International Finance officials said that the credit risk rules offer perverse incentives for banks to take on riskier loans, in that they encourage banks to go up the yield curve in pursuit of a return on capital. That is, a bank makes more long-term loans, which tend to carry higher interest rates than short-term loans, thereby simultaneously increasing its interest rate risk and its potential returns. The Chairman of the Federal Reserve Board noted a number of weaknesses in the risk-based capital structure for credit risk, including its inability to adjust weights for hedging, portfolio diversification, and management controls—adjustments that depend on changing price volatility and correlation among prices. Another weakness noted by regulators is that the current risk-based structure does not consider all types of risk and is not flexible enough to respond to new market developments and products.

Officials of one bank told us that they do not manage to regulatory capital levels, because the credit risk-based capital requirements provide the wrong incentives by not distinguishing among the quality of products in the same asset class. Officials of two banks commented that they are not constrained by regulatory capital requirements, because assets can always be securitized so that capital will not have to be held against them, or banks can move to riskier assets in each credit risk category to obtain higher returns. An official of another bank felt that the credit risk standards needed to be realigned to match current credit management practices in the industry.

Many bankers we spoke with generally felt the new market risk requirements, which are based on price volatility and correlation, were a step in the right direction and represented a recognition of standard risk management practices and principles. However, one bank told us that even this new requirement will force it to hold unrealistic levels of capital because of the multipliers imposed on the bank's internal model. One regulator commented that a limitation of the new market risk requirement is that it covers market risk in a bank's trading book but not in its banking book, which is where many banks have exposure to market risk. Others commented that although managers adjust their books daily in practice, the regulatory VAR is calculated with a 10-day holding period and thus, they believe, ignores this day-by-day adjustment process. Two rating agencies commented that even after the inclusion of market risk, other important risks to banks, such as operational and liquidity risks, are not quantified.

SEC believes the current haircut approach of the net capital rule has several advantages. First, it requires an amount of capital that will be sufficient as a provision against losses, even for unusual events. Second, it is an objective, although conservative, measurement of risk in positions that allows the regulator to compare firms with one another. Third, the current methodology enables examiners to readily determine whether a firm is properly calculating haircuts. SEC believes there are also weaknesses associated with determining capital charges on the basis of fixed percentage haircuts. For example, the current method of calculating net capital by deducting fixed percentages from the market value of securities can allow only limited types of hedges without becoming unreasonably complicated. In this way, the rule does not account for historical price correlation between foreign securities and U.S.
securities or between equity securities and debt securities. By failing to recognize offsets from these correlations between and within asset classes, the fixed percentage haircut method may cause firms with large, diverse portfolios to reserve capital that overcompensates for market risk.

Representatives of the securities firms, rating agencies, and industry associations we spoke with generally felt that the current net capital rule's requirements do not correlate well with the actual risks in firms' activities. Industry officials told us that the current net capital rule does not deal well with hedging or other risk-reducing strategies, which are based on price volatility and correlation. Representatives of two firms commented that regulatory capital rules constrain their business decisions because the rules require the firms to hold what they view as excessive capital for certain activities. Three firms told us that the net capital rule affects where they conduct certain business activities, such as derivatives transactions, foreign exchange, and bridge financing. Some industry officials said they are forced to conduct these activities in unregulated entities because of the high haircuts the net capital rule would impose if a broker-dealer conducted them. Representatives of another firm said the regulatory structure drives the holding company structure, which they consider an inefficient and expensive way to organize a business.

Firm representatives told us they have businesses in many countries and are required to provide information to each country's regulator. No authority regulates all of the activities of these firms; therefore, even though the firms provide a great deal of information to regulators, no regulator knows the condition of the entire firm. One rating agency commented that broker-dealers have shifted risks to other parts of the firm in response to net capital requirements.

Representatives of three futures SROs commented that the strengths of CFTC's net capital rule are that it is easily understood, easily calculated, and easily verified by regulatory auditors. The weaknesses they saw in the rule were that (1) it applies only to the funds of domestic customers on deposit with FCMs, so it misses noncustomers and foreign customers; (2) it misses coverage of some risks found in affiliates and internationally; (3) it creates incentives for FCMs to return excess margin funds to customers, because such funds can increase an FCM's segregation requirement and therefore its capital requirement; and (4) it does not deal well with the complexities of exotic instruments.

Life insurance companies, rating agencies, insurance regulators, and insurance association officials we spoke with generally felt that risk-based capital requirements were a step forward but that improvements were needed. Insurance regulators commented that the main strength of the requirements is that they permit regulators to close a failing company. Similarly, representatives of two firms said that an advantage of the requirements is that, by allowing for graduated regulatory action, they give regulators a tool they can use before a firm has to be closed. Representatives of one firm said that the effect of the requirements was to get weaker companies to increase their capital levels. Representatives of another firm commented that the most important aspects of risk-based capital requirements are their objectivity (auditability) and completeness.
Representatives of one rating agency commented that the insurance risk-based capital requirements have raised awareness of risk in the industry, and representatives of two rating agencies said they saw a favorable trend in capitalization after the requirements were adopted. One regulator commented that the risk-based capital requirements act as a floor and that firms tend to hold more capital.

Life insurance industry officials we spoke with generally said that the current requirements do not cover all risks equally well and that some changes are needed. (See ch. 4 for initiatives under consideration.) These officials saw other limitations in the risk-based capital standards, including that the model is static; that it is a lagging indicator; that it does not address parent/affiliated company relationships; that it has difficulty quantifying the risks in new products; that it does not deal well with diversification or with derivatives-based risks; that it is not strong on interest rate risk; and that it concentrates too much on credit risk. One regulator commented that because the risk-based capital formula does not address risks evenly, firms have an incentive to alter their business.

As discussed earlier in this chapter, regulators are increasingly using the results of the risk measurement systems of large, diversified firms in the calculations that determine regulatory capital requirements, thus attempting to better link capital with firms' actual risk. Specifically, bank regulators use the market risk measures of large banks in setting the market risk component of risk-based capital, and SEC has recently allowed firms to use option pricing models to calculate some capital charges. Along with other options, SEC and CFTC are exploring possible further reliance on the results of firms' risk measurement systems in capital regulation; these explorations are described in chapter 4. The current and possible future use of firms' estimates of risk in the regulatory determination of capital requirements makes the firms' risk measurement practices an important element of capital regulation for legislative and regulatory policymakers to understand. Chapter 3 describes the approaches being used by some large, diversified firms to measure and manage risk.

Unlike regulators, whose focus on the capital levels of firms is driven by regulatory public purposes, firms analyze their use of capital to help ensure that they can achieve their business objective—maximizing the value of the capital provided by stockholders. To do this, they must measure and manage risks, returns, and capital. A number of large, diversified financial firms are measuring some risks and returns on a firmwide basis. Among other things, these measurements are designed to enable them to determine the trade-offs among risks and returns that would best enable them to maximize the value of equity capital. Individual risks are often measured by means of a variety of complex quantitative and statistical models that use computer programs to analyze financial data and determine risks. Although different firms use similar overall financial approaches when considering the risks they face, the actual statistical models the firms use are firm-specific—that is, each firm bases its model on its own data and financial activities. The extent to which large, diversified firms measure and model each risk varies according to the risks inherent in their business activities and their ability to quantify those risks.
Market and insurance/actuarial risks tend to be the most amenable to the use of statistical models. Credit and liquidity risks also have quantifiable elements. Operational and business/event risks are very difficult to quantify and are not as readily measured; however, some firms are developing measurements of these risks. Regulators and firms alike recognize that models have limitations; nevertheless, they believe that using such models can improve a firm's ability to understand, measure, and manage risks, thereby decreasing the likelihood of some unanticipated risks and losses.

Under widely circulated general risk management principles, which were developed in conjunction with financial regulators, firmwide measurement of risk is an integral part of a unified, firmwide risk management system. Such principles include setting limits on trading or other activities and determining capital requirements for business lines on the basis of the measured risks, whenever possible.

Modern finance theory suggests that capital provided by investors enables financial firms to fund operations, earn profits, and grow. It also provides firms with a cushion to absorb unexpected losses. Firms need to attract capital from investors by offering a mix of returns and risks that is competitive with the mix available in other investments. Both equity investors (stockholders) and bondholders consider return and risk in their decisions to invest in firms. To attract and keep equity capital, a firm tries to manage the trade-off between increasing returns and decreasing risks. The trade-off exists because increasing returns at a given level of risk generally increases stock values, while increasing risks at a given level of return generally lowers stock values.

Equity stockholders' returns are based on the firm's dividends and on capital gains on the stock. A firm using a risky but successful strategy can increase stockholder returns as long as the cost of its borrowed funds is less than the return earned on the assets those funds finance (see the sketch following this discussion). In contrast, bondholders' returns are based on interest paid by the firm and capital gains on its bonds. The returns to bondholders are limited, and a successful risky strategy does not increase them.

Equity stockholders' risks are the volatility of returns and, in the extreme case, losses in bankruptcy or liquidation, when assets are sold to satisfy the claims of the firm's bond and other debt holders. Bondholders' risk is generally the chance that a firm's risky strategies will fail and that it will not be able to repay interest and principal; in a bankruptcy or liquidation, the value of the assets may not cover the outstanding principal. Because stockholders can obtain larger returns from risky and successful strategies and bondholders cannot, bondholders are less likely to encourage or accept increased risk-taking by a firm. Furthermore, if the firm undertakes risky strategies, bondholders may require a higher interest rate as compensation for the increased risk. The higher interest rate decreases the funding advantage of debt financing and lowers profits for stockholders. Bondholders depend on, among other things, credit rating agencies for evaluations of the creditworthiness of bonds based, in part, on a firm's leverage. When a firm receives high ratings, such as an investment grade rating, the market allows the firm to pay a lower interest rate on its debt, which lowers its costs.
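The leverage effect just described can be made concrete with the standard return-on-equity identity; the asset returns and borrowing cost below are hypothetical.

```python
# Sketch of how leverage magnifies stockholder returns when the return
# on assets exceeds the cost of borrowed funds (hypothetical figures):
#   ROE = ROA + (ROA - cost_of_debt) * (debt / equity)

def return_on_equity(roa, cost_of_debt, debt, equity):
    return roa + (roa - cost_of_debt) * (debt / equity)

# A risky but successful strategy: assets earn 8 percent, debt costs 5.
print(f"{return_on_equity(0.08, 0.05, debt=900, equity=100):.1%}")  # 35.0%
# If the strategy fails and assets earn only 2 percent, the same
# leverage cuts the other way:
print(f"{return_on_equity(0.02, 0.05, debt=900, equity=100):.1%}")  # -25.0%
```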
Consequently, firms often manage their operations to receive investment grade credit ratings. Several firms that we spoke with told us that they manage their firms to an AA investment rating, which is the second highest investment grade rating. While maximizing stock values, firm management also needs to address the concerns of regulators and others. Regulators' concerns are important, because regulators can limit firms' operating freedom by forcing them to allocate capital according to the regulators' concerns over risk and can require firms to cease doing business if capital levels fall below the minimum capital requirement. Managers also need to take into consideration the interests of many other parties concerned with the performance of the firm. Employee interests are important, because changing compensation packages can create incentives for excessive risk taking. In addition, financial firms undertake many transactions with each other. It is in each party's interest to consider the capital levels (relative to risks) of its trading partner or counterparty, because a poorly capitalized entity might fail to complete its financial obligations under a financial contract.

Advances in financial theory and information technology have enabled large financial firms to track and evaluate some risks on a more quantitative basis than they could before. Some firms are measuring certain risks on a firmwide basis. According to the financial literature we reviewed and several of the firm representatives we spoke with, large, diversified firms are increasingly doing this because of heightened competition among firms and increased scrutiny of risk management practices by regulators. Firms can use such tracking and measuring to set limits on risk-taking, evaluate the return and risks of specific activities, and allocate capital accordingly—that is, to ensure that the estimated returns are large enough given the estimated risks. As discussed later in this chapter, these activities are embedded in general risk management principles that lay out a management approach and in tools that are designed to ensure that a firm is appropriately addressing its risks. These principles form the basis of a firm's risk management system that can, among other things, provide timely information on trading positions, risks, and risk-adjusted performance measures. Such principles also encourage firms to develop risk-adjusted performance measures to track the risk-return trade-off. For example, these general principles are embedded in SEC oversight under the DPG and in bank regulators' capital regulations. A general framework for risk-adjusted performance measures that is used by a number of the larger firms is called the risk-adjusted return on capital (RAROC) system. RAROC is the risk-adjusted profitability of a particular business activity per dollar of equity capital allocated to an activity. This means that at any given level of profit and risk, if managers increase capital allocated to an activity, the RAROC for that activity will tend to decrease. Consequently, RAROC directly measures and takes into account the risk, return, and capital trade-off. As markets become more competitive, as new financial instruments create new mixes of risks and returns, and as markets remain volatile and uncertain, managers need improved tools to consider risks and manage them. Therefore, the ability to set limits on trading activity or manage risks is especially important to large financial firms.
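To make the RAROC arithmetic concrete, the following minimal sketch, in Python, computes RAROC for two hypothetical business activities. All revenue, cost, expected-loss, and economic-capital figures are illustrative assumptions, not any firm's actual data.

```python
# Minimal RAROC sketch. All inputs are hypothetical; in practice the
# expected-loss and economic-capital figures come from a firm's own
# risk models rather than from fixed numbers.

def raroc(revenues, costs, expected_losses, economic_capital):
    """Risk-adjusted return on capital: risk-adjusted net income per
    dollar of equity capital allocated to the activity."""
    risk_adjusted_income = revenues - costs - expected_losses
    return risk_adjusted_income / economic_capital

# Two activities with identical income but different measured risk:
# the riskier one is allocated more economic capital.
trading_desk = raroc(revenues=50.0, costs=30.0, expected_losses=5.0,
                     economic_capital=100.0)
loan_book = raroc(revenues=50.0, costs=30.0, expected_losses=5.0,
                  economic_capital=150.0)

# At a given level of profit and risk, allocating more capital lowers
# RAROC, as noted above: 15.0% for the trading desk versus 10.0% here.
print(f"trading desk RAROC: {trading_desk:.1%}")
print(f"loan book RAROC:    {loan_book:.1%}")
```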
Generally, models can help managers limit risks and are used to set limits on traders and trading activities. In addition, models can be used to determine needed capital levels on the basis of the measured risks. In the banking, securities, futures, and life insurance sectors, some large firms measure market risk with statistical financial models supplemented by, or in combination with, other types of models. Statistical models apply past data on price changes to determine losses that might occur in the future; they are often used to measure market risks, such as those from trading in securities, derivatives, and foreign currencies. Such risks are not equally important for all types of financial firms. For example, market risks are important for large securities firms and banks undertaking trading of financial assets. Life insurance companies must often consider interest rate risk (a type of market risk) when underwriting annuities and other investment products that they sell. In contrast, many banks and insurance companies consider credit risks to be more important. Basically, the relative importance of different risks for a firm depends on the products it offers, the business strategies it uses, and the markets it serves. Models have important limitations; nonetheless, in the views of the firm representatives and industry experts we spoke with, they improve a manager's ability to measure and manage risks, thus decreasing the likelihood of losses due to measured risks that could deplete the capital cushion provided by management to cover losses.

A firm's "value-at-risk" (VAR) is an estimate of the largest loss the firm is likely to sustain on a particular portfolio, at a given confidence level, over a particular period of time. Statistical models express this loss in terms of a confidence interval—the percentage of the time a given loss is not likely to be exceeded—which implies a corresponding probability that the loss will be exceeded. The amount of capital needed to cover this confidence interval is often called economic capital-at-risk. Using the confidence interval approach, a firm might specify a 1-day time horizon with a 99 percent confidence interval—the percent of the time that a specified loss is not likely to be exceeded. This calculation might yield a $1 million loss that on average would not be exceeded more than 1 out of every 100 trading days. To ensure that this 1-in-100 chance of a $1 million loss would not create a financial problem, the firm could assign a $1 million capital buffer. If the firm wants to lessen the chance that the allocated level of capital will be exhausted, it could increase the confidence interval, increase the capital set aside, or change its trading strategy to create less risk. In contrast, if the firm wanted to increase the expected profits, it could decrease the confidence interval, lower the capital set aside to cover possible losses, or change its trading strategy to create greater expected profits while accepting the added risks. According to the modeling literature, the four main approaches to VAR modeling are the correlation or parametric method, the historical method, the historic simulation method, and Monte Carlo simulation. VAR models can be based predominantly on the correlations among asset prices and the effects of such correlations on the risk in the firm.
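The correlation (variance-covariance) method lends itself to a brief illustration. The following sketch, in Python, computes a 1-day, 99 percent VAR for a hypothetical two-asset portfolio; the position sizes, volatilities, and correlation are illustrative assumptions.

```python
# Correlation (variance-covariance) VAR sketch: hypothetical two-asset
# portfolio, 1-day horizon, 99 percent confidence interval.
import numpy as np

positions = np.array([60e6, 40e6])     # dollars held in each asset
daily_vol = np.array([0.010, 0.015])   # daily return volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])          # correlation of the two returns

# Covariance matrix of returns, then the portfolio's dollar volatility.
cov = np.outer(daily_vol, daily_vol) * corr
portfolio_sigma = np.sqrt(positions @ cov @ positions)

Z_99 = 2.326  # one-sided 99 percent quantile of the normal distribution
var_99 = Z_99 * portfolio_sigma
print(f"1-day 99% VAR: ${var_99:,.0f}")
# If the normality assumption holds, daily losses should exceed this
# estimate on about 1 of every 100 trading days.
```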
In addition, VAR estimates can be based on historic simulation or Monte Carlo simulations that show how changes in several fundamental economic variables or factors would affect the financial condition of the firm. Most VAR models depend on statistical analyses of past price movements that determine returns on the assets. The VAR approach evaluates how prices and price volatility behaved in the past to determine the range of price movements or risks that might occur in the future. This VAR approach is based on price variances and, in some cases, covariances among the prices that create market risks. This approach uses statistical estimates of the variances of asset prices and the covariances among asset prices to summarize the overall market risk faced by the firm. The correlation method assumes that the statistical distribution of asset returns is normally distributed and that the variance-covariance matrix completely describes the distribution. Assuming a normal statistical distribution simplifies the analysis and the computation of the VAR estimates, because it assumes that returns are symmetrically distributed around the mean and the dispersion of returns above and below the mean is similar.

The historical method rejects the use of the normal distribution, because much empirical research on the statistical properties of asset returns suggests that returns are not normally distributed. The evidence suggests that high and low returns are more likely to occur than would be predicted if a normal curve assumption were used. Evidence also suggests that in many cases, the actual returns are more likely to be negative than would be predicted if a normal curve assumption were used. In the historical method, the VAR is calculated by finding the lowest returns in the historical data. Using historic data tends to produce higher VAR estimates. This occurs because, empirically, the normal curve assumption underestimates the likelihood of larger losses. Implementing the historical approach requires additional historical data, which can be expensive to obtain or may not exist. The returns on particular instruments often cannot be used to determine the VAR estimate. If an institution is large and complex, it may be impractical to maintain historic data on all of its instruments. Furthermore, historic data may not be available on new or innovative instruments that the institution is introducing. In such cases, VAR models must include information about the historic distribution of economic risk factors that will determine the risk created by new instruments. Such risk factors are the fundamental economic creators of risk. For example, for a bond denominated in a foreign currency, the risk factors are foreign exchange rates and interest rates. For a Standard & Poor's 500 option, the relevant risk factors are its volatility, the dividend yield on the index, and the risk-free interest rates. In the case of banks, when new instruments are present, the bank can develop a VAR model based on the statistical distribution of risk factors and the current composition of the bank's portfolio of activities both on and off the balance sheet. However, the use of historic simulation is limited by the bank's inability to change assumptions about fundamental risk factors. Firms often are subject to several risks at one time. To address the simultaneous effects of several risks, firms tend to develop Monte Carlo simulations.
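Before turning to Monte Carlo simulation, the historical method described above can be illustrated briefly. In the following sketch the daily profit-and-loss series is simulated stand-in data drawn from a fat-tailed Student-t distribution, not actual trading records.

```python
# Historical-method VAR sketch: the loss estimate is read directly from
# the empirical distribution of past daily profit and loss (P&L), with
# no normality assumption. The P&L series here is simulated stand-in
# data from a fat-tailed Student-t distribution, not actual records.
import numpy as np

rng = np.random.default_rng(seed=0)
daily_pnl = 400_000 * rng.standard_t(df=4, size=1000)  # ~4 years of days

# 99 percent VAR: the loss not exceeded on 99 percent of past days,
# i.e., the 1st percentile of the P&L distribution (negated to a loss).
historical_var = -np.percentile(daily_pnl, 1)

# For comparison, the normal-curve estimate from the same data; with
# fat-tailed returns it is typically smaller, illustrating why the
# historical method tends to produce higher VAR estimates.
normal_var = 2.326 * daily_pnl.std()

print(f"1-day 99% historical VAR:     ${historical_var:,.0f}")
print(f"normal-assumption equivalent: ${normal_var:,.0f}")
```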
VAR models based on Monte Carlo simulations start with management identifying a series of changes in several fundamental risk factors that can simultaneously affect the firm. The analysis of the effects of these factors is determined in a mathematical model in which equations show how changes in the fundamental risk factors affect the firm's cash flows, financial condition, and remaining capital. On the basis of statistical analyses of how market prices have varied in the past, the Monte Carlo approach to VAR estimation is designed to show how the firm will perform in the future by letting managers evaluate how the firm would perform under thousands of different economic conditions. Monte Carlo approaches to VAR estimation also display the effects of nonlinear risks—risks that grow more than proportionately with movements in the underlying risk factor. Such risks are found in derivatives contracts and in the options embedded in financial products. Identifying such risks can help the firm to identify the mix of conditions and strategies that would cause the greatest harm. On the basis of the Monte Carlo estimates of VAR, the firm can adjust trading limits to avoid excessive risks or create a better risk-return trade-off.

VAR models are commonly backtested to evaluate the accuracy of assumptions by comparing predictions with actual trading results. Backtests determine whether and how well the models' results compare to a firm's historic daily trading results. Backtests provide retrospective information about the accuracy of an internal model by comparing a firm's daily VAR measures with its corresponding daily trading profits and losses. When trading losses exceed a VAR-set limit more frequently than the chosen confidence level indicates, the model is not measuring potential losses well enough. According to the financial literature and our interviews, the limitations of the VAR model include a dependence on past data to estimate possible future losses and possible errors caused by simplifying statistical assumptions. The VAR calculation and estimated losses from VAR models are based on the past behavior of prices and price volatility. If price patterns are changing now or will change in the future, estimates of potential losses based on past price changes will be incorrect. As a result, the risk managers at the firms told us they must continually update their statistical estimates and monitor for changing price patterns that affect losses predicted by VAR models. Some VAR calculations are simplified by assuming returns are distributed normally. Such simplifications ease data needs, lower computational costs, and are easier for those less familiar with advanced statistical modeling techniques to understand. However, such assumptions can result in the model underestimating the probability and extent of large losses. To avoid this problem, several of the firm representatives we interviewed said that they use Monte Carlo simulations when necessary because such simulations take returns that are not normally distributed into consideration. Such simplifying assumptions also limit the ability of some VAR models to measure risks that do not vary directly or linearly with price changes. For example, gains or losses on stocks held in portfolios vary directly or linearly with market prices. As the market prices of stocks increase, the value of the stocks held by a firm increases in direct proportion.
Such direct, linear relationships also exist in foreign currency trading, a common activity of many large, diversified financial firms. In contrast, losses on options and financial contracts with embedded options can be nonlinear and need not move proportionately or linearly with interest rates or other prices. Options have risks that are nonlinear; for small price movements there may be no losses for the firm, but for larger price movements the firm can suffer large losses. Similarly, interest rate risks in certain financial products can be nonlinear. For example, for small declines in interest rates, mortgage prepayments will not accelerate. However, for large declines in interest rates, prepayments can accelerate quickly and create large and nonlinear losses for a firm holding mortgages. Representatives of several firms told us that nonlinear effects in certain other types of financial options affect the accuracy of VAR modeling.

Firms use stress tests and scenario analyses to help validate or cross-check the reliability of VAR models. Stress tests measure the potential impact of various large market movements on the value of a firm's portfolio. Such tests are a useful tool for identifying exposures that appear to be relatively small in the current environment but that grow more than proportionally with changes in risk factors. Scenario analysis generates forward-looking "what-if" simulations for specified changes in market factors that quantify revenue implications of such scenarios for the firm. Stress tests are based on a series of mathematical equations that show how changes in fundamental economic factors would affect the financial statements of the firm over time. Stress tests determine whether large changes in underlying key factors would lead to losses that could put the financial firm at risk of failing. The level of key economic factors used in the stress test can be based on (1) past economic situations in which key economic variables have affected a firm's financial condition or (2) management's judgment. When using past economic situations to determine the level of key economic variables, risk managers may use the results of statistical analyses to help decide what factors to use in the test and how large the stress should be. The values of the risk factors used in the stress tests can be based on management's judgments and statistical analyses of the variability of the risk factors in the past. Some stress tests apply Monte Carlo simulations to determine how often and how quickly a firm will fail when subject to stressful economic environments. (A simple sketch of such a scenario-based test appears below.)

According to the financial literature and the risk managers we interviewed, all models are limited to the extent that they rely on historic data and pricing patterns that may not reflect future economic conditions and risks. In addition, all models are limited by the quality of the data available, the computation power available, and the ability of analysts to develop mathematical models to accurately reflect financial risks and returns as economic conditions change. Several of the risk managers we met with stressed the importance of the risk factors used in a firm's internal modeling. Because a firm's internal system cannot effectively track all of the risks the firm is exposed to, risk managers choose those they believe are the most significant, such as equity and foreign exchange positions and the yield curve slope. Models also offer benefits.
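Before describing those benefits, here is the scenario-based stress test sketch noted above. The portfolio profit-and-loss function, the shock sizes, and the capital figure are hypothetical illustrations, not any firm's actual model; the short-option term is included to show a nonlinear exposure of the kind discussed earlier.

```python
# Scenario-based stress test sketch: revalue a hypothetical portfolio
# under large, simultaneous moves in fundamental risk factors. All
# exposures, shocks, and the capital figure are illustrative.

def portfolio_pnl(equity_move, rate_move_bp, fx_move):
    """Approximate dollar P&L for given market moves."""
    equity_pnl = 400e6 * equity_move                   # $400M equity book
    bond_pnl = -50e6 * 5.0 * (rate_move_bp / 10_000)   # $50M bonds, duration 5
    fx_pnl = 150e6 * fx_move                           # $150M unhedged FX
    # A deliberately nonlinear short-option term: negligible for small
    # equity moves, large for big ones.
    option_pnl = -1.5e9 * max(abs(equity_move) - 0.05, 0.0) ** 2
    return equity_pnl + bond_pnl + fx_pnl + option_pnl

scenarios = {  # (equity move, rate move in basis points, FX move)
    "equity crash":           (-0.20, +50, -0.02),
    "rate shock":             (-0.05, +300, 0.00),
    "emerging-market crisis": (-0.10, +100, -0.15),
}
capital_cushion = 60e6

for name, shocks in scenarios.items():
    loss = -portfolio_pnl(*shocks)
    verdict = "EXCEEDS capital" if loss > capital_cushion else "within capital"
    print(f"{name:24s} loss ${loss / 1e6:6.1f}M  ({verdict})")
```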
Managers using models are able to take a more disciplined approach to the overall operations of the firm. Models encourage and permit risk managers to simultaneously consider the risks and returns in individual assets or portfolios and their interactions, which, in combination, determine the overall risks and returns of the firm from market risks. Our interviews with industry association officials who tend to represent smaller financial firms suggest that small companies may be more likely to hold assets until maturity and less likely to realize the market losses in their portfolios when the market value of assets decreases. According to these officials, these companies may not find it necessary to undertake such market risk modeling, because their risks and long-run profits are not driven by changes in market prices and returns. Instead, their risks may be concentrated in credit risk, insurance risk, and operational risks, which have not been quantified or modeled as extensively as market risks.

Traditional credit risk management at banks, securities firms, and insurance companies has most often been based on analysis of standardized information reports and judgments by experienced credit officers of the creditworthiness of borrowers and any collateral against the loan or bond. On the basis of such judgments, firm managers have set limits on financial positions and developed plans to manage credit risks. Increasingly at large, diversified firms, traditional credit risk management approaches have been augmented by credit-scoring models for certain classes of homogeneous loans, such as credit card, automobile, and residential mortgage loans. In a smaller number of firms, models have also been applied to evaluating the creditworthiness of companies with publicly traded stock. Traditional credit analysis is based on standardized reports and credit officer judgments. In most firms, the credit quality of a particular loan is to be judged by a reviewing officer and placed into one of several credit categories. Categories range from risk-free or low risk to potential or full loss. In rating creditworthiness, credit risk is exclusively the risk of a loss on a loan due to a default and is not the risk due to price volatility. A particular loan can be reassessed on the basis of either the changing condition of the borrower or changes in the economy that may affect the likelihood that the loan will be repaid on a timely basis. When considering the risks from a particular loan or financial position with a firm, lenders generally consider all of their positions with that firm, because a credit problem in one position is usually associated with credit risks in all of the others. For example, in a situation where a firm has several financial interactions with a bank, such as a commercial loan, a mortgage, and a foreign exchange transaction, if one of these interactions appears uncreditworthy, it can affect the others. Commercial banks, securities firms, and life insurance companies that we interviewed told us they used the traditional approach to credit analysis. Each firm said it applied a consistent evaluation and rating scheme to all credit decisions, and each produced aggregated results on its overall credit portfolio. A typical bank might use a rating system with up to 10 rating categories that are defined from low to high risk. Consistent application and updating of the ratings are part of the process. In some firms, both the borrower and the instrument are rated separately.
Part of the rating addresses covenants or limitations in the contract and collateral used to secure the contract. Given consistent application of the ratings by its credit officers, a bank can produce a report of the credit risk in its loan portfolio at any time. The report changes as loans enter and exit the system and as ratings of particular loans change over time. According to one source, this credit quality report is most meaningful when credits are monitored and periodically reviewed by a risk management group or function. Insurance companies tend to be very focused on credit risk. Because insurance companies’ credit risk often appears in bonds and other traded instruments, rating the instruments is one way to address credit risk. Credit ratings needed by the insurance companies are often performed by the Securities Valuation Office of NAIC. In many financial firms, credit analysis traditionally was done on a loan-by-loan basis. Such an approach ignored the fact that all loans in the same region or industry tended to become less creditworthy at the same time. However, given sectoral losses of the 1980s in real estate and the petroleum industry, firms are increasingly concerned that many loans concentrated in the same region or industry may create losses at nearly the same time. To address these concerns, some firms are undertaking concentration reports by industry, and work is under way to improve the industry classification codes needed to produce concentration reports. Credit scoring applies formal statistical procedures to the credit decision process. Credit scoring models, based on statistical analyses, use data on the borrower found in credit reports and loan application information to determine whether or not a loan is likely to be repaid. In addition, credit scoring can be used to adjust terms on the loan such as downpayments and interest rates. Such models are often used in underwriting credit cards and mortgages. Credit scoring is most applicable to classes of loans in which there are numerous loans that are frequently underwritten with similar terms. Within each class, the loans are relatively small compared to the total holdings in the class, made frequently, and easily statistically analyzed because the loans have relatively homogeneous characteristics. This homogeneity occurs because the loans are not custom-tailored to the borrower or to the collateral asset. Some banks and consulting firms that we spoke to have developed a portfolio approach that rates the creditworthiness of loans to larger corporations whose bonds and stocks are traded regularly in the financial markets. This approach addresses the correlations among the creditworthiness ratings of individual assets in the portfolio. In the first step, the portfolio approach uses the traditional approach of rating each loan or financial instrument on a case-by-case basis to generate its inputs by determining the credit risk from each obligor (borrower from the bank). In the next step, the portfolio approach accounts for the credit risk across the portfolio based on the correlation of credit quality across obligors. In this way, this approach takes into account and quantifies the benefits of portfolio diversification. It is similar to the portfolio approach already used in market risk modeling such as VAR. The portfolio approach to credit risk may depend on stock and bond price information and ratings by credit rating agencies, such as Standard and Poor’s. 
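The portfolio approach is elaborated immediately below; first, the credit-scoring step just described lends itself to a brief sketch. The scorecard weights and approval cutoff below are hypothetical; production scorecards are estimated statistically (for example, by logistic regression) from large samples of past loans.

```python
# Credit-scoring sketch: a logistic scorecard maps borrower attributes
# to an estimated repayment probability. The weights and cutoff are
# hypothetical; real scorecards are estimated from large samples of
# homogeneous loans (e.g., credit cards or mortgages).
import math

WEIGHTS = {
    "intercept": 3.0,
    "debt_to_income": -4.0,       # heavier debt load lowers the score
    "years_employed": 0.15,       # longer employment raises it
    "prior_delinquencies": -0.8,  # each past delinquency lowers it
}

def repayment_probability(applicant):
    z = WEIGHTS["intercept"] + sum(
        WEIGHTS[attr] * value for attr, value in applicant.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic function

applicant = {"debt_to_income": 0.25, "years_employed": 8,
             "prior_delinquencies": 0}
p = repayment_probability(applicant)

# Besides approve/decline, the score can also set loan terms such as
# the interest rate or the required downpayment, as noted above.
decision = "approve" if p > 0.90 else "refer to a credit officer"
print(f"estimated repayment probability: {p:.1%} -> {decision}")
```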
Using statistical methods, the portfolio approach estimates the probability of default on an instrument and the probable loss from that instrument if a default occurs. The approach uses the credit rating from a credit rating agency or the internal credit rating by a bank evaluating its own loans. Credit rating agency ratings and bank loan evaluations are based on reviews of the financial books and other pertinent information gathered during the rating or loan application process when a firm is issuing bonds or applying for a loan. Such ratings are not driven by market prices or by the volatility of market prices. The portfolio approach also uses market information on stock prices to estimate the total probability of default on the basis of the correlations of defaults among the component loans in the portfolio. By using these market prices, the portfolio approach can directly account for the correlation among credit risks because it can address the correlation among stock prices. Such correlation information permits risk managers to develop more economically efficient portfolios by improving expected profits or lowering the risk of losses from the total loan portfolio. As with market risk modeling, assumptions about the probability distributions and the correlation among the risks affect the estimated potential losses due to credit risk embedded in any particular portfolio. The portfolio approach to credit risk enables a risk manager to quantify and control credit concentration risk; consider concentrations on the basis of industry, rating category, type of instrument, or other factors; interpret credit risk in terms of needed capital as is done in market risk calculations; and evaluate investment decisions more precisely in terms of risks, returns, and capital.

Liquidity risk analyses are most concerned with the effect of a sudden crisis that arises when lines of credit may be closed, assets can be sold only at a loss, and other new funding sources cannot be found. In a liquidity crisis, a firm must be able to sustain itself and obtain cash as needed when the markets, in general, appear much less willing to buy assets from the institution or make loans to the firm experiencing the crisis. Many firms develop worst-case simulations (i.e., stress tests) or models to investigate the implications of a severe loss that affects credit ratings or of a systemwide crisis that would affect all sources of liquidity because of a flight to quality throughout the economy. The worst-case "scenarios" are based on simulations of firm cash flows. In each worst-case cash flow analysis, a firm would attempt to estimate the immediate funding shortfall associated with a severe loss and a crisis that is systemwide. In a worst-case analysis, a firm attempts to measure the speed with which it can acquire needed liquidity during a crisis. Such liquidity might be based on liquidating assets—that is, shrinking its balance sheet—or estimating sources of funds that would still be available during a crisis. The results of such worst-case analyses or simulated crises are often reported in estimated days of exposure or days of a funding crisis. On the basis of the liquidity problems that arise in worst-case simulations, managers can alter current operations to forestall liquidity problems in a crisis, adjust the liquidity of current asset holdings, or create more secure lines of credit.
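A minimal sketch of such a worst-case, days-of-exposure calculation follows; every balance-sheet figure and crisis assumption (the fire-sale haircut, the runoff rate, the committed lines) is hypothetical.

```python
# Worst-case liquidity sketch: estimate how many days a firm could fund
# itself in a crisis in which uncommitted credit lines close and assets
# sell only at a loss. All figures and assumptions are hypothetical.

cash = 2.0e9
liquid_assets = 6.0e9        # assets that could be sold quickly
fire_sale_haircut = 0.15     # crisis sales realize 85 cents per dollar
committed_lines = 1.5e9      # backup lines that cannot be cancelled
daily_debt_runoff = 0.9e9    # maturing short-term debt not rolled over
daily_operating_needs = 0.2e9

crisis_resources = (cash
                    + liquid_assets * (1.0 - fire_sale_haircut)
                    + committed_lines)
daily_outflow = daily_debt_runoff + daily_operating_needs

days_of_coverage = crisis_resources / daily_outflow
print(f"estimated days of funding coverage: {days_of_coverage:.1f}")
# Managers could lengthen this horizon by holding more liquid assets,
# adding committed lines, or relying less on short-term funding.
```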
Such simulations are not used to forecast future problems but rather as a planning tool to understand what a liquidity crisis might entail. In the view of a number of firm representatives we spoke with, such simulations or worst-case studies are imprecise but essential to a firm in the event of a substantial change or deterioration of its financial condition. Some firms we spoke with used such simulations to determine what backup lines of credit, which cannot be cancelled, are needed to ensure liquidity or funding during a crisis. One large securities firm suggested that financial firms can fail in a crisis when liquidity is lost even though other fundamental risks might not be present. Another large firm emphasized that broker-dealers depend on liquidity when managing market or trading risk because, without liquidity—the ability to buy and sell financial assets without large losses—hedging and other risk-reducing strategies do not work. Firms’ representatives told us their firms often maintain an equity cushion above regulatory capital levels to ensure the constant availability of sufficient cash to deal with liquidity problems or to undertake a large and potentially profitable deal. Another large, diversified firm emphasized that during crises, investor flights to quality occur and firms without strong credit ratings may not be able to refinance short-term debt or fund operations. Firms that maintain high levels of capital are generally considered to be more creditworthy. Other firms told us that liquidity is an amorphous term and cannot be addressed by VAR or other mathematical models. A representative of a large industry group said that liquidity risk is somewhat quantifiable but not to the same extent as credit and market risk. According to several insurance industry analysts, liquidity is not as big a concern with many life insurance companies as it is with other financial institutions because life insurance policy liabilities are less liquid than life insurers’ assets. Life insurance companies issue policies that have high surrender charges that tend to limit redemptions. A decade ago, when interest rate movements created options that encouraged early redemptions, illiquidity was more of a problem for life insurance companies. New policies are now written that are designed to bring returns on products into accord with market rates. In addition, policy loans are often charged variable rates that track the market instead of fixed rates in order to prevent losses. In the past, life insurance companies generally used conservative static assumptions regarding loss distributions and interest rates. This approach was ill equipped to deal with the interest rate volatility of the late 1970s, according to several insurance company representatives we spoke with. Life insurance policies are full of options—settlement options, policy loan options, over-deposit privileges and surrender or renewal on the part of the insured, and discretionary dividend options on the part of the insurer. When interest rates are volatile, these options increase in value and thus are more likely to be exercised. Traditional actuarial valuation methods that assume interest rate stability incorrectly value these options when interest rates are volatile because the companies do not consider or calculate the economic value of the options. By assuming stable interest rates, insurance companies tended to underprice their policies. 
Today, the standard valuation techniques deal explicitly with the interest rate options embedded in policies. These standard valuation techniques use statistical modeling approaches, such as correlation-based VAR and Monte Carlo simulations, discussed earlier in connection with market risks.

Although the firms we interviewed emphasized that business risk and operational risk were crucial concerns, most acknowledged that they did not or could not effectively measure these risks. Several firms described how they were measuring market, credit, and liquidity risks and explained that their firms did not measure other risks, such as operational or business risks. Several suggested that they were not convinced that operational and business risks could ever be measured to the same degree that market, credit, and liquidity risks were measured. In almost all the interviews we conducted, including all those with regulators, we were told that because measurement of business/event and operational risk is difficult, managers' judgments are crucial to managing these risks. A securities-based firm said that most failures in this industry were not created by market risks; rather, operational problems led to the failures. One bank's risk manager suggested that business risk was an amorphous term and thus could not be measured or placed in a mathematical or statistical model. This bank does, however, include business risk in its RAROC system. A manager of a large and complex insurance-based financial firm said he was not yet comfortable with how his firm measured such risks. Another bank, which said it is vigorously trying to model its risks, told us that it has not yet quantified operational risk. A major consultant to the financial services industry concluded that operational risks are hard to quantify because the risks are embedded in (1) the operating and accounting systems, (2) the models, (3) staff behavior, (4) the compensation systems that create incentives to undertake various activities that affect both firm and employee risks and returns, and (5) the managers' abilities to foresee the consequences of the interactions among these factors. Officials of one bank we interviewed told us they were quantifying business/operational risk by using revenue volatility as a proxy for the impact of risk on business results.

In practice, risk measurement approaches differ across the risks faced by firms, and not all risks are quantified to the same extent. For example, under widely circulated general risk management principles, firms are to monitor and manage all risks but are expected to explicitly measure and manage only market, credit, and liquidity risks, as discussed above. Firms monitor and measure other risks using a more qualitative approach because, to date, quantification of these other risks has not progressed enough to be commonly used even at large, diversified firms. Because financial firms are as yet unable to quantify and model all risks, a fully quantified approach to determine needed capital has not yet been developed. Nonetheless, the general framework called RAROC has been developed for such firmwide risk assessments across all risks and products (see app. IV for a discussion of this framework). However, as long as no common basis exists for measuring all risks, firms cannot fully integrate their risk measurement and management systems in a firmwide, cross-risk, and cross-product analysis.
Thus, given the different approaches and levels of sophistication currently available for measuring and managing risks, managers' judgment and effective risk management approaches remain crucial determinants of risks, returns, and needed capital levels in each financial firm.

As mentioned earlier, firmwide risk measurement is an integral part of a unified, firmwide risk management system under widely circulated general risk management principles. Our discussions with regulators and representatives of large, diversified financial firms indicated that these firms accept the approach of the general risk management principles and are applying these principles in the design of their internal risk control function. However, to date, not all firms we spoke with have fully implemented the risk and capital measurement systems laid out by the principles. General risk management principles lay out a management approach and tools that are designed to ensure that a firm is appropriately addressing its risks. There is a common set of five broad risk management principles:

1. A structured framework is to be established to link a firm's business strategy and operations to its risk management objectives.
2. Centralization of the risk management function in one dedicated staff office is needed.
3. Risk measurement, risk reporting, and risk controls are needed to permit managers and others to evaluate the implications of the risks, returns, and capital levels in the firm.
4. Operations systems are needed to support the risk management function.
5. Risk management systems are needed to provide the necessary data on a timely basis.

Under these principles, the firm's risk management strategy is to be based on a framework of responsibilities and functions driven by the board down to operating levels, which covers all aspects of risk. The basis for this principle is the view that unless the board is fully integrated in the risk management approach, the firm's managers and employees will not be fully committed to risk management. To emphasize the importance of risk management, the principles state that a risk management group composed of senior managers is to be created. In accordance with the principles, the risk management function is to be fully integrated into a firm's operations. The day-to-day responsibility for risk monitoring and risk evaluation is to rest with the risk management function, which is to report to a risk management group—a special committee of senior managers. The role of the risk management function is to implement policies associated with specific risks, such as market risk, credit risk, liquidity risk, operational risk, and business/event risk. Its purpose is also to ensure that trading is within approved limits and that risk limits and policies are properly understood and evaluated before transactions are undertaken. The principles lay out a framework for risk measurement, reporting, and control of risks; quantification of market, credit, and liquidity risks; and development of the capability to aggregate and monitor exposures on a firmwide basis. The principles require a firm to set a comprehensive set of limits to ensure that risk exposures remain within agreed-upon boundaries set by the board or risk management group. In addition, the firm needs a mechanism for evaluating firm performance on a risk-adjusted basis to address the trade-off between return and risk.
That is, the firm must develop a method to simultaneously measure and manage the trade-offs that can exist between return and risk on a firmwide, business-unit, and product-specific basis. The principles call for a risk management system to generate, on a timely basis, information on the firm's trading positions, risks, and risk-adjusted performance measures. Such information is to be available to the risk management group; risk management function; and other end users of the information, such as traders, credit risk departments, or managers of trading units. Under the principles, firms are to develop a comprehensive set of operational controls, because firms engaged in trading activities often encounter difficulties as a result of operational control problems rather than measurement problems. Such operational controls are meant to ensure that risk limits are set by the board and, once set, are not violated. To guard against operational problems, it is important for firms to rigorously establish controls that limit risk-taking and unauthorized activities throughout the firm, according to the principles. Firm officials we met with consistently mentioned these principles and provided firm-specific examples to illustrate their importance. For example, many firms and analysts emphasized that although their approach to risk management is constantly evolving, it is of paramount importance that senior management determines the level of risk that the firm will accept and communicates this information firmwide. Representatives of several firms commented that a central committee, which reports to the chief executive officer, monitors their risks. Firm representatives stressed that numbers are important, but good communication, internal controls, and management judgment are what really matter.

Through our interviews with industry representatives, regulators, and others as well as our review of pertinent literature and other documents, we sought to identify significant issues in capital regulation. We group the issues we identified into the following three categories: (1) differences among financial regulators in terms of the risks each focuses on and the purposes of its capital rules; (2) differences between regulators' and firms' estimates of risks and needed capital and in their views of risk and how it should be managed; and (3) concerns about how regulatory capital rules are administered. The principal issue in the first category is that as firms that have traditionally been in different sectors of the financial services industry increasingly offer similar products and take on similar risks, differences in capital regulation among their regulators may have unintended competitive implications for these firms. Issues in the second category include a concern that current regulatory capital requirements that are not adequately sensitive to the risks inherent in a firm's particular products or activities may create inappropriate risk management incentives for firms and, in extreme cases, could even lead to increased risk-taking. A related issue concerns the possible increased use by regulators of a firm's internal estimates of risk in setting regulatory capital requirements, because financial firms and regulators have somewhat different purposes for capital and tolerances for risk. The third category, administrative issues, includes questions about whether it makes sense to apply the same approach to capital regulation for firms of all sizes and degrees of complexity.
It also includes questions such as how regulators can properly oversee the validity of the internal statistical models that firms use to meet regulatory capital requirements. As competition within and among different financial sectors has increased and as large, diversified firms have improved their ability to measure and manage risks and capital, financial regulators are responding by exploring possible changes to capital requirements. Many initiatives aim to make capital requirements more sensitive to the risks firms face in their activities; other initiatives represent fundamentally different approaches to capital regulation.

In an environment of increasing competition across financial sectors and national borders, large, diversified financial firms increasingly offer similar products that pose similar risks. At the same time, individual firms and their affiliates are regulated by a variety of domestic and foreign regulators, and some are unregulated. Differences in corporate legal systems and markets also contribute to international differences. Concern about differing capital requirements for firms with similar products posing similar risks is one part of an ongoing "level playing field" debate in financial modernization. On a level playing field, firms and markets compete without advantages that result from government backing (such as government-backed deposit insurance) or disadvantages that result from burdensome regulation. At the same time, regulators acknowledge that differences in regulatory purposes have implications for capital requirements that could limit achievement of a level playing field. As discussed in chapter 2, the specific objectives of the various financial regulators and their approaches to regulation differ. For example, bank regulators are concerned with maintaining the safety and soundness of the banking and payments system and protecting the deposit insurance funds; and securities and futures regulators are concerned with investor protection and ensuring the integrity of the securities and futures markets, respectively. Financial regulators and other experts we interviewed discussed the appropriateness of having similar capital requirements for banks, which are covered by government-backed insurance funds, and other firms that are not; or whether it is appropriate for capital requirements of banks, which are part of the payments system, to be similar to capital requirements for firms that are not. Traditionally, bank regulation has been more concerned with systemic risk than has regulation of other financial entities. Some experts have argued that capital regulation must be stricter on entities that pose greater systemic risk than on those that do not. Also, financial regulators are concerned that different domestic and foreign capital standards for the various types of financial firms create incentives for firms to change operations in ways that change their regulator, such as moving business overseas, to avoid or offset capital requirements they believe are costly and excessive. Different regulators may have different capital standards for the same product. In some situations, a firm facing the higher capital standard has an incentive to move its activities in that product line into an affiliate that has a different regulator or one that is unregulated. However, in banking, all affiliates within the holding company fall under the holding company capital standards.
If a bank believes the standards are too high for a certain product, it may choose to abandon that product line; or it may restructure its transactions to provide a similar service that carries a lower capital requirement. An important issue for regulators is how to establish capital requirements that meet their purposes without requiring either excessive or insufficient capital for the risks involved. In large part to increase the value of shareholders' equity in competitive markets, large, diversified financial firms have increasingly used statistical and mathematical models to measure and manage economic risks and to determine their optimum capital levels. As firms have been able to apply more sophisticated risk measurement tools, some said they have become increasingly aware of a discrepancy between their own internal estimates of the risk and capital needed to support certain activities and the regulatory capital requirements for those activities. Even though firms may hold more total capital than regulatory minimums call for, regulatory capital requirements may impose higher capital levels for some activities than the firms believe to be appropriate. The difference between the amounts of capital allocated by some financial firms and regulatory capital requirements reflects, in part, differences between the firms' primary objectives and the purposes of regulators. As discussed in chapters 2 and 3, financial firms and regulators agree that capital serves as a buffer against unexpected losses. However, the primary use of capital for firms is to maximize the value of their shares for stockholders by choosing the best mix of risk and returns. Regulators, on the other hand, impose minimum capital requirements to serve the public interest.

Currently, financial regulators use a "building block" approach in setting capital requirements—that is, capital requirements are determined largely on the basis of broad classes of risk, and the total capital requirement is the sum of requirements for each risk. Many firms and regulators have argued that this building block approach is inappropriate, because the total risk in the firm is based on the interactions of all risks in the firm's portfolio, and risks need not be additive. We did not identify any firms that were yet able to hedge across different risks—for example, hedging credit exposures and market exposures against each other. However, some firms said they have developed hedging strategies that allow them to decrease risks by hedging the same risk within and across portfolios—for example, hedging interest rate or foreign exchange risk in different portfolios within the firm. These risk-reducing strategies are often not recognized in existing building block regulatory approaches to setting minimum capital requirements. Thus, because these approaches do not recognize the possibility that total risk may be less than the sum of individual risks if risks offset each other, they could lead to excessive capital requirements. In both the banking and securities/futures sectors, capital regulations contain formulas that apply single risk-weightings to a broad range of riskiness within a single category. For example, in banking, the same 8 percent capital requirement is imposed on all unsecured loans to private commercial borrowers regardless of individual creditworthiness, with the result that a high-risk/high-return loan carries no more regulatory capital than a low-risk/low-return loan.
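A simple hypothetical illustration of this flat treatment: a $100 million unsecured loan to a highly rated commercial borrower and a $100 million unsecured loan to a marginal one each carry the same charge, 0.08 × $100 million = $8 million of capital, even though the expected losses on the two loans differ substantially.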
As a result, the regulation might give firms an incentive to seek the highest returns within a broad class regardless of underlying risk; or to adjust activities (e.g., develop new products and/or change operations or corporate structures) in a way that reduces or escapes capital requirements. In other words, firms may adjust business to achieve the lowest regulatory capital cost rather than an optimal balance of risk and capital. Also, the securities net capital rule requires registered broker-dealers to apply a 100-percent haircut to any portion of trading profits that is unsecured, reflecting SEC's emphasis on liquidity in its net capital rule. Moreover, if capital requirements are not adequately sensitive to risk, they may require either too much or too little regulatory capital for the activities being covered. For example, capital requirements that require firms to hold more capital than they believe to be warranted by the risk can cause them to reorganize their structures, resulting in less regulated financial markets as firms move operations outside of regulated entities. Because securities firms consider the 100-percent haircut for OTC derivatives transactions excessive, for example, they book much of their OTC derivatives business in unregulated affiliates to escape capital requirements and other regulatory oversight for these derivatives activities. We and some regulators have expressed concern about the lack of regulatory oversight of OTC derivatives activities. On the other hand, capital requirements that are too low to protect against risk may result in firms holding only the required amount of capital. As a result, they may not be sufficiently cushioned against potential losses. At the same time, a relatively low capital requirement may induce some institutions to hold excessive amounts of the asset, thus increasing their exposure to the risk. For example, the calculation of bank capital ratios does not explicitly include the interest rate risk inherent in mortgages and other interest-sensitive assets; this may cause banks to hold more of these assets and fewer assets for which the capital requirement more fully captures all the risks.
Many of the representatives said that because their firms’ risk measures and models address some categories of risks (for example, market and credit risks) and not others (for example, operational and business/event risks), their firms continue to use judgments to determine the overall capital levels they need. In addition to limitations resulting from firms not measuring the same types of risk, some models may not correlate the same type of risk (such as credit or market risk) across the entire firm. In this way, such models may not fully measure all risks within a risk type included in the model. However, as discussed earlier, some firms said they do measure some risks on a consolidated basis—taking into account correlations of the same type of risk wherever it exists in the consolidated firm. Moreover, models may fail to capture major unique market events of low probability that could pose considerable risk, such as currency devaluations in emerging markets. According to some of the firms’ representatives we interviewed, the firms’ risk estimates address expected losses, but they cannot accurately account for the unexpected losses the firm may face. VAR modeling is often based on day-to-day risks and historical experience and assumes that managers regularly readjust their portfolios as risks change. However, the representatives noted that such models can easily miss low-probability events that could result in large losses, which could pose considerable risk. For this reason, the capital levels indicated by the models may not cover losses during a major market event, such as a financial crisis in an emerging market. Moreover, even if the models were capable of totally accurate risk measurement, regulators, who, for example, are likely to be concerned with the systemic risk posed by a low-probability, high-loss event, may require more capital for certain risks than the firms would set aside for that risk. The financial regulators we spoke with are concerned with the consistency of capital requirements across the firms they regulate. When regulators depend on firm-specific models or measures to set capital levels, however, capital requirements may not be consistent across firms with similar risk levels. Each firm that uses internal models may well reflect in its own model the firm’s unique characteristics, such as the particular risk factors it faces. Thus, even when regulators specify the use of common procedures for developing internal models, such as those in the market risk capital requirement for banks, the internal models firms produce differ because each firm designs its model to measure what it sees as its own risk profile. Because a firm’s model was designed for use with a specific risk profile, another firm’s model applied to the same profile might produce a different risk estimate. In addition, both the consistency and the accuracy of these models depend on the quality of the raw data used. The financial regulators are concerned about the dependability of the results of firms’ risk measurement systems, in terms of the accuracy of the results and the transparency in the firms’ use of internal models. To help ensure that the capital set aside for various risks accurately reflects the firm’s risks of possible losses, it is important for risk measures and models to truly reflect management’s own best judgment about the design and use of the models and for the model inputs to be complete and accurate. 
With regard to transparency in the use of firm-specific internal models, regulators and other experts are concerned that their use of firm-specific risk measures to set minimum capital requirements could give firms an incentive to adjust the internal models they use to determine their minimum regulatory capital in such a way as to reduce their regulatory capital requirements. Such behavior by firms would raise questions about the dependability of the risk and capital measures used by the firms. Firms might undertake such model alterations if the regulatory minimums for certain risks exceeded the capital level managers wished to put aside for such risks, either because their estimates of risk were lower or their risk tolerances were greater than those set by the regulators. The regulators said that if they could ensure that only one model existed within a firm for a particular risk, they could be more confident that the firm’s own true risk estimates were being used to set minimum capital requirements. Our interviews with industry representatives, regulators, and others and review of the literature on capital requirements identified issues in the administration of capital requirements. One issue concerns the reasonableness of using the same approach to capital regulation for firms of a similar type (e.g., banks) but with varying sizes and degrees of complexity. That is, as the activities of large firms diverge from those of small firms, a single standard for all firms may become increasingly inappropriate. As the activities of large firms become more complex, regulators and firms are concerned about proper regulatory oversight of the use of statistical models for regulatory purposes. Regulatory confidence in the effectiveness of capital standards in accomplishing the regulatory purpose depends in part on those standards being auditable and understandable, which is significantly complicated by the use of sophisticated measures and firm proprietary models. In the views of both regulators and other experts, many auditors and regulators may not yet have the expertise needed to verify the accuracy of the measures calculated in the models used in determining minimum capital standards. In addition, depending on their business mix, smaller firms are less likely to have the resources or the need to develop sophisticated models. Part of this issue concerns the costs to the regulators if they adopt sophisticated approaches to setting minimum capital requirements. Financial regulators understand that their adoption of more sophisticated regulatory capital requirements (e.g, increased use of firms’ internal models) would mean increased regulatory costs related to hiring and training regulatory staff. Complexities associated with the increasing use of sophisticated measures and firms’ proprietary models in determining capital requirements could also pose challenges for regulators and industry representatives in promptly analyzing and addressing policy or administrative issues in capital standards. For example, representatives from a number of firms we spoke with said that as their internal modeling and capital allocation processes become more complex, it is more difficult for managers who do not necessarily have the technical expertise to judge the quality of the models, processes, and their results. 
In the view of the Federal Reserve Chairman, no matter how complex capital requirements become, firms will develop new products to exploit the inevitable remaining distortions in the regulations and thereby lower their capital requirements. As previously discussed, one example of such distortions is the current credit risk-based capital rules' treatment of all commercial loans as if they had equal degrees of riskiness. As discussed in the next section, some other experts argue that trying to address all of the firms' potential activities through increasingly sophisticated capital regulation is impossible. They suggest that simplified regulations, such as an incentive-based approach or an approach based on strict supervisory oversight and increased disclosure, would be a better way to implement capital requirements.

The importance of these and other issues is apparent in the initiatives discussed in the next section. Some of the issues have more relevance for some of the initiatives than for others. Some of the initiatives are actual proposals, and others are still in the exploratory stage; because they are all either proposals or ideas being explored, we did not evaluate them. Most of the initiatives discussed below are attempts to make capital requirements more sensitive to risks in firm activities, and others represent new approaches to capital regulation.

Banking regulators have noted, however, that in considering new approaches to capital regulation, there are both statutory and international constraints on the changes they can make. With regard to statutory constraints, because FDICIA institutionalized regulatory capital using risk-weights plus leverage as a matter of law, regulatory capital with a risk-based component will be an integral part of the overall U.S. supervisory approach until Congress changes it. Internationally, U.S. bank regulators have agreed to coordinate their capital regulations with those of the other Basle Committee members, and U.S. regulators are actively involved in the committee's work. Although the Basle Committee is aware of and is studying many of the initiatives proposed in the United States, none of the U.S. regulators we spoke with expected any unilateral new approaches or any major changes to the Basle Accord or its approach in the near future.

Regulatory agencies and SROs are exploring or have proposed a number of initiatives for modifying or changing current capital requirements in banking, securities, futures, and life insurance that would make the requirements more sensitive to the actual risks in firm activities. The banking initiatives range from a proposal that would allow banks to use credit ratings from rating agencies to determine risk-based capital requirements for certain products to an approach to measuring credit risk that is based on statistical modeling. SEC and CFTC are monitoring and evaluating the DPG's voluntary efforts to relate capital to risks. In addition, SEC issued (1) a concept release on the extent to which a statistical modeling approach should be used by broker-dealers to better reflect market risks in their activities; (2) a proposal that would create a new class of broker-dealers, called OTC derivatives dealers, that would be subject to modified capital requirements in connection with conducting an OTC derivatives business; and (3) proposed amendments to the net capital rule regarding the method of computing haircuts applicable to interest rate products.
CFTC is also exploring whether the regulatory structure should be changed for OTC derivatives dealers. Two futures industry exchanges have taken steps to make minimum capital requirements more risk-based to reflect the total risks to the FCM. Although life insurance industry regulators have no current plans to fundamentally change their formula-based approach to setting capital requirements, they are working to modify various components of the current risk-based capital requirements.

Bank regulators have recently proposed revisions to the risk-based capital standards that, if adopted, would affect the method used to measure the relative exposure to credit risk for certain products. This is in response to the concerns, discussed earlier, about the imprecise nature of the current credit risk-based capital standards, which have created conflicts between the regulators and banks. In addition, regulators are exploring other modifications to the standards that would more precisely measure the credit risks firms face in their activities.

In November 1997, the banking regulators asked for comments on a proposal that would revise the risk-based capital standards to allow the use of credit ratings from the nationally recognized statistical rating agencies (e.g., Moody's Investors Service) to measure relative exposure to credit risk and to determine the associated risk-based capital requirement for certain products. The regulators believe the use of credit ratings would provide a way for them to use market determinations of credit quality to identify different loss positions for capital purposes in an asset securitization structure. Such a change might open the way for them to determine capital requirements more precisely across a wide variety of transactions and structures in administering the risk-based capital system.

Because credit ratings may not exist for some nontraded positions, the regulators are also considering alternative approaches to the use of credit ratings: the ratings benchmark approach and the internal information approaches. Under the first alternative, the regulators would issue benchmark guidelines that banks would use in assessing the relative credit risk of nontraded positions in specified standardized securitization structures. The second alternative consists of two different internal information approaches under which banks would use the credit information they have about the quality of the assets underlying a position to set the capital requirement for that position. The first, the historical loss approach, would take into account unexpected losses over the life of the asset pool. The second, the bank model approach, would base capital requirements for certain positions on the internal risk assessments made by banks' internal models for measuring credit risk. Although regulators have permitted the use of credit ratings for other purposes, these revisions to the credit risk-based capital standards, if adopted, would mark the first time banks were permitted to use credit ratings, benchmarks, or their own internal risk assessments in determining credit risk-based capital requirements.

According to a 1997 paper by two Federal Reserve officials, there is increasing discussion in the banking industry as well as the regulatory community about the possibility of further evolution of bank capital regulation.
This paper was intended to provide the equivalent of a briefing paper on some of the specific alternative proposals that have been put forward concerning the future of capital regulation. In the authors' view, the paper was not intended to pass judgment—positive or negative—on any of these alternatives, but it sought to raise issues that are likely to be important as the discussion of the proposals continues. In considering the possibility of such evolution, these officials believe it is helpful to keep in mind several recent changes that they believe will influence possible future changes. First, the overall approach to bank supervision is undergoing continuing review; for example, the bank examination process has become increasingly focused on risk management and internal controls. Second, banks today, especially large internationally active banks, face a number of different types of risk, some of which, such as the market risk of traded instruments, are easier to quantify than others, such as operational risk. In addition, the computer systems and analytical capabilities that these banks use to measure and manage these risks are themselves evolving.

One modification under consideration would be to continue to extend and revise the existing risk-based standards with the goal of improving the extent to which the risk weights for credit risk reflect the true economic risk of the underlying positions. As discussed in chapter 2, many bankers have commented that some of the current risk-weights do not accurately reflect the risk inherent in particular assets, and some have argued that the current risk-based capital framework introduces distortions into the risk-return trade-offs that banks face. Such changes may help address the issue of inappropriate incentives being created for firms by the current risk-weighting scheme. However, in the view of these Federal Reserve officials, it is not clear that it is possible to better correlate the regulatory risk calculation with true economic risk. Many believe that eliminating inefficiencies in the risk weights would require marking loan portfolios to market; however, there is no consensus that this is desirable or feasible, because there is no readily available resale market for most loans and, therefore, no current market value for them.

Some in the industry are exploring the possibility of using portfolio-based models of credit risk for regulatory capital purposes, much as banks' internal models of market risk are now being used. According to these Federal Reserve officials, these credit risk models have yet to be empirically tested, and such testing appears to require long periods because of the time needed to observe changes in credit risk. One model, called CreditMetrics, was introduced in 1997 and was accompanied by statements that a primary goal was to encourage a change in regulatory credit risk capital calculations. One problem in the development of credit risk models noted in the paper is that data for such models are sparse. Also, it is not clear what the appropriate holding period is in the case of credit risk. Another issue, noted in the paper, is how far to take this modeling—is there value in attempting to include operational risk, for example, in such a framework?
SEC, CFTC, and some SROs are continuing efforts to revise regulatory capital charges to (1) reflect more precisely the economic risks being undertaken by broker-dealers and FCMs and (2) reduce incentives for some broker-dealers and FCMs to conduct certain activities through their unregistered affiliates to avoid capital requirements that apply only to registered broker-dealers and FCMs. For example, the current SEC and CFTC capital requirements consider any net interest payments due a broker-dealer or FCM from interest rate swaps to be unsecured receivables. As such, they are deducted from the firm's GAAP equity (the equivalent of a 100 percent capital charge). Many broker-dealers and FCMs consider this charge to be an excessive capital requirement. However, if these same swaps were conducted in an unregistered affiliate, they would not be subject to capital requirements.

Value-at-Risk (VAR), a statistical modeling approach, is increasingly being used by a few large broker-dealers in varying ways to measure, control, and report the amount of market risk incurred in their trading activities. According to market participants, SEC's current net capital requirements do not accurately reflect the economic risks being taken in a broker-dealer's activities because such requirements do not incorporate modern finance and risk management techniques. Because the current net capital rule generally does not recognize portfolio diversification, correlation among asset prices, or the many hedging strategies firms employ to reduce their risk, these market participants argue that capital requirements and risk do not always move in the same direction, as they should (that is, if risks increase, capital requirements should increase, and vice versa). Accordingly, certain broker-dealers and their industry associations have been urging SEC to allow broker-dealers, like banks, to use their internal models to determine regulatory capital requirements for market risk. In response, SEC issued a concept release in December 1997 on the extent to which a firm's statistical models might be used in setting capital requirements for a broker-dealer's proprietary positions. The statistical modeling approach is intended to more accurately reflect the risk-return trade-off and the relationship between risks and regulatory capital. In the same concept release, SEC discussed the possibility of adopting a "precommitment" feature similar to that being considered by banking regulators.

In 1995, DPG member firms, in coordination with SEC and CFTC, developed a self-regulatory framework to address public policy issues raised by the OTC derivatives activities of "unregulated affiliates of SEC-registered broker-dealers and CFTC-registered FCMs." DPG's voluntary self-regulatory framework includes, among other things, a provision for evaluating risk in relation to capital. As noted in the framework, this initiative is considered part of a process, not a single event; as DPG member firms and SEC and CFTC gain insights, they anticipate further refinements to the framework. The "risk in relation to capital" provision of the framework has two parts. First, it suggests a way to estimate market and credit exposures associated with OTC derivatives activities. The market risk approach is similar to the approach used by bank regulators in that it uses internal models, but the credit risk approach is different in that it is based on rating agency information, not on regulatory risk-weights.
Second, it advocates an approach for evaluating those risks in relation to capital. According to the DPG framework, capital-at-risk estimates are imperfect measures of potential losses associated with market and credit risks. However, the framework noted that managers and supervisors can use them to gauge capital adequacy, and the firms have agreed to report their estimates periodically to SEC and CFTC. Although the DPG firms' estimates of capital-at-risk are not intended to be capital standards, the estimates incorporate elements similar to some of those used in the banks' risk-based capital regulations. The DPG capital-at-risk for market risk is to be generated by the DPG reporting firm's internal model using the same parameters (10-day price shock, 99 percent confidence interval) required by the bank regulators. DPG, however, rejected the use of a multiplier to link capital-at-risk to capital levels. Moreover, for credit risk, DPG adjusts for historical default ratios as published by the rating agencies. The DPG firms rejected the bank regulators' method of estimating potential future credit risk because that method is based on notional/contract amounts, which the DPG firms do not consider to be meaningful measures of risk. The DPG firms consider this to be an interim approach for estimating current and potential credit risk; they noted in the framework that they anticipate cooperating with requests by SEC and CFTC to compute potential credit risk using other methodologies. Because the potential exists for losses beyond the capital-at-risk estimate, the DPG firms agreed to supplement these estimates with other potential loss estimates resulting from defined stress scenarios.

Because the framework allows the DPG firms to use internal models that may be unique, it also outlines a common approach to auditing and verifying the models' technical and performance characteristics. DPG member firms developed minimum standards and audit and verification procedures to ensure that the performance characteristics of all models used to estimate capital-at-risk for market risk are broadly similar and rigorous. SEC and CFTC have received annual reports from the DPG reporting firms that summarized external auditors' reviews of these models. However, because no generally accepted criteria for modeling yet exist that would allow an external auditor to give an opinion on a model's adequacy, the independent accountants filed reports covering only limited agreed-upon procedures with respect to their reviews of these models.

In the second part of the framework's capital-at-risk component, the DPG firms advocate, for a transitional period, an approach for evaluating market- and credit-risk estimates in relation to capital levels. To evaluate the adequacy of existing capital levels at DPG-member affiliates, the framework advocates an oversight approach that encourages regulators and senior managers to take into account the following factors: the firm's structure, internal controls, and risk management systems; the quality of management; the firm's risk profile and credit standing; actual daily loss experience; the firm's ability to manage risks as indicated by its ability to perform and document stress testing; and overall compliance with the framework's policies and procedures. The DPG firms anticipate that as experience is gained with the overall DPG framework, and depending on the evolution of thinking and policies among regulators internationally, this approach may require further refinement or modification.
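As context for the discussion that follows, the sketch below illustrates the kind of VAR calculation these internal models perform, using the 99 percent confidence level and 10-day price shock parameters cited above. This is our simplified, hypothetical illustration, not any DPG firm's or regulator's actual model; real models must also handle correlations across positions, options nonlinearities, and far richer data.

```python
# Minimal illustration of a historical-simulation VAR calculation.
# A simplified sketch, not any firm's or regulator's model; the daily
# profit-and-loss history below is hypothetical.
import random

def historical_var(daily_pnl, confidence=0.99):
    """Loss level that past daily losses exceeded only (1 - confidence) of the time."""
    losses = sorted(-p for p in daily_pnl)                 # positive values are losses
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

random.seed(0)
pnl = [random.gauss(0, 1_000_000) for _ in range(250)]     # about 1 year of trading days

var_1day = historical_var(pnl, confidence=0.99)
var_10day = var_1day * 10 ** 0.5                           # square-root-of-time scaling

print(f"1-day 99% VAR:  ${var_1day:,.0f}")
print(f"10-day 99% VAR: ${var_10day:,.0f}")
```

The square-root-of-time scaling shown is one common approximation for deriving a 10-day figure from 1-day estimates; a firm may instead compute the 10-day shock directly.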
Concerns remain about using internal models for regulatory purposes (e.g., validating the accuracy of their results). However, SEC and CFTC have been collecting and examining data from broker-dealers' internal models, via DPG, to gain a better understanding of the manner in which the models operate and the adequacy of the capital charges derived from them.

Current capital and margin requirements applicable to registered broker-dealers impose substantial costs on the operation of an OTC derivatives business and make it difficult for U.S. securities firms to compete effectively with banks and foreign dealers in the OTC derivatives markets. In December 1997, in order to allow broker-dealers to take better advantage of counterparty netting and to adjust the capital rule to better reflect the risks of OTC derivatives, SEC proposed the creation of a new class of broker-dealers called OTC derivatives dealers. This limited regulatory structure would be available only to entities acting primarily as counterparties in privately negotiated over-the-counter derivatives transactions, and these entities would be subject to modified capital, margin, and other regulatory requirements tailored to the OTC derivatives business. For example, under the limited regulatory structure, OTC derivatives dealers would be required to maintain at least $100 million in tentative net capital (i.e., capital before haircuts and undue concentration charges are taken) and at least $20 million in regulatory net capital. Also, OTC derivatives dealers would be exempted from certain margin requirements. SEC believes the proposed minimum of $100 million in tentative net capital is necessary to ensure against excessive leverage and risks other than credit or market risk, all of which are now factored into the current haircuts, and to provide a cushion of capital against severe market disturbances.

Under the proposal, OTC derivatives dealers would be given the option of either taking haircuts, as currently required under SEC's net capital rule, or applying the proposed rule to calculate capital charges for credit risk and using a VAR model to determine capital charges for market risk. SEC's proposed rule would require that any VAR model meet certain minimum qualitative and quantitative requirements. In calculating capital charges for market risk, OTC derivatives dealers could elect one of two methods. First, they could use the full VAR method to calculate capital charges for market risk exposure for transactions in eligible OTC derivatives instruments and other proprietary positions; under this method, the market risk capital charge would equal the VAR of the dealer's positions multiplied by a factor specified in the proposed rule. Second, an OTC derivatives dealer could use an alternative method of computing the market risk capital charge for equity instruments and OTC options and use VAR for its other proprietary positions. Because OTC derivatives dealers would be required to obtain authorization from SEC before using VAR models, this alternative method would also be used by a firm that does not receive SEC authorization to use a VAR model for equity instruments. In calculating capital charges for credit risk, OTC derivatives dealers electing to apply the proposed rule would compute a two-part charge on a counterparty basis.
First, for each counterparty, an OTC derivatives dealer would take a capital charge equal to the net replacement value in the account of the counterparty multiplied by 8 percent and further multiplied by a counterparty factor ranging from 20 to 100 percent, based on the counterparty's rating by at least two nationally recognized statistical rating agencies. The counterparty factors would link the size of the credit risk capital charge to the perceived risk that the counterparty may default. The second part of the credit risk charge would be a concentration charge that would apply when the net replacement value in the account of any one counterparty exceeds 25 percent of the OTC derivatives dealer's tentative net capital; it, too, would be based on the counterparty's rating by at least two rating agencies. The concentration charge would equal 5 percent of the net replacement value in excess of 25 percent of the OTC derivatives dealer's tentative net capital for counterparties that are highly rated and would increase in relation to the dealer's exposure to lower rated counterparties.

In addition to the OTC derivatives dealers release, in December 1997 SEC proposed amendments to the net capital rule, Rule 15c3-1, regarding the method of computing haircuts applicable to interest rate products. The proposed amendments would treat most types of interest rate products as part of a single portfolio and would recognize various hedges among a portfolio of government securities, investment grade nonconvertible debt securities (or corporate debt securities), certain pass-through mortgage-backed securities, repurchase and reverse repurchase agreements, money market instruments, futures and forward contracts on these debt instruments, and other types of debt-related derivatives. The proposed amendments are intended to better match capital charges with the actual market risk hedging practices employed by broker-dealers.

As part of its comprehensive regulatory reform efforts to update its oversight of both exchange and off-exchange markets, CFTC published a concept release in May 1998 on issues relating to the OTC derivatives market. The concept release requests comments on whether the regulatory structure applicable to OTC derivatives under CFTC regulations should be changed in light of the growth in the derivatives marketplace since CFTC's last major regulatory actions involving OTC derivatives in 1993.

In 1995, two futures industry SROs, the Chicago Board of Trade (CBOT) and the Chicago Mercantile Exchange (CME), informally proposed to CFTC that minimum capital requirements be based on "funds at risk" rather than on the current "funds required to be segregated." In their view, the current CFTC net capital requirements (CFTC Rule 1.17) do not fully reflect all of the risks (e.g., foreign customers trading in foreign markets) faced by FCMs' trading activities and thus impose insufficient capital requirements on FCMs. Funds at risk are generally defined as the initial margin requirements, which are themselves risk-based, imposed by the various exchanges on all open positions held at those exchanges; segregated funds are generally balances that FCMs owe to customers. The CBOT and CME risk-based capital proposals were positively received by CFTC, which consulted with the SROs on the parameters of the risk model. The SROs' risk-based capital requirements for their clearing organizations became effective on January 1, 1998.
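To make SEC's proposed two-part credit risk charge for OTC derivatives dealers (described above) concrete, the following sketch applies the proposal's 8 percent, 25 percent, and 5 percent parameters to hypothetical data. The mapping of rating grades to counterparty factors is our assumption; the proposal specifies only that the factors range from 20 to 100 percent based on ratings from at least two rating agencies.

```python
# Illustrative sketch of SEC's proposed two-part credit risk charge for
# OTC derivatives dealers. The 8, 5, and 25 percent figures come from the
# proposal as described above; the rating-to-factor mapping and the sample
# amounts are our assumptions.

COUNTERPARTY_FACTORS = {"AAA/AA": 0.20, "A": 0.50, "BBB or below": 1.00}  # assumed mapping

def credit_risk_charge(net_replacement_value, rating, tentative_net_capital):
    # Part 1: net replacement value x 8 percent x counterparty factor.
    base = net_replacement_value * 0.08 * COUNTERPARTY_FACTORS[rating]

    # Part 2: concentration charge on exposure above 25 percent of tentative
    # net capital (5 percent for highly rated counterparties; the proposal
    # would use higher rates for lower rated ones).
    excess = max(0.0, net_replacement_value - 0.25 * tentative_net_capital)
    return base + 0.05 * excess

# Hypothetical dealer at the proposed $100 million tentative net capital minimum.
charge = credit_risk_charge(net_replacement_value=30_000_000, rating="AAA/AA",
                            tentative_net_capital=100_000_000)
print(f"Credit risk capital charge: ${charge:,.0f}")   # $730,000
```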
Under the newly adopted risk-based capital requirements, all members of the two SROs are required to maintain adjusted net capital in excess of the greatest of (1) the minimum dollar balances of the respective clearing organizations; (2) 10 percent of domestic and foreign domiciled customer, and 4 percent of noncustomer (excluding proprietary), risk maintenance margin/performance bond requirements for all domestic and foreign futures and options on futures contracts; or (3) the CFTC/SEC minimum regulatory capital requirements. CME and CBOT believe that these new requirements will correlate FCMs' capital requirements more closely with the total risks they face in their business. To aid in the adoption of an industrywide risk-based capital standard, CBOT and CME plan to collect and analyze data over several years to determine the effect of the new requirements on overall industry capital levels.

NAIC has asked the American Academy of Actuaries to study the possibility of increasing the level of quantification in the interest rate risk component of the life insurance risk-based capital requirements. The impetus for changing the interest rate risk component is the difficulty of managing interest rate risk for life insurance companies. This difficulty increases as financial products become more complex, and interest rate risk exists in both the assets and the liabilities of life insurance companies; the current risk-based capital formula addresses interest rate risk only on the asset side. Changes to the interest rate risk component of the risk-based capital requirements are seen by life insurance regulators and companies as a major change. Other changes being made to the risk-based capital formula are considered minor modifications. For example, since the initial formula was adopted, changes have been made to the mortgage loan factors, and insurers' investments in certain mutual funds have been given different treatment depending on what the funds invest in. Regulators we spoke with expected further changes similar to these in the future.

In addition to initiatives that would make regulatory capital requirements more sensitive to risks in firms' activities, a number of other ideas are being explored, primarily in banking, that would take different approaches to simplifying capital regulation. Three of them would use various incentives rather than detailed requirements to deter excessive risk-taking by firms. The final idea in this section is motivated by a desire of some in the industry to keep capital requirements from becoming extremely complex and by the recognition that minimum regulatory capital standards and banks' own internal capital allocation models serve different purposes.

In July 1995, the Federal Reserve Board requested public comment on the so-called precommitment approach to market risk capital requirements, which was introduced in a paper by two Federal Reserve officials. It was developed in response to perceived difficulties with the internal models approach to market risk capital. For example, banks' internal models are not designed to measure risk exposure over the time horizons of regulatory concern and thus may not translate accurately to those intervals. In addition, model-based capital calculations cannot account for the fact that some banks will be in a position to reduce their exposure to losses through investment in superior information systems or other aspects of risk management.
Under the precommitment approach, the bank would specify an amount of capital it believed was adequate to cover its risk exposure over a fixed subsequent interval and would commit to manage its trading portfolio to limit losses over the interval to this amount. If the bank's losses exceeded the precommitted amount, it would face penalties that could range from public disclosure to additional capital requirements or monetary fines. Under this approach, both the commitment and the bank's risk management system would be subject to review by regulatory authorities. The penalties associated with a breach of the capital commitment are intended to provide the incentive for banks to commit honestly and to manage risk so as to stay within the commitment.

Some industry analysts considered such an approach to be a major improvement in capital regulation. However, others believe the approach raises a number of issues because of its departure from traditional capital regulation—comparability, interaction with other supervisory policies, enforceability, and the role of penalties. Regarding comparability, on the surface it would seem that precommitted amounts would be comparable across firms because the firms are all being asked about the maximum amount they could lose over the same time interval. However, the amounts are likely to differ because they would still be based on subjective estimates of the quality of internal risk management and on differences in firms' tolerances for risk. Comparability might also be compromised because the cost of capital differs across firms. With regard to interaction with other supervisory policies, bank supervisors are already required to focus on banks' internal risk measurement and management systems; thus, it is not clear that adopting the precommitment approach would eliminate supervisory interest in the validation of such systems.

With regard to enforcement and the role of penalties, there is a concern not only about the types of penalties that should be used but also about whether it would be counterproductive to enforce them during stressful market conditions. The paper notes that in choosing penalties, it will be important to determine what the goal of penalties is—that is, the degree of incentive they are to provide the bank. Some experts believe that to reliably achieve regulatory objectives, the penalties would need to be bank specific and that the appropriate penalty would depend on a bank's cost of capital and its individual investment opportunities; however, these factors are not ascertainable by regulators. In addition, recent work by the original designers of the precommitment approach acknowledges that the link between after-the-fact penalties and regulatory capital objectives is tenuous. In the view of a Federal Reserve official, the appropriate penalty for achieving regulatory capital objectives for market risks is bank specific and depends on characteristics that regulators cannot precisely measure. Moreover, an approach that relies on after-the-fact penalties to influence bank behavior implicitly assumes that the bank is forward-looking and takes potential penalties into account when making current capital allocation decisions. This might be a reasonable assumption for healthy banks, but weak banks may not care about future penalties that, in the extreme, might not be enforceable if the bank is insolvent.

The New York Clearing House Association (Clearing House) conducted a four-quarter test of the precommitment approach that began in October 1996.
The pilot was designed to assist the bank regulators and the participating banks and bank holding companies in evaluating and assessing the usefulness and viability of the approach for regulatory capital purposes. In a comment letter to the Federal Reserve, the Clearing House suggested that the U.S. bank regulators consider adopting this approach for two reasons: (1) it might constitute a way to effectively establish a relationship between an institution's calculation of value-at-risk for management purposes and prudent capital requirements for regulatory purposes, and (2) it would result in capital requirements for market risks tailored to the particular circumstances of each institution. There were 10 participants in the pilot—8 U.S. and 2 foreign banking organizations. During the pilot, each participant precommitted the amount of capital it needed to hold against its market risk for four 3-month periods. The pilot was conducted on a consolidated basis in that participants precommitted capital for the consolidated trading operation of the holding company, including bank and Section 20 subsidiaries. After the end of each period, participants reported their results to their primary regulators and provided copies of the reports to the Clearing House. The participants conducted the pilot under the assumption that the penalty would be disclosure, not financial penalties.

In its report on the pilot's results, the Clearing House said that the participants believe (1) the precommitment approach is a viable alternative to the internal models approach for establishing the capital adequacy of a trading business for regulatory purposes and (2) when properly structured and refined, it should be implemented as an alternative to existing market risk capital standards. Further, the participants believe this approach provides strong incentives for prudent risk management and more efficient allocation of capital compared with other existing capital standards. The Clearing House believes the pilot contributed to the development and depth of the participants' thinking about the purpose of capital and about the distinction between economic capital maintained for the benefit of shareholders and minimum regulatory capital. Pilot results showed that the participants' precommitted capital amounts were less than the market risk regulatory capital requirements, and no participant reported a negative change that exceeded its precommitted capital amount. Finally, the participants believe the benefits of the precommitment approach are likely to apply to other risks of trading businesses, such as operational risk, as well; in their view, the approach avoids many of the complications and inefficiencies that are generated when capital requirements are set separately for each category of risk.

One high-level regulatory official, reflecting generally held regulatory and industry views, said that the pilot demonstrated that the participants have internal procedures for allocating capital for market and other risks in their portfolios, but it did not, and realistically could not, demonstrate that these internal allocations are sufficiently large to meet regulatory objectives with respect to minimum capital. Moreover, although none of the participants reported losses in excess of their commitments during the pilot, none of them incurred any cumulative loss over any of the four quarters; hence, no violations would have occurred even if no capital had been committed.
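The pilot's compliance test reduces to a simple comparison: a participant would have violated its commitment only if its cumulative trading loss over a quarter exceeded the precommitted amount. A minimal sketch, with hypothetical figures (in the pilot, the assumed penalty for a violation was disclosure):

```python
# Sketch of the precommitment compliance test described above: a violation
# occurs only if the cumulative loss over the period exceeds the capital
# the firm precommitted. All figures are hypothetical.

def precommitment_violated(precommitted_capital, period_pnl):
    """True if the cumulative trading loss over the period exceeds the commitment."""
    cumulative_loss = -sum(period_pnl)
    return cumulative_loss > precommitted_capital

quarter_pnl = [1_500_000, -4_000_000, 2_000_000, -1_000_000]   # hypothetical daily P&L
print(precommitment_violated(precommitted_capital=10_000_000,
                             period_pnl=quarter_pnl))          # False: loss within commitment
```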
Another alternative approach to the evolution of bank capital regulation would be one that emphasizes supervision rather than minimum standards. In a 1995 paper, a Federal Reserve official argued that the distinct uses and characteristics of minimum regulatory capital requirements and firms' internal capital allocations make it inadvisable to combine them into a single measure. In his view, they are so naturally contradictory that a hybrid would be much less informative than two individual measures. Moreover, he believes an attempt to bring the two constructs closely in line could undermine the useful objectivity of minimum capital and deprive firms of the flexibility they need to determine optimum capital levels. Under this approach, the firm would be accountable for determining its own appropriate level of capital while abiding by sound practices developed in the context of its business. Firms engaged in trading complex instruments would need to apply sophisticated mathematical techniques; those that focus on, for example, small business lending would apply different techniques (e.g., traditional credit analysis). The supervisor would monitor the firm's performance in determining its appropriate capital level.

Like the precommitment approach discussed above, this approach also relies on incentives. However, in contrast to the precommitment approach, in which penalties are to act as a deterrent to excessive risk-taking, the key to the success of this approach would be the supervisor. The supervisor would monitor compliance with minimum requirements as frequently as feasible and would then supplement the effectiveness of minimum requirements by ensuring that the firm makes its best efforts to determine an optimum level of capital. In this way, the development and determination of the optimum are left to the firm, which is best positioned to make them, and supervisors would work closely with the firm to ameliorate the situation if they found capital levels declining toward the minimums. The Federal Reserve official also believes this approach would be consistent with the prompt corrective action rules. (See app. I for more information on prompt corrective action.)

In their 1997 paper, two Federal Reserve officials noted that a number of different approaches exist that would emphasize disclosure rather than minimum standards. One of these approaches would operate along the same lines as the approach emphasizing supervision discussed above. It would develop a two-pronged capital structure that would separate minimum standards, which would be set by the supervisor, from the optimal capital held by the firm, which should be the firm's own decision. The first prong could be a minimum capital calculation in which the method would be chosen to emphasize comparability across firms; the second prong would be an internal capital calculation in which the bank would have greater freedom to use its own methodology. The bank would publicly disclose the results of both calculations. In the authors' view, this approach would seek to combine public disclosure and the discipline of the marketplace to ensure that banks had appropriate incentives in developing these internal calculations.

A number of other approaches to capital regulation are being explored, particularly in the banking area, that would also simplify capital requirements.
One possible approach discussed by the Federal Reserve officials in their paper is, in their view, motivated by the desire of some in the industry to keep the capital rules from becoming extremely complex and by the recognition that minimum capital standards serve a different purpose from banks' own internal capital allocation models. This approach would develop a capital framework that would not require ever more complex measures of portfolio risk. The hope in developing such a framework is that a suitable proxy for true economic risk could be found. This proxy would not be intended to be extremely precise, but it would need to roughly capture the bulk of the firm's exposures. According to the Federal Reserve officials, the key issue for this alternative approach is whether it is possible to achieve this goal. There are two interpretations of this approach. In the first, the aim is for the simple measure of risk to be roughly accurate in that, on average, it produces a measure of risk equivalent to what an ideally precise measure would produce. In the second, the goal is a simple measure of risk that is good enough to determine whether the firm has a dangerously low level of capital.

Another approach discussed by the Federal Reserve officials in their paper would base capital regulation on observed measures of volatility, such as earnings volatility. This approach is also motivated by a desire to develop a simple but comprehensive approach to bank capital regulation that would not require the separate specification of each risk. One possibility suggested in the paper would be for minimum capital to equal some multiple of quarterly earnings volatility. Such an approach would require almost no additional calculations by the bank, and in the authors' view it would be objective and verifiable. However, they noted a number of drawbacks. First, it is not clear that earnings volatility is itself a good proxy for economic risk. Second, because the measure is a transformation of publicly available information, it does not provide any additional information to the marketplace. Third, it would potentially create incentives for bank behavior aimed at smoothing reported income.

The current U.S. bank risk-based capital regulations implement the Basle Accord on risk-based capital. In implementing the Basle Accord, each national bank regulator was to make its own regulations at least as strict as the Accord. U.S. bank regulators applied the Accord to all banks, rather than just the internationally active ones targeted by the Accord. Since 1990, banks and bank holding companies in the United States have been subject to risk-based capital standards. This appendix describes the risk-based capital standards for banks; the standards for bank holding companies are similar.

Although U.S. bank risk-based capital guidelines address a number of types of risk, only credit and market risk are explicitly quantified. The quantified risk-based capital standard is defined in terms of a ratio of qualifying capital divided by risk-weighted assets. In addition to the quantified risk-based capital ratio for credit and market risks, bank regulators are required by the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) to monitor other risks, such as interest rate risk and concentration risk.
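In symbols, the quantified standard discussed in the remainder of this appendix takes the following form; the qualifying capital components and the risk-weighting of assets are defined in the sections that follow:

\[
\text{risk-based capital ratio} \;=\; \frac{\text{qualifying capital}}{\text{risk-weighted assets}}
\]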
All banks are required to calculate their credit risk for assets, such as loans and securities, and for off-balance sheet items, such as derivatives or letters of credit. The credit risk calculation assigns all assets and off-balance sheet items to one of four broad categories of relative riskiness (0, 20, 50, or 100 percent) according to the type of borrower/obligor and, where relevant, the nature of any qualifying collateral or guarantee. Off-balance sheet items are converted into credit equivalent amounts. The assets and credit equivalent amounts of off-balance sheet items in each category are multiplied by their appropriate risk-weight and then summed to obtain the total risk-weighted assets for the denominator of the credit risk-based capital ratio. Capital, the numerator of the capital ratio, consists of the long-term funding sources for the bank that are specified in the regulations. A bank is to maintain a total risk-based capital ratio (total capital/risk-weighted assets) of at least 8 percent.

The credit risk regulation requires the use of two sets of multipliers. One set places each off-balance sheet item into one of four categories and converts items in each category into asset equivalents. These conversion factors are multiplied by the face or notional amount of the off-balance sheet items to determine the "credit equivalent" amounts. In addition, for derivatives, these credit equivalent amounts are the value of the bank's claims on the counterparties plus add-on factors to cover the potential future value of the derivative contracts. The other set of multipliers then applies the risk-weights to assets and off-balance sheet credit equivalent amounts according to the type of borrower/obligor (and, where relevant, the nature of any qualifying collateral or guarantee). The sum of the risk-weighted assets in all categories is the credit risk-weighted assets for the bank.

There are four conversion factors that convert off-balance sheet items into their asset equivalents; the conversions are based on multiplying the conversion factors by the face or notional amounts of the relevant off-balance sheet positions. The 100 percent credit conversion factor applies to direct credit substitutes, such as guarantee-type letters of credit, risk participations in bankers acceptances, and asset sales with recourse. The 50 percent credit conversion factor applies to items such as performance bonds, revolving underwriting facilities, or unused commitments with an original maturity exceeding 1 year. The 20 percent credit conversion factor applies to short-term, self-liquidating, trade-related contingencies, including commercial letters of credit. The 0 percent credit conversion factor applies to unused portions of commitments with an original maturity of 1 year or less and unused portions of commitments that can be cancelled at any time.

Credit equivalent amounts are also calculated for off-balance sheet derivatives contracts. The credit equivalent amounts on such contracts are the sum of the present positive value (if any) of the contracts plus estimated potential future exposure. Under the capital regulations, the credit equivalent of the potential future exposure of derivatives contracts is estimated by multiplying the notional values of the contracts by specified percentages.
The multipliers range from 0 to 15 percent; cover 6 types of derivatives contracts (interest rate, exchange rate, equity, gold, other precious metals, and other commodities); and include maturity categories of 1 year or less, 1 to 5 years, and over 5 years.

Although the Basle Accord adopted five risk-weight categories, U.S. regulations allow only four. Category 1 has a zero risk-weight and includes items such as cash, claims on Organization for Economic Cooperation and Development (OECD) central governments and central banks, and claims on U.S. government agencies; the zero weight reflects the lack of credit risk associated with such positions. Category 2 has a 20-percent risk-weight and includes items such as long-term claims on banks in OECD countries, general obligations of OECD governments below the national level, obligations of government-sponsored enterprises, and cash items in the process of collection. Category 3 has a risk-weight of 50 percent and includes items such as certain loans secured by first liens on 1-to-4 family residential real estate and obligations of local governments in OECD countries that depend on revenue flows from projects financed by the debt. Category 4 has a risk-weight of 100 percent and represents the presumed bulk of the assets of commercial banks; it includes, among other things, commercial loans and claims on non-OECD central governments.

Before the capital ratio can be calculated, capital must be defined and quantified. There are two qualifying capital components in the credit risk-based capital computation—"core capital" (tier 1) and "supplementary capital" (tier 2). Tier 1 includes common stockholders' equity; noncumulative perpetual preferred stock (including any related surplus); and minority interests in consolidated subsidiaries, less deductions for certain assets such as goodwill and core deposit intangibles. Tier 1 is stockholder ownership value that cannot be withdrawn if the bank faces financial difficulties. Tier 2 includes the allowance for loan loss reserves, up to a maximum of 1.25 percent of risk-weighted assets; other preferred stock (subject to limitations); and various long-term debt instruments, such as subordinated debt, that provide support to the firm if it is facing financial difficulties because they cannot readily be liquidated by creditors or bondholders prior to maturity. In addition, the regulations limit the amount of tier 2 capital in total capital and the amount and type of qualifying intangible assets that can be recognized for tier 1 capital purposes.

The regulation outlines a number of deductions from the capital base. Goodwill and other intangible assets are to be deducted from tier 1 capital as prescribed in the rules. Other deductions from total capital include investments in unconsolidated banking and financial subsidiary companies that are deemed to be capital of the subsidiary and reciprocal bank holdings of investments in the capital of other banks and financial institutions.

With capital and risk-weighted assets defined, the ratio calculation is the sum of tier 1 and tier 2 capital divided by total risk-weighted assets. Table I.1 summarizes the mechanics of converting on- and off-balance sheet assets in each category into risk-weighted assets and computing the credit risk-based capital ratio. The minimum standard risk-based capital ratio is 8 percent, of which core capital (tier 1) is to be at least 4 percent.
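These mechanics, which table I.1 summarizes and which are enumerated in the steps below, reduce to a short computation. The following sketch is our illustration only: the amounts, item mix, and capital figures are invented, while the conversion factors and risk-weights are those described above.

```python
# Sketch of the table I.1 mechanics for the credit risk-based capital ratio.
# Conversion factors and risk-weights are those described above; the balance
# sheet amounts and capital figures for this hypothetical bank are invented.

# (asset amount, risk-weight) pairs for on-balance sheet assets
on_balance_sheet = [
    (100_000_000, 0.00),   # cash and OECD central government claims
    (200_000_000, 0.20),   # claims on OECD banks, GSE obligations
    (300_000_000, 0.50),   # qualifying 1-to-4 family residential mortgages
    (400_000_000, 1.00),   # commercial loans and other category 4 assets
]

# (face/notional amount, conversion factor, risk-weight) for off-balance sheet items
off_balance_sheet = [
    (50_000_000, 1.00, 1.00),   # standby letter of credit (direct credit substitute)
    (80_000_000, 0.50, 1.00),   # unused commitment, original maturity over 1 year
    (40_000_000, 0.20, 1.00),   # commercial letters of credit
]

risk_weighted_assets = sum(amount * weight for amount, weight in on_balance_sheet)
risk_weighted_assets += sum(face * conversion * weight
                            for face, conversion, weight in off_balance_sheet)

tier1, tier2 = 40_000_000, 20_000_000          # hypothetical qualifying capital
ratio = (tier1 + tier2) / risk_weighted_assets

print(f"Risk-weighted assets: ${risk_weighted_assets:,.0f}")
print(f"Total risk-based capital ratio: {ratio:.1%} (minimum 8%, tier 1 at least 4%)")
```

The steps that table I.1 summarizes are as follows.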
1. Convert all off-balance sheet items into credit equivalent amounts using the conversion factors from the regulation. The asset equivalent of each off-balance sheet item is its notional or face amount multiplied by a conversion factor. Place the converted amount of each off-balance sheet item into one of the four risk categories.
2. Sum the balance sheet asset values and the credit equivalent amounts of off-balance sheet items in each risk category.
3. Determine the risk-weighted assets in each risk category by multiplying the balance sheet asset values and the credit equivalent amounts of off-balance sheet items in that category by the appropriate risk-weight percentage from the regulation.
4. Calculate total risk-weighted assets as the sum of the risk-weighted assets across the four risk categories.
5. Calculate the credit risk-based capital ratio: (tier 1 capital + tier 2 capital) / risk-weighted assets.
6. Compare the calculated ratio to the standards in the regulation.

The risk-based capital regulation requires a bank with a significant market risk exposure to calculate a risk-based capital ratio that takes into account market risk as well as credit risk. The market risk capital regulation applies to positions in an institution's trading account, such as securities and derivatives, and to all foreign exchange and commodity positions, wherever they are located in the bank. Market risk exposure is the gross sum of trading assets and liabilities on the bank's balance sheet; to be considered significant, this gross exposure must exceed 10 percent of total assets or exceed $1 billion. Credit risk determinations are also made, where necessary, for items included in the market risk calculation; over-the-counter derivatives and foreign exchange positions outside of the trading account are subject to both market and credit risk charges.

This adjusted risk-based capital ratio requires banks to determine whether positions are subject to market risk capital requirements, credit risk capital requirements, or both. The denominator of the risk-based capital ratio is the sum of credit risk-weighted assets for assets with credit risk and market risk-equivalent assets. To determine market risk-equivalent assets, the bank is required to use its own internal model to calculate its daily value-at-risk (VAR). The numerator of the risk-based capital ratio expands the definition of capital to include tier 3, a special form of subordinated debt defined in the regulations. The market risk regulation imposes qualitative requirements on the banks and specifies quantitative parameters to be used with the banks' internal models.

Market risk consists of general market and specific risk components, and capital charges must be calculated for both to determine the market risk-equivalent assets. Examples of general market risk factors are interest rate movements and other general price movements. Capital charges for general market risks are to be based on internal models developed by each bank to calculate a VAR estimate, i.e., the potential loss that capital will need to absorb. The internal VAR estimate for general market risks is to be based on statistical analyses that determine the probability of a given loss, using at least 1 year of historical data.
This VAR estimate is to be calculated daily using a 99 percent one-tailed confidence interval with a price shock equivalent to a 10-business-day movement in rates and prices; i.e., 99 percent of the time, the calculated VAR would not be exceeded in a 10-day period.

Specific risk arises from factors relating to the characteristics of specific issuers of instruments. Specific risk factors reflect both idiosyncratic price movements of individual securities and "event risk" from incidents, such as defaults or credit downgrades, that are unique to the issuer and not related to market factors. If a bank's internal model does not capture all aspects of specific risk, an add-on to the capital charge is required for specific risk. Specific risk estimates based on internal models are subject to adjustments based on the precision of the model.

The total market risk capital charge is the sum of the capital charges for general market and specific risk. The general market risk charge is based on the larger of the previous day's VAR estimate and the average of the daily VAR estimates for the past 60 days multiplied by a multiplication factor; the factor ranges from 3 up to a maximum of 4, depending on the results of backtesting. Market risk-equivalent assets are the total market risk capital charges multiplied by 12.5.

The market risk capital ratio augments the definitions of qualifying capital in the credit risk requirement by adding an additional capital component (tier 3). Tier 3 capital is unsecured subordinated debt that is fully paid up, has an original maturity of at least 2 years, and is redeemable before maturity only with the approval of the regulator. To be included in the definition of tier 3 capital, the subordinated debt is to include a lock-in clause precluding payment of either interest or principal (even at maturity) if the payment would cause the issuing bank's risk-based capital ratio to fall or remain below the minimum requirement. Tier 3 capital provides another capital cushion against losses due to market risk.

Application of the market risk capital ratio requires the use of a two-part test. The sum of tier 1, 2, and 3 capital must equal at least 8 percent of total adjusted risk-weighted assets, and the tier 3 capital in this sum is only to be allocated to cover market risk. In addition, the sum of tier 2 and tier 3 capital for market risk may not exceed 250 percent of tier 1 capital allocated for market risk. The regulation includes other restrictions on the use of tier 2 and 3 capital. Table I.2 shows the mechanics by which the risk-based capital ratio is calculated for credit and market risk:

1. Determine whether positions are subject to market risk capital requirements, credit risk capital requirements, or both.
2. For the credit risk assets and off-balance sheet items, calculate the credit risk-weighted assets as described in table I.1.
3. Quantify general market risks by using the bank's VAR model to estimate the volatility of the prices of market risk assets and items. The estimated VAR is the higher of the previous day's VAR and the average of the daily VAR estimates for the past 60 days multiplied by a factor of between 3 and 4, depending on the accuracy of the VAR model.
4. Quantify specific risks using risk add-ons, estimates based on the bank's internal model, or some combination of both.
5. Determine the market risk-equivalent assets by summing the measures of general market and specific risks and multiplying this sum by 12.5.
6. Calculate the total risk-weighted assets by summing the credit risk-weighted assets and the market risk-equivalent assets.
7. Determine tier 3 capital and the total capital for the numerator. The mix of the capital tiers in the numerator of the combined credit and market risk-based capital ratio is limited by the regulation.
8. Calculate the total risk-based capital ratio, subject to the capital restrictions in step 7: (tier 1 + tier 2 + tier 3 capital) / (credit risk-weighted assets + market risk-equivalent assets).
9. Compare the calculated ratio to the standards in the regulation.

The regulation requires the bank's internal model to address all major market risk categories, using factors sufficient to measure market risks in all covered positions, and it specifies certain requirements for the model. In developing its internal model, the bank may use any generally accepted measurement technique, such as variance-covariance models, historical simulations, or Monte Carlo simulations; however, the level of sophistication and accuracy of the model must be commensurate with the nature and size of the bank's covered positions. For regulatory capital purposes, the VAR measures must meet the following quantitative requirements:

1. The VAR measure, or maximum likely loss, is to be calculated on a daily basis with a 99 percent one-tailed confidence level and a price shock equivalent to a 10-business-day holding period. This 10-day shock can be calculated directly or be based on the 1-day VAR figures.
2. The VAR calculation is to be based on historical data covering at least 1 year.
3. The VAR calculation is to account for the nonlinear price characteristics of options positions and the sensitivity of the market value of the positions to changes in the volatility of the underlying rates or prices. That is, the calculation must take into account the fact that certain financial positions imply minimal risk for certain market price movements and much larger risks for other market price movements.
4. The VAR measures may incorporate quantified empirical correlations within and across risk categories, provided that the bank's process for measuring correlations is sound.
5. Beginning 1 year after adoption of the rules, backtesting will be required, based on the most recent 250 days of trading. The testing is to be done on a 1-day holding period at a 99 percent one-tailed confidence level.

An institution whose internal model does not adequately measure specific risk must continue to calculate standard specific risk capital charges, or add-ons to the VAR-based capital charge, to determine market risk capital requirements. An institution whose internal model adequately captures specific risk may base its specific risk capital charge on the model's estimates. Specific risk means the changes in the market value of specific positions due to factors other than broad market movements, including idiosyncratic variations as well as event and default risk. In order to capture specific risk, the internal model is to explain the historic price variation in the portfolio and be sensitive to changes in portfolio concentrations—the extent to which one type of asset dominates the portfolio—requiring additional capital for greater concentrations. The internal model is required to be robust to adverse environments, and its ability to capture specific risks is to be validated through backtesting.
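The market risk side of this calculation can be sketched in a few lines using the parameters above: the larger of the previous day's VAR and the 60-day average VAR times a multiplication factor of 3 to 4, plus the specific risk charge, with the sum converted to asset equivalents by the 12.5 factor. The VAR series and charges below are hypothetical.

```python
# Sketch of the market risk capital charge and market risk-equivalent assets
# described above and in table I.2. All inputs are hypothetical.

def market_risk_equivalent_assets(daily_vars_60, previous_day_var,
                                  specific_risk_charge, multiplier=3.0):
    # General market risk charge: the larger of yesterday's VAR and the
    # 60-day average VAR times the multiplier (3 up to 4, per backtesting).
    average_var = sum(daily_vars_60) / len(daily_vars_60)
    general_charge = max(previous_day_var, multiplier * average_var)

    # The total market risk charge adds the specific risk component.
    total_charge = general_charge + specific_risk_charge

    # Convert the capital charge to risk-equivalent assets (12.5 = 1 / 0.08).
    return 12.5 * total_charge, total_charge

assets, charge = market_risk_equivalent_assets(
    daily_vars_60=[2_000_000] * 60,        # hypothetical 10-day, 99% VAR estimates
    previous_day_var=2_500_000,
    specific_risk_charge=500_000)
print(f"Market risk capital charge: ${charge:,.0f}")       # $6,500,000
print(f"Market risk-equivalent assets: ${assets:,.0f}")    # $81,250,000
```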
Institutions with models that are not validated with backtesting are to continue to use specific risk add-ons as defined in the regulations. The risk management system of any bank subject to the market risk requirement must meet the following minimum qualitative requirements: the bank is to have (1) a risk control unit that reports directly to senior management and is independent from business trading units, (2) an internal risk management model that is integrated into daily policies and procedures, (3) appropriate stress tests and backtests of the model, and (4) independent annual reviews of its risk measurement and risk management systems. FDICIA was enacted to make fundamental changes in federal oversight of depository institutions in response to the thrift and banking crisis of the 1980s, which resulted in large federal deposit insurance fund losses. Section 305 of FDICIA required, among other things, that bank regulators revise their risk-based capital standards to include concentration of credit risk, risks of nontraditional activities, and interest rate risk. Inadequate management of these risks had created problems for the bank and thrift deposit insurance funds. In response, on December 13, 1994, bank regulators amended risk-based capital standards for depository institutions to "ensure that those standards take adequate account of concentration of credit risk and the risks of nontraditional activities," which include derivatives activities. Regulators are to consider the risks from nontraditional activities and management's ability to monitor and control these risks when assessing the adequacy of a bank's capital. Similarly, institutions identified through the examination process as having exposure to concentration of credit risk or as not adequately managing their concentration of risk are required to hold capital above the regulatory minimums. Because no generally accepted approach exists for identifying and quantifying the magnitude of risk associated with concentrations of credit, bank regulators determined that including a formula-based calculation to quantify the related risk was not feasible. U.S. bank regulators addressed the interest rate risk portion of section 305 through a two-step process. Step one consisted of a final rule, issued on August 2, 1995, that amended the capital standards to specify that bank regulators will include in their evaluations of a bank's capital adequacy an assessment of the exposure to declines in the economic value of the bank's capital due to changes in interest rates. The final rule specifies that examiners will also consider the adequacy of the bank's internal interest rate risk management. Step one also included a proposed joint policy statement that was issued concurrently with the final rule. This joint policy statement described how bank regulators would measure and assess a bank's exposure to interest rate risk. Originally, bank regulators intended that step two would be the issuance of a proposed rule, based on the August 2, 1995, joint policy statement, that would have established an explicit minimum capital requirement for interest rate risk. Subsequently, bank regulators elected not to pursue a standardized measure and explicit capital charge for interest rate risk.
According to the bank regulators' June 26, 1996, joint policy statement on interest rate risk, the decision not to pursue an explicit measure reflects concerns about the burden, accuracy, and complexity of developing a standardized model and the realization that interest rate risk measurement techniques continue to evolve. Nonetheless, bank regulators said they will continue to place significant emphasis on the level of a bank's interest rate risk exposure and the quality of its risk-management process when they are evaluating its capital adequacy. The regulators concluded that interest rate risks were too difficult for many institutions to quantify and that concentration risk was too difficult to quantify in a manner that could be used in a risk-based capital calculation. Therefore, instead of developing a quantitative standard for each of these risks, the regulators decided that both risks need to be carefully monitored by examiners and that regulators could increase capital requirements for any institution on a case-by-case basis. FDICIA contains several provisions that were intended collectively to improve supervision of federally insured depository institutions. FDICIA's Prompt Regulatory Action provisions created two new sections in the Federal Deposit Insurance Act—sections 38 and 39—which mandate that regulators establish a two-part regulatory framework to improve safeguards for the deposit insurance fund. Section 38 creates a capital-based framework for bank and thrift oversight that is based on the placement of financial institutions into one of five capital categories. FDICIA requires that banks meet both a risk-based and a leverage requirement. Capital was made the centerpiece of the framework because it represents funds invested by an institution's owners, such as common and preferred stock, that can be used to absorb unexpected losses before the institution becomes insolvent. Thus, capital was seen as serving a vital role as a buffer between bank losses and the deposit insurance system. Although section 38 does not in any way limit regulators' ability to take additional supervisory action, it requires federal regulators to take specific actions against banks and thrifts that have capital levels below minimum standards. The specified regulatory actions become increasingly severe as an institution's capital drops to lower levels. By focusing on capital, which absorbs losses, and by requiring regulators to take actions when capital levels fall below predetermined thresholds, including requiring closure if capital levels become too low, FDICIA was meant to curb failures and to limit deposit insurance losses when regulators had to close an institution. Section 38 of FDICIA requires regulators to establish criteria for classifying depository institutions into the following five capital categories: well-capitalized, adequately capitalized, undercapitalized, significantly undercapitalized, and critically undercapitalized. The section does not place restrictions on institutions that meet or exceed the minimum capital standards—that is, those that are well-capitalized or adequately capitalized—other than prohibiting an institution from paying dividends or management fees that would drop it into the undercapitalized category. The regulators jointly developed the implementing regulations for section 38 and based the criteria for four of the five capital categories on the international risk-based capital calculation and the leverage capital ratio.
The fifth category—critically undercapitalized—is based on a tangible equity-to-total assets ratio. The four regulators specifically based the benchmarks for an adequately capitalized institution on the Basle Committee's risk-based capital requirement, which stipulates that an internationally active bank must have at least 8 percent total risk-based capital and 4 percent tier 1 risk-based capital. The benchmarks are also based on the U.S. leverage capital standard, which generally requires U.S. banks to have tier 1 capital equal to at least 4 percent of total assets. For the definition of a critically undercapitalized institution, the regulators adopted section 38's requirement of a tangible equity ratio of 2 percent or less. As shown in figure I.1, three capital ratios are used to determine if an institution is well-capitalized, adequately capitalized, undercapitalized, or significantly undercapitalized. A well-capitalized or adequately capitalized institution must meet or exceed all three capital ratios for its capital category. To be deemed undercapitalized or significantly undercapitalized, an institution need fall below only one of the ratios listed for its capital category. Although not shown in the figure, a fourth ratio—tangible equity—is used to categorize an institution as critically undercapitalized. Any institution that has a 2 percent or less tangible equity ratio is considered critically undercapitalized, regardless of its other capital ratios.
Figure I.1: Summary of Four Section 38 Capital Categories and Ratio Requirements
[Figure not reproduced: a matrix showing the total risk-based capital, tier 1 risk-based capital, and leverage capital ratio thresholds for each of the four categories.]
Note: The leverage ratio can be as low as 3 percent if the institution has a regulator-assigned composite rating of 1. Regulators are to assign a composite rating of 1 only to institutions considered to be sound in almost every respect of operations, condition, and performance. An institution cannot be considered to be well-capitalized if it is subject to a formal regulatory enforcement action that requires the institution to meet and maintain a specific capital level.
The Securities and Exchange Commission's (SEC) uniform net capital rule (15c3-1) and customer protection rule (15c3-3) form the foundation of the securities industry's financial responsibility framework. The net capital rule focuses on liquidity and is designed to protect securities customers, counterparties, and creditors by requiring that broker-dealers have sufficient liquid resources on hand at all times to satisfy claims promptly. Rule 15c3-3, the customer protection rule, complements Rule 15c3-1 and is designed to ensure that customer property (securities and funds) in the custody of broker-dealers is adequately safeguarded. By law, both of these rules apply to the activities of registered broker-dealers, but not to unregistered affiliates. SEC amended the net capital rule (Rule 15c3-1) in 1975 to establish uniform net capital standards for brokers and dealers registered with SEC under Section 15(b) of the Securities Exchange Act of 1934 (Exchange Act). With few exceptions, all broker-dealers registered with SEC must comply with this liquidity standard. The primary purpose of the rule is to ensure that registered broker-dealers maintain at all times sufficient liquid assets to (1) promptly satisfy their liabilities—the claims of customers, creditors, and other broker-dealers; and (2) provide a cushion of liquid assets in excess of liabilities to cover potential market, credit, and other risks if they should be required to liquidate.
The rule achieves its purpose by prescribing a liquidity test that requires a broker-dealer to maintain the greater of a specified minimum dollar amount or a specified percentage of net capital in relation to either aggregate indebtedness (generally, all liabilities of the broker-dealer) or customer-related receivables (money owed to the broker-dealer by customers) as computed under the reserve requirements of Rule 15c3-3. The net capital rule thus enhances investor and customer confidence in the financial integrity of broker-dealers and the securities market. The net capital rule applies only to the registered broker-dealer; it does not apply to the broker-dealer's holding company or unregulated subsidiaries or affiliates. To comply with SEC's net capital rule, broker-dealers must perform two computations: one determines the broker-dealer's net capital (liquid capital), and the other determines the broker-dealer's appropriate minimum net capital requirement (base capital requirement). Net capital is defined as U.S. Generally Accepted Accounting Principles (GAAP) equity, plus qualified subordinated liabilities and credits, less nonallowable assets, certain operational charges (e.g., fails to deliver), and prescribed percentages of the market value (otherwise known as haircuts) of the securities and commodities that constitute the broker-dealer's trading and investment positions. Figure II.1 (not reproduced here) depicts the relationship: net capital is the liquid capital available to meet requirements; the minimum net capital requirement is the greater of $250,000 or 6-2/3 percent of aggregate indebtedness under the basic method, or the greater of $250,000 or 2 percent of customer-related receivables (or 4 percent of customer funds required to be segregated under the CEA if the broker-dealer is also registered as an FCM) under the alternative method; and excess net capital is net capital above the requirement. The process of computing a broker-dealer's regulatory net capital is really a process of separating its liquid and illiquid assets. In computing net capital, under either the basic or the alternative method (discussed below), the broker-dealer must first determine its equity in accordance with GAAP. GAAP liabilities deducted from GAAP assets result in GAAP equity. GAAP requires that the broker-dealer mark to market all securities and commodities positions daily, thereby reflecting unrealized gains (which add to equity) and losses (which subtract from equity) at current market value and making it difficult to defer recognition of market losses beyond a day. Once GAAP equity is computed, a number of adjustments are made to reflect the estimated value of the broker-dealer if it had to be liquidated quickly. Liabilities that are properly subordinated to the claims of creditors, including customers, are added back to GAAP equity, as are certain deferred income tax liabilities and accrued liabilities. Assets considered not readily convertible into cash are deducted from GAAP equity. These include intangible assets (goodwill); fixed assets (furniture, fixtures, and buildings); prepaid items (rent and insurance); and the value of exchange memberships. The broker-dealer also deducts most unsecured receivables, including unsecured customer debits and bridge loans, and takes charges for delays in processing securities transactions beyond the normal settlement date. These collective additions and subtractions to GAAP equity result in an amount called tentative net capital.
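As a rough illustration of the adjustments just described, here is a minimal Python sketch. The account names and dollar amounts are hypothetical, and the haircuts applied after tentative net capital are collapsed into a single figure for brevity.

```python
def tentative_net_capital(gaap_assets, gaap_liabilities,
                          subordinated_debt, nonallowable_assets,
                          operational_charges):
    """GAAP equity, plus qualifying subordinated liabilities added back,
    less illiquid (nonallowable) assets and operational charges."""
    gaap_equity = gaap_assets - gaap_liabilities
    return (gaap_equity + subordinated_debt
            - nonallowable_assets - operational_charges)

# Hypothetical figures, in dollars.
tnc = tentative_net_capital(
    gaap_assets=900_000_000,
    gaap_liabilities=500_000_000,    # includes the subordinated debt below
    subordinated_debt=40_000_000,    # properly subordinated, so added back
    nonallowable_assets=25_000_000,  # goodwill, fixed assets, prepaid items
    operational_charges=5_000_000,   # e.g., aged fails to deliver
)
net_capital = tnc - 50_000_000       # less security-by-security haircuts
print(f"tentative net capital: {tnc:,}; net capital: {net_capital:,}")
```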
Tentative net capital is then reduced by certain percentage deductions, called haircuts, of the current market value of a broker-dealer’s securities and commodities positions and an undue concentration charge, which reflects the risk of a large, concentrated holding in one security, to arrive at the broker-dealer’s net capital. Then, the net capital base requirement (required net capital amount) is subtracted from the net capital amount to determine the amount of excess net capital held by the broker-dealer. A broker-dealer may compute its net capital requirement by one of two methods. The first method, called the basic or aggregate indebtedness method, requires that the net capital of a broker-dealer conducting a general securities business (i.e., a firm that clears securities transactions and carries customer accounts) be equal to the greater of $250,000 or 6-2/3 percent of its aggregate indebtedness. The 6-2/3 percent requirement says a broker-dealer must have at least $1 of net capital for every $15 of its indebtedness (i.e., a leverage constraint). In the broker-dealer’s first year of operation, its net capital must exceed 12.5 percent of its aggregate indebtedness. Most of the smaller broker-dealers typically use the basic method to compute their net capital requirements because of the nature of their business. Typically, smaller broker-dealers either do not hold customer or broker-dealer accounts and therefore need less than the $250,000 required for broker-dealers that carry customer accounts; or they want to be subject to the less stringent requirements of Rule 15c3-3. Under the second method, the so-called alternative method, the broker-dealer is required to have net capital equal to the greater of $250,000 or 2 percent of its customer-related receivables from the reserve calculation of Rule 15c3-3 or, if registered as a futures commission merchant (FCM), 4 percent of the customer funds required to be segregated pursuant to the Commodity Exchange Act (CEA) and the regulations thereunder (less the market value of commodity options purchased by option customers on or subject to the rules of a contract market, each such deduction not to exceed the amount of funds in the customer’s account). When a firm is registered both as a securities broker-dealer with SEC and an FCM with CFTC, known as being “dually-registered,” it must comply with both agencies’ regulations. However, a dually-registered firm is required to meet only the capital standard that would cause it to hold the most capital. SEC offers this method to broker-dealers as a voluntary alternative (with self-regulatory organization approval) to the basic net capital requirement. This method is based on the broker-dealers’ responsibilities to customers rather than aggregate indebtedness. Reversion to the basic method by the broker-dealer requires SEC’s approval. This option (most commonly used by large broker-dealers because it can result in a lower net capital requirement than under the basic method), in conjunction with Rule 15c3-3 (discussed below), is designed to ensure that sufficient liquid capital exists to return all property (assets—funds and securities) to customers, repay all creditors, and have a sufficient amount of capital remaining to pay the administrative costs of a liquidation if the broker-dealer fails. The broker-dealer’s ability to return customer property is addressed by Rule 15c3-3. 
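The two minimum requirement computations reduce to a few lines of Python. This sketch is illustrative: the dollar inputs are hypothetical, and the final max() reflects the greater-of treatment for dually-registered firms described above (the deduction for purchased commodity options is omitted).

```python
def basic_requirement(aggregate_indebtedness, first_year=False):
    """Basic method: greater of $250,000 or 6-2/3 percent of aggregate
    indebtedness (12.5 percent during the first year of operation)."""
    pct = 0.125 if first_year else 1 / 15   # 6-2/3 percent = 1/15
    return max(250_000, pct * aggregate_indebtedness)

def alternative_requirement(customer_debits, cea_segregated_funds=0.0):
    """Alternative method: greater of $250,000 or 2 percent of Rule
    15c3-3 reserve-formula debits; a dually-registered FCM must meet
    whichever standard requires the most capital, here 4 percent of
    CEA customer segregated funds."""
    securities_std = max(250_000, 0.02 * customer_debits)
    futures_std = 0.04 * cea_segregated_funds
    return max(securities_std, futures_std)

print(f"{basic_requirement(30_000_000):,.0f}")                   # 2,000,000
print(f"{alternative_requirement(40_000_000, 60_000_000):,.0f}") # 2,400,000
```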
The repayment of creditors and the payment of the broker-dealer's liquidation expenses are addressed by the 2 percent of customer-related receivables net capital requirement and by the deductions from net worth for illiquid assets and for risk in securities and commodities positions. See pages 148-151 for an example of a hypothetical simplified net capital computation under the alternative method. There are some differences between the two methods of computation. For example:
- The alternative method ties required net capital to customer-related assets (receivables) rather than to all liabilities, as the basic method does.
- The alternative method requires a broker-dealer to provide a bad debt reserve of 3 percent of its customer-related receivables, versus 1 percent under the basic method.
- Under the alternative method, stock record differences and suspense account items (prospective losses due to recordkeeping problems) must be included in the calculation of net capital after 7 business days, versus the 30 calendar days allowed under the basic method.
However, both methods allow a broker-dealer to increase its customer commitments only to the extent that its net capital supports the increase. Also, the type of securities business a broker-dealer conducts determines its minimum net capital requirement. For example, for broker-dealers engaging in all facets of a securities business (which involves clearing securities transactions and holding customer and broker-dealer accounts), the minimum dollar net capital requirement is $250,000; for broker-dealers that generally do not carry customer or broker-dealer accounts (introducing brokers), the minimum dollar amount is $5,000. See pages 152-153 for more detail on the SEC minimum net capital requirements for specialized types of business. In addition to the minimum base net capital requirements, SEC and the SROs (such as the National Association of Securities Dealers and the national exchanges) have established "early warning" levels of capital that exceed the broker-dealer's minimum capital requirement. This advance warning alerts SEC and the SROs that a broker-dealer is experiencing financial difficulty (i.e., that its net capital is dropping toward its minimum requirement) and allows time for the initiation of corrective action. Broker-dealers that breach the early warning levels must immediately notify SEC and their designated SRO and are thereby subject to closer regulatory scrutiny by SEC and the SRO. SROs may also impose additional operating restrictions or warning requirements on their members, which can be more stringent than SEC's. For example, the New York Stock Exchange's rule 326 restricts the business activities of member broker-dealers that are approaching financial or operational difficulties. When a broker-dealer's net capital drops below its minimum net capital requirement, SEC requires the broker-dealer to cease operations immediately and either obtain additional capital to come into capital compliance or liquidate its operations. The early warning notice levels are as follows (a sketch follows this list):
- Under the basic method, the broker-dealer's ratio of aggregate indebtedness to net capital is greater than 1,200 percent.
- Under the alternative method, the broker-dealer's net capital is less than 5 percent of customer-related receivables or, if an FCM, net capital is less than 6 percent of CEA customer segregated funds.
- The broker-dealer's net capital is less than 120 percent of its required minimum dollar net capital.
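A minimal Python sketch of these tests follows; the threshold constants come from the list above, while the input figures are hypothetical.

```python
def early_warning_flags(net_capital, min_dollar_requirement, method,
                        aggregate_indebtedness=0.0, customer_debits=0.0,
                        cea_segregated_funds=0.0, is_fcm=False):
    """Return the early warning tests a broker-dealer trips."""
    flags = []
    if method == "basic" and aggregate_indebtedness > 12.0 * net_capital:
        flags.append("AI exceeds 1,200 percent of net capital")
    if method == "alternative" and net_capital < 0.05 * customer_debits:
        flags.append("net capital below 5 percent of customer debits")
    if is_fcm and net_capital < 0.06 * cea_segregated_funds:
        flags.append("net capital below 6 percent of segregated funds")
    if net_capital < 1.20 * min_dollar_requirement:
        flags.append("net capital below 120 percent of minimum")
    return flags

print(early_warning_flags(net_capital=900_000, min_dollar_requirement=250_000,
                          method="alternative", customer_debits=38_800_000))
```

With these inputs the firm trips only the 5-percent test, signaling SEC and its SRO to step up scrutiny well before an actual capital deficiency occurs.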
Market participants indicated that prudent broker-dealers maintain capital levels far in excess of their required minimum net capital amounts. They told us that the largest broker-dealers typically hold $1 billion or more in excess of their required capital levels because, among other reasons, their counterparties require it as a condition of doing business with them. SEC has delegated to the SROs primary responsibility for enforcing broker-dealer compliance with the net capital and customer protection rules. SEC and the SROs have established a uniform system of reporting by broker-dealers, as well as inspection schedules and procedures, to routinely monitor broker-dealers' compliance with these rules. Registered broker-dealers, depending on their type of business, are required to file either monthly or quarterly reports with their designated SROs. FOCUS (an acronym for Financial and Operational Combined Uniform Single Report (SEC Form X-17A-5)), the report broker-dealers are required to file, contains confidential key financial and operational information about a broker-dealer's operations. If a broker-dealer has financial or operational difficulties, SEC or the SRO may require it to accelerate its report filings at any time, as specified in Rule 17a-5(a)(2)(iv). FOCUS is an integral part of the SROs' early warning system and provides the SRO with a substantial amount of information with which to detect existing or potential financial and operational problems. Additionally, Rule 17a-5 requires broker-dealers to file annual audited financial statements supplemented by an accountant's report setting forth any material inadequacies. SEC Rule 15c3-3, adopted in 1972, provides regulatory safeguards regarding the custody and use of customer securities and free credit balances (funds) held by broker-dealers. The rule, with limited exceptions, requires compliance by all registered broker-dealers. The purpose of Rule 15c3-3 is to protect customer funds and securities held by the broker-dealer. Rule 15c3-3 has two parts. The first part requires broker-dealers to promptly obtain and maintain physical possession or control of all fully paid and excess margin customer securities. The second part requires broker-dealers to segregate all customer cash, or money obtained from the use of customer property, that has not been used to finance the transactions of other customers. SEC's requirement that broker-dealers maintain possession or control of all customer fully paid and excess margin securities substantially limits broker-dealers' ability to use customer securities. Rule 15c3-3 requires broker-dealers to determine, each business day, the number of customer fully paid and excess margin securities in their possession or control and the number that are not. Should a broker-dealer determine that fewer securities are in its possession or control than are required (a deficit position in a security), Rule 15c3-3 requires the broker-dealer to initiate action, and it specifies time frames by which these securities must be placed in the broker-dealer's possession or control. For example, for securities that are subject to a bank loan, the broker-dealer must issue a recall instruction within 1 business day of a deficit position determination, and the securities must be returned to the broker-dealer's possession or control within 2 business days of the recall instruction.
Once a broker-dealer obtains possession or control of customer fully paid or excess margin securities, the broker-dealer must thereafter maintain possession or control of those securities. Rule 15c3-3 also specifies where a security must be located to be considered "in possession or control" of the broker-dealer. "Possession" of securities means the securities are physically located at the broker-dealer. "Control" of securities means the securities are located at one of the approved "control" locations, which include
- a clearing corporation or depository, free of any lien;
- a Special Omnibus Account in compliance with Federal Reserve System Regulation T, with instructions for segregation;
- a bona fide item of transfer of up to 40 calendar days (longer with written permission from the transfer agent);
- foreign banks or depositories approved by SEC;
- a bank (as defined by the Exchange Act) supervised by a federal banking authority, provided the securities are being held free of any lien;
- in transit between offices of the broker-dealer (for no more than 5 business days) or held by a majority-owned corporate subsidiary of the broker-dealer, if the broker-dealer assumes or guarantees all of the subsidiary's obligations or liabilities; or
- any other location designated by SEC (e.g., a mutual fund or its agent in the case of a registered open-ended investment company).
The second requirement of Rule 15c3-3 dictates how broker-dealers may use customer cash credit balances and cash obtained from the permitted uses of customer securities, including from the pledging of customer margin securities. Essentially, the customer protection rule restricts the use of customer cash or margin securities to activities directly related to financing customer securities purchases. That is, the broker-dealer may not use customer property as a source of working capital for its operations. The rule requires a broker-dealer to periodically (weekly for most broker-dealers) compute the amount of funds obtained from customers or through the use of customer securities (credits) and compare it to the total amount it has extended to finance customer transactions (debits). If credits exceed debits, the broker-dealer is required to have on deposit, in an account for the exclusive benefit of customers, at least an equal amount of cash or cash-equivalent securities (e.g., U.S. Treasuries). Consequently, the rule serves to protect any required deposit in a secured location from creditors of the broker-dealer in an insolvency. For most broker-dealers, the calculation must be made as of the close of business every Friday, and any required deposit must be made by the following Tuesday morning. If the required deposit is not made, the broker-dealer must immediately notify its SRO and SEC by telegram and promptly confirm such notice in writing. Such notice must be given even if a broker-dealer is presently in compliance with the reserve portion of the rule but discovers that it was previously out of compliance due to a computational error or otherwise. If a broker-dealer fails to make a deposit to the special reserve account when required to do so, it is a criminal violation, and the broker-dealer must cease doing business. If the debits exceed the credits, no deposit is required. The haircuts described below are from SEC Rule 15c3-1(c)(2)(vi)(A)-(M). The percentage amount of each haircut varies depending on the type of security, its maturity date, its quality, and its marketability.
Generally, the haircut is deducted from the market value of the greater of the long or short position in each security; however, in some cases haircuts apply to the lesser position as well. The haircuts are designed to discount the firm's own positions to account for adverse market movements and other risks faced by the firm, including liquidity and operational risks.
Government securities. This category covers securities issued (or guaranteed as to principal and interest) by the U.S. or Canadian government or an agency thereof. A haircut is applied to aggregate net long or short positions in 4 main categories (and 12 subcategories) of maturity dates ranging from less than 3 months to 25 years or more. The haircuts range from 0 percent for the short-term securities (0-3 months) to 6 percent for securities with later maturities. For the most part, government securities haircuts are also applied to quasi-agency debt securities, such as those issued by the Export-Import Bank, the Tennessee Valley Authority, and the Government National Mortgage Association (Ginnie Mae).
Municipal securities. These are securities that are direct obligations of, or guaranteed as to principal and interest by, a state or any political subdivision thereof, as well as agencies and other state and local instrumentalities. Haircut percentages are applied to the market value of the greater of the long or short position according to maturity date. For municipal securities issued with stated maturities of 2 years or less, haircuts range from 0 percent for securities maturing in under 30 days to 1 percent for those maturing in 456 days or more but less than 732 days. For longer-term securities with stated maturities of 2 years or longer, haircuts range from 3 percent to 7 percent.
Redeemable investment company securities. These funds are redeemable securities issued by investment companies whose assets consist of cash, securities, or money market instruments. The haircut ranges from 2 percent to 9 percent based upon the types of assets held by the fund.
Money market instruments. The percentage deductions for highly rated corporate short-term debt instruments (money market instruments) that (1) have a fixed rate of interest or (2) are sold at a discount, and that have maturity dates not exceeding 9 months, range from 0 percent to 0.5 percent in five maturity categories ranging from less than 30 days to less than 1 year. Bankers acceptances and certificates of deposit guaranteed by a bank and with maturity dates over 1 year have the same haircuts as U.S. government securities.
Nonconvertible debt securities. These securities are corporate bonds that cannot be exchanged for a specified amount of another security (e.g., equity securities) at a stated price. Highly rated bonds are assigned haircuts ranging from 2 percent to 9 percent for maturity dates ranging from less than 1 year to over 25 years. Certain positions in nonconvertible securities can be excluded from the foregoing haircuts if hedged with U.S. government securities. Also included in this category are foreign debt securities for which a ready market exists. For purposes of foreign securities, a ready market is deemed to exist if the securities (1) are issued as a general obligation of a sovereign government; (2) have a fixed maturity date; (3) are not traded flat or in default as to principal or interest; and (4) are highly rated (implicitly or explicitly) by at least two nationally recognized statistical rating organizations, such as Standard & Poor's and Moody's Investors Service. For positions hedged with U.S.
government securities, haircuts on the hedged positions range from 1.5 percent for maturities of less than 5 years to 3 percent for maturities of 15 years or more. For positions hedged with nonconvertible debt, haircuts on the hedged positions range from 1.75 percent for a maturity of less than 5 years to 3.5 percent for a maturity of 15 years or more. In either case, no haircut is taken on the hedging position (i.e., the U.S. government securities or the nonconvertible debt).
Convertible debt securities. The treatment of debt securities that can be converted into equities and that have fixed rates of interest and maturity dates is based on the securities' market value. If the market value is 100 percent or more of the principal amount, the haircut is the same as that applied to "all other securities": 15 percent of the market value of the greater of the long or short positions, plus 15 percent of the market value of the lesser position, but only to the extent that the lesser position exceeds 25 percent of the greater position. If the market value is less than the principal amount, the haircut is the same as for nonconvertible debt securities.
Preferred stock. This stock is cumulative, nonconvertible, highly rated, and ranked prior to all other classes of stock. The stock is not in arrears as to dividends and carries a haircut of 10 percent of the market value of the greater of the long or short position.
Contractual commitments. These commitments are subject to a haircut of 30 percent of the market value of the greater of the net long or net short position (minus unrealized profits), unless the class and issue of securities are listed on a national securities exchange or are designated as NASDAQ National Market System securities. If the securities are listed or so designated, the haircut is 15 percent (unless the security is an initial public offering, whereupon the percentage deduction reverts to 30 percent).
All other securities. These securities include corporate equities and certain foreign securities (other than the preferred stock discussed above). They are assigned haircuts of 15 percent of the market value of the greater of the long or short positions, plus 15 percent of the market value of the lesser position, but only to the extent that the lesser position exceeds 25 percent of the greater position (i.e., the first 25 percent of the lesser position incurs no haircut).
In cases where there are only one or two independent market makers submitting regular quotations in an interdealer quotation system for the securities, the haircut is 40 percent on both the long and short positions. In cases where there are three or more independent market makers submitting regular quotations, the haircut is the same as for the "all other securities" category above.
Undue concentration. This refers to a situation where a broker-dealer has a securities position whose market value is more than 10 percent of the broker-dealer's net capital before haircuts (i.e., its "tentative net capital"). For the charge to apply to equities, the market value of the position must exceed the greater of $10,000 or the market value of 500 shares. For debt securities, the provision applies to positions valued over $25,000. The charge is an extra percentage added to the usual haircut, and it is applied only to the portion of the position exceeding the 10-percent threshold. The additional haircut for concentrated positions in equity securities is 15 percent. For other securities, it is 50 percent of the normal haircut on the concentrated securities.
Securities with no ready market. These are securities for which there is no ready market, and they carry a 100-percent haircut.
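The "all other securities" haircut and the undue concentration charge lend themselves to a short sketch. The Python fragment below is illustrative: the position figures are hypothetical, it covers equities only, and it ignores the $10,000/500-share qualifying threshold for the concentration charge.

```python
def equity_haircut(long_mv, short_mv):
    """'All other securities': 15 percent of the greater side, plus 15
    percent of the lesser side to the extent it exceeds 25 percent of
    the greater side."""
    greater, lesser = max(long_mv, short_mv), min(long_mv, short_mv)
    return 0.15 * greater + 0.15 * max(0.0, lesser - 0.25 * greater)

def concentration_addon(position_mv, tentative_net_capital):
    """Undue concentration: an extra 15 percent (for equities) on the
    portion of a position above 10 percent of tentative net capital."""
    return 0.15 * max(0.0, position_mv - 0.10 * tentative_net_capital)

# Hypothetical: $12 million long and $4 million short in one issue,
# held by a firm with $100 million of tentative net capital.
base = equity_haircut(12_000_000, 4_000_000)          # 1,950,000
addon = concentration_addon(12_000_000, 100_000_000)  # 300,000
print(f"total haircut: {base + addon:,.0f}")
```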
Securities with no ready market have no independent market makers, have no quotations, and are not accepted as collateral for bank loans. The net capital rule also includes deductions for hedged positions, including futures and options contracts. Options to buy and sell securities and commodities are subject to haircuts because their market values change. See Appendix A to Rule 15c3-1 for options contracts and Appendix B to Rule 15c3-1 for the relevant haircuts for futures contracts. CFTC generally has jurisdiction over the regulation of futures and options markets, including their relevant haircuts. Because securities broker-dealers hold futures and options positions in their portfolios, SEC incorporates CFTC's haircuts for commodities futures and options into its net capital rule, and CFTC incorporates SEC's securities haircuts into its net capital rule (Rule 1.17). Appendix A to SEC Rule 15c3-1 prescribes haircut methodologies for listed and unlisted options. Recently, to better reflect the market risk in broker-dealers' options positions and to simplify the net capital rule's treatment of options for capital purposes, SEC adopted a risk-based methodology that uses theoretical option pricing models to calculate the required capital charges (haircuts) for listed options and related hedged positions. A simple, strategy-based methodology, similar to the old haircut methodology, remains for those firms that do not transact enough options business to warrant the expense of using option pricing models. This is the first time SEC has approved the use of modeling techniques for computing regulatory capital charges. The effective date of the new rule was September 1, 1997. Third-party models approved by a designated examining authority (i.e., a self-regulatory organization) perform the actual theoretical gain and loss calculations on broker-dealers' individual portfolios. The approved vendors provide, for a fee, a service by which broker-dealers may download the results generated by the option pricing models and then compute the required haircut for their individual portfolios; the greatest loss at any one valuation point is the haircut. At this time, the only approved vendor/model is the Options Clearing Corporation's Theoretical Intermarket Margining System (TIMS). The rule specifies underlying price movement assumptions designed to provide for the maintenance of capital sufficient to withstand potential adverse market moves. These assumptions were established to be consistent with the volatility assumptions already incorporated into the net capital rule. Specifically, the models calculate the theoretical gains and losses for a portfolio containing proprietary or market maker options positions at 10 equidistant valuation points, using specified increases and decreases in the price of the underlying instrument. The greatest loss at any valuation point becomes the haircut for the entire portfolio. A percentage of a position's gain at any one valuation point is allowed to offset another position's loss at the same valuation point. For example, options covering the same underlying instrument are afforded a 100-percent offset. Other offsets are permitted between qualified stock baskets and index options, futures, or futures options on the same underlying index. Broker-dealers are permitted to offset 95 percent of gains with losses (i.e., a 5 percent capital charge).
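The valuation-point mechanics just described can be sketched as follows. The scenario P&L figures and the single 95-percent offset factor are illustrative assumptions (in practice, offsets vary by relationship, with same-underlying options offsetting at 100 percent).

```python
def portfolio_haircut(pnl_by_point, offset_factor=0.95):
    """For each valuation point, offset a percentage of gains against
    losses; the haircut is the greatest net loss across the points."""
    worst = 0.0
    for pnls in pnl_by_point:  # one list of position P&Ls per point
        gains = sum(p for p in pnls if p > 0)
        losses = -sum(p for p in pnls if p < 0)
        worst = max(worst, losses - offset_factor * gains)
    return worst

# Hypothetical theoretical P&L for a hedged pair (long calls vs. short
# futures) at 10 equidistant moves in the underlying price.
long_calls = [-900, -700, -500, -300, -100, 100, 300, 500, 700, 900]
short_futs = [850, 680, 470, 280, 90, -95, -290, -480, -660, -870]
print(f"haircut: {portfolio_haircut(list(zip(long_calls, short_futs))):.2f}")
```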
In addition, broker-dealers must take certain minimum deductions to address decay and liquidity risk if the option pricing model calculates an insignificant or no capital charge for a portfolio. This minimum charge is generally one-quarter of a point, or $25 per option contract, unless the basic equity option contract covers more than 100 shares, in which case the charge is proportionately increased. SEC rules also require a deduction of 7.5 percent of the market value for each qualified stock basket of non-high-capitalization diversified indexes, and 5 percent of the market value for each qualified stock basket of high-capitalization diversified and narrow indexes, used to hedge options or futures positions that are subject to the minimum charge. SEC also permits firms with limited options business to use an alternative strategy-based haircut methodology that generally follows the haircut approach in the previous version of Appendix A to the net capital rule (see table II.1). This alternative was designed for firms whose options business would not make it cost-effective to use an option pricing model. A similar strategy-based methodology is also employed for broker-dealers that engage in buying and writing unlisted over-the-counter options (see table II.2).
Table II.1: Alternative Strategy-Based Haircut Methodology for Listed Options (adjustments to net worth and haircuts, listed options only; the position labels for each strategy are not reproduced here)
- Add the market value of the option.
- Add the time value of the short option position. Haircut: the appropriate percentage of the current market value of the securities underlying the option, less the out-of-the-money amount, although the reduction cannot serve to increase net capital. Minimum haircut: the greater of $250 per 100-share option contract or 50 percent of the aforementioned percentage.
- Haircut: 50 percent of the current market value of the option.
- Deduct the time value on the long call. Haircut: the applicable haircut on the short stock position, not to exceed the out-of-the-money amount on the call option. Minimum haircut: $25 for each 100-share option contract, but the minimum charge need not exceed the intrinsic value of the option.
- Deduct the time value on the long put. Haircut: the applicable haircut on the long stock position, not to exceed the out-of-the-money amount on the option. Minimum haircut: $25 for each 100-share option contract, but the minimum charge need not exceed the intrinsic value of the option.
- Add the time value of the short option. Haircut: the applicable haircut on the long stock position, reduced by the call's intrinsic value. Minimum charge: $25 per 100-share option contract.
- Spread (long put options vs. short put options, and long call options vs. short call options): add the net short market value or deduct the net long market value of the options. Call spread haircut: the excess of the exercise value of the long call over the short call; if the exercise value of the long call is less than or equal to that of the short call, no haircut is required. Put spread haircut: the excess of the exercise value of the short put over the long put; if the exercise value of the long put is greater than or equal to that of the short put, no haircut is required.
Notes: A listed option is any option traded on a registered national securities exchange or automated facility of a registered national securities association. Uncovered means an option that is written without any corresponding security or option position as protection in the seller's account.
A call is an option giving its holder (buyer) the right to demand the purchase of a certain number of shares of stock at a fixed price any time within a specified period. A put is an option giving its holder (seller) the right to demand acceptance of delivery of a certain number of shares of stock at a fixed price any time within a specified period. Short means the investor sells the option. Long means the investor buys the option. Hedge means any combination of long and/or short positions taken in securities, options, or commodities in which one position tends to reduce the risk of the other. A spread is the simultaneous purchase and sale of the same class of options at different prices.
Table II.2 sets out the corresponding haircuts for unlisted options (the position labels are not reproduced here):
- 15 percent, if equities (or the appropriate other percentage for other securities as set forth in the rule), of the current market value of the underlying security, less any out-of-the-money amount; minimum haircut of $250 per 100-share option contract.
- 15 percent, if equities (or the appropriate other percentage), of the current market value of the underlying security, less any in-the-money amount; net capital cannot be increased because of the haircut.
- 5 percent, if equities (or one-half the appropriate other percentage for other securities as set forth in the rule), of the current market value of the underlying security.
- 15 percent, if equities (or the appropriate other percentage for other securities as set forth in the rule), of the current market value of the underlying security, limited to the allowable asset value of the option.
As for securities, the net capital rule imposes a series of deductions from the market values of commodities. The amount of the deductions varies depending on whether the commodities are part of a hedged or spread position; whether the commodities stand alone as a long or short position; and what types of commodities accounts (inventory accounts, customer accounts) are at issue. These haircuts generally conform with similar provisions in CFTC's net capital rule and are dependent on the margin requirements set by the commodities boards of trade and clearing organizations. See table II.3.
Tables II.4, II.5, and II.6 provide information for calculating net capital. Table II.4, a trial balance, provides a starting point for our simplified hypothetical example of a broker-dealer's net capital calculation under SEC's alternative method. A trial balance is a list of all open accounts in the general ledger and their balances. A general ledger is a collection of all asset, liability, capital, revenue, and expense accounts. Accounts are the means by which differing effects on business elements (e.g., revenues) are categorized and collected. In table II.5, we converted the trial balance into a balance sheet of assets, liabilities, and capital. In table II.6, we compute the broker-dealer's net capital, including haircuts, using information contained in table II.5. The result of the computation shows that the broker-dealer is in capital compliance and has $352.6 million in excess net capital. [Tables II.4 through II.6 are not reproduced here; only fragments survive, including line items for furniture and fixtures (net) and mark-to-market adjustments on investment and trading positions.]
Computation of alternative net capital compliance. Base requirement: the broker-dealer's net capital must be the greater of $250,000 or 2 percent of aggregate customer debits (i.e., customer-related receivables) as computed per Rule 15c3-3's reserve formula. Aggregate customer debits equal (customer debits - (customer debits x 3%)).
In our example, aggregate customer debits equal $38,800,000 ($40,000,000 - ($40,000,000 x 3%)). The 3 percent is analogous to the broker-dealer's loss reserve for the loans made to customers. Our base requirement is $776,000 (2% x $38,800,000). Because $776,000 is more than the $250,000 minimum dollar requirement, the broker-dealer must hold at least $776,000 in net capital. The broker-dealer is in compliance with this requirement because it has $353,400,000 in net capital. Another requirement is that the broker-dealer's ratio of subordinated debt to total debt-equity may generally not exceed 70 percent for more than 90 days. The ratio is calculated by dividing the broker-dealer's subordinated debt by its total net worth ($40,000,000/$479,000,000). With a ratio of only 8.35 percent, the broker-dealer meets this requirement.
The minimum net capital requirements for specialized types of broker-dealers are summarized below. Unless otherwise noted, the requirements shown are under the basic (or aggregate indebtedness) method, with AI denoting aggregate indebtedness.
- Firms that carry customer accounts and hold customer funds or securities: greater of $250,000 or 6-2/3% of AI, or, under the alternative method, greater of $250,000 or 2% of Rule 15c3-3 reserve formula debits.
- Firms that carry customer accounts, receive but do not hold customer funds or securities, and operate under the paragraph (k)(2)(i) exemption of Rule 15c3-3: greater of $100,000 or 6-2/3% of AI.
- Firms that introduce accounts on a fully disclosed basis to another broker or dealer and do not receive funds or securities: greater of $5,000 or 6-2/3% of AI.
- Firms that introduce accounts on a fully disclosed basis to another broker or dealer and receive, but do not hold, customer or other broker-dealer securities and do not receive funds: greater of $50,000 or 6-2/3% of AI.
- Brokers or dealers that trade solely for their own accounts, endorse or write options, or effect more than 10 transactions for their investment account in any 1 calendar year: greater of $100,000 or 6-2/3% of AI.
- Brokers or dealers transacting a business in redeemable shares of registered investment companies and certain other share accounts: greater of $100,000 or 6-2/3% of AI, or $2,500 per security for securities with a market value greater than $5 per share and $1,000 per security for securities with a market value of $5 or less, with a maximum requirement of $1 million.
- Firms that deal only in Direct Participation Programs (i.e., real estate syndications): greater of $5,000 or 6-2/3% of AI.
- Firms that do not take customer orders, hold customer funds or securities, or execute customer trades, because of the nature of their activities (e.g., mergers and acquisitions): greater of $5,000 or 6-2/3% of AI.
- Brokers or dealers registered with CFTC: greater of $250,000 or 4% of customer funds required to be segregated pursuant to the CEA and regulations thereunder.
- Any firm may elect the alternative method; however, the firm will then be subject to the $250,000 minimum: greater of $250,000 or 2% of Rule 15c3-3 reserve formula debits. A broker or dealer electing this method must notify its examining authority in writing and may not thereafter revert to the aggregate indebtedness method unless approved by SEC.
The bond holdings of insurers are split into seven different risk classifications, or categories, based on bond quality. Class 1 bonds are those of the highest quality, while Class 6 bonds are those that are in or near default. The seventh bond classification is for U.S. government securities. Each bond classification has a different risk factor by which bond holdings in that category are multiplied.
The risk-based capital requirement for a U.S. government security is zero because there is no default risk for those bonds. The risk factors for other bonds range from 0.003 ($3 per $1,000 of value) for Class 1 bonds to 0.300 ($300 per $1,000 of value) for high-risk bonds in Class 6. As the risk gets higher, the risk-based capital requirement increases. In addition, there are other statutory limitations on the amount of junk bonds that insurers are permitted to carry on their books. There is also an adjustment, called the bond size factor, that increases the nominal risk factors for insurers that have less diversification in their bond portfolios, after excluding U.S. government issues and certain U.S. agency issues. For insurers with relatively few different issuers (that is, little diversification), the bond size factor increases the risk-based capital factor by 2.5 times. Only a handful of insurers, those with at least 1,300 issuers in their bond portfolios, can use the nominal factors. The risk-based capital formula's treatment of mortgages differs by the type of mortgage and the mortgage's status. Mortgages are generally broken down into three main categories—farm, residential, and commercial. These categories are also further subdivided as to whether or not the mortgage is insured or guaranteed. The risk-based capital factors also differ for current mortgages, those 90 days overdue, and those in the process of foreclosure. There is also a company-specific experience adjustment to the risk-based capital factors for farm and commercial mortgages, based on the experience of the insurer relative to the industry as a whole. Beginning in 1997, the risk-based capital calculation for troubled mortgages is made on a mortgage-by-mortgage basis in order to recognize the extent to which the statement value of each troubled mortgage has already been marked to market or otherwise written down. In contrast to banks, insurance companies are permitted to hold stocks as investments. Experience data to develop preferred stock factors are not readily available; however, it is believed that preferred stocks are somewhat more likely to default than bonds and that the loss on default would be somewhat higher than that experienced on bonds. The formula factors are equal to the bond factors plus 2 percent (but not more than 30 percent). This is consistent with the approach adopted for preferred stock factors for AVR purposes. The factor for unaffiliated common stock is based on studies conducted at two large life insurance companies. Both studies indicated that a 30-percent factor is needed to provide capital to cover approximately 95 percent of the greatest losses in common stock value over a 2-year period. This factor assumes capital losses are unrealized and not subject to favorable tax treatment at the time the loss in market value occurs. Two other classes of common stock receive different treatment. Nongovernment money market mutual funds are more like cash than common stock; therefore, the factor used is 0.3 percent, the same factor used for cash. Federal Home Loan Bank stock has characteristics more like a fixed-income instrument than common stock; a 2.3-percent factor was chosen. Separate accounts are investment pools held separately from all other assets of the insurer. The primary purpose of separate accounts is to allow the insurer to make investments exempt from the usual investment restrictions imposed by state law.
Separate accounts are authorized by states to permit insurers to offer customers investment strategies that would not otherwise conform to insurance regulations. Because of the nature of separate accounts, losses cannot exceed the funds held in the separate account, and the insurer's general accounts are thus insulated from those losses. The customer, rather than the insurer, is responsible for all investment gains and losses. Separate accounts are maintained primarily for pension funds and variable life and annuity products. Although separate accounts represent a large segment of the aggregate assets and liabilities of the life insurance industry, they carry considerably less of a risk-based capital requirement than other investment assets used to fund general account obligations. Life insurance risk-based capital makes a distinction between company-occupied real estate, real estate acquired by foreclosure, and investment real estate. Furthermore, real estate may be owned directly, in which case it is reported as "real estate," or it may be owned through a partnership. Partnerships and joint ventures are referred to as "Schedule BA" assets and are discussed separately. Like mortgage risk, the real estate risk for directly owned real estate is calculated separately for each property. There is a charge for the statement value of the property as well as a charge for the amount of encumbrances. Companies that have developed their own risk-based capital factors have used factors ranging from 5 percent to 20 percent. One study indicated that real estate volatility is about 60 percent of that of common stock, suggesting a factor in the range of 18 percent. Assuming some tax effect for losses, a factor of 10 percent was chosen. Foreclosed real estate is deemed to carry a somewhat higher risk and is assigned a 15-percent factor. The foreclosed real estate factor is lower than the factor for mortgages in foreclosure (20 percent) because mortgages in foreclosure have already been written down when they are moved to the foreclosed real estate category; because a surplus reduction has already been taken, the factor is lower. Schedule BA on the life insurers' regulatory financial report (known as the Annual Statement) includes those long-term assets that, because of their peculiar nature, are not included elsewhere on the report. These include assets owned by the insurer through partnership arrangements as well as other unusual assets. In recognition of the diverse nature of Schedule BA assets, the risk-based capital is calculated by assigning different risk factors to the different types of assets. Assets with the underlying characteristics of bonds and preferred stocks rated by the NAIC Securities Valuation Office have different factors according to the Office's assigned classification. Unrated fixed-income securities are treated the same as other Schedule BA assets and assessed a 30-percent charge. Rated surplus and capital notes have the same factors applied as Schedule BA assets with the characteristics of preferred stock. Schedule BA real estate has a 15-percent factor because of the additional risks inherent in owning real estate through a partnership. The factors used for Schedule BA mortgages are the same as those for commercial mortgages. Where it is not possible to determine the risk-based capital classification of an asset reported on Schedule BA, a 30-percent factor is applied.
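The factor-times-statement-value pattern running through these asset classes reduces to a small lookup. The following Python sketch is illustrative; the factor table abbreviates the classes discussed above, and the bond size multiplier is supplied as an input because the formula behind it is not detailed here.

```python
# Illustrative C-1 (asset risk) factors drawn from the discussion above.
C1_FACTORS = {
    "bond_class_1": 0.003, "bond_class_6": 0.300, "us_government": 0.0,
    "unaffiliated_common_stock": 0.30, "money_market_fund": 0.003,
    "fhlb_stock": 0.023, "investment_real_estate": 0.10,
    "foreclosed_real_estate": 0.15, "schedule_ba_other": 0.30,
}

def c1_charge(holdings, bond_size_multiplier=1.0):
    """Sum factor x statement value over holdings; the bond size
    multiplier (up to 2.5x for undiversified portfolios) applies to
    the bond classes only."""
    total = 0.0
    for asset_class, statement_value in holdings:
        factor = C1_FACTORS[asset_class]
        if asset_class.startswith("bond_"):
            factor *= bond_size_multiplier
        total += factor * statement_value
    return total

holdings = [("bond_class_1", 50_000_000), ("bond_class_6", 2_000_000),
            ("unaffiliated_common_stock", 10_000_000)]
print(f"C-1 charge: {c1_charge(holdings, bond_size_multiplier=1.5):,.0f}")
```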
The purpose of the concentration factor is to reflect the additional risk of high concentrations in single exposures (represented by an individual issuer of a security, a holder of a mortgage, etc.). The concentration factor doubles the risk-based capital factor (up to a maximum of 30 percent) of the 10 largest asset exposures, excluding various low-risk categories and categories that already have a 30-percent factor. Because the risk-based capital of the assets included in the concentration factor has already been counted once in the basic formula, this factor serves only to add the additional risk-based capital required. The calculation is completed on a consolidated basis; however, the concentration factor is reduced by amounts already included in the concentration factors of subsidiaries to avoid double counting. The factor for cash is 0.3 percent. It is recognized that there is a small risk related to the possible insolvency of the bank where cash deposits are held; the 0.3 percent, equivalent to a class 1 bond, reflects the short-term nature of this risk. The short-term investments to be included here are those that are not reflected elsewhere in the formula. Commercial paper, negotiable certificates of deposit, repurchase agreements, collateralized mortgage obligations, mortgage participation certificates, interest-only and principal-only certificates, and equipment trust certificates should be included in the appropriate bond classifications (class 1 through class 6) and should be excluded from short-term investments. The 0.3-percent factor is equal to the factor for cash. For derivative instruments, the statement value exposure net of collateral (the balance sheet exposure) is included under miscellaneous C-1 risks. Because collars, swaps, forwards, and futures can have statement values that are positive, zero, or negative, the potential exposure to default by the counterparty for these instruments cannot be measured by the statement values and must be calculated. The factors applied to a derivative's off-balance-sheet exposure are the same as those applied to bonds and reflect the insurer's exposure to loss upon default of the counterparty. Insurance companies often lay off part of their risk by purchasing reinsurance. There is a risk associated with the recoverability of amounts from reinsurers. The risk is deemed comparable to that represented by bonds rated as risk classes 1 and 2 and is assigned a factor of 0.5 percent. Some types of reinsurance, such as reinsurance with nonauthorized companies, reinsurance among affiliated companies, reinsurance with funds withheld, and reinsurance involving policy loans, are subject to a separate surplus charge. To avoid an overstatement of risk-based capital, the formula gives a 0.5-percent credit for these types of reinsurance. Life insurers establish reserves to cover expected claims costs from their outstanding insurance-in-force. The life insurance risk-based capital factors chosen represent the surplus needed to provide for excess claims over expected claims, both from random fluctuations and from inaccurate pricing, at future levels of claims. For a large number of trials, each insured either lives or dies according to a "roll of the dice" reflecting the probability of death. The present value of the claims generated by this process, less expected claims, is the amount of surplus needed under that trial. The factors chosen under the formula produce a level of surplus at least as large as that needed in 95 percent of the trials.
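This trial process can be sketched as a small Monte Carlo experiment in Python. The mortality rate, claim amount, portfolio sizes, and trial count below are hypothetical, and discounting to present value is omitted for brevity.

```python
import random

def surplus_need(n_lives, q=0.002, claim=100_000, trials=2_000, seed=7):
    """Roll the dice for each insured; the surplus need is the
    95th-percentile excess of actual over expected claims."""
    rng = random.Random(seed)
    expected = n_lives * q * claim
    excesses = sorted(
        max(0.0, sum(1 for _ in range(n_lives) if rng.random() < q)
            * claim - expected)
        for _ in range(trials))
    return excesses[int(0.95 * trials)]

# Per-life surplus needs shrink as the portfolio grows, consistent
# with the law of large numbers.
for n in (1_000, 5_000):
    print(n, surplus_need(n) / n)
```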
The model was developed for portfolios of 10,000, 100,000, and 1 million lives, and surplus needs were found to decrease with larger portfolios, consistent with the law of large numbers. One set of factors is applied to individual and industrial insurance-in-force and another set to group and credit insurance; the factors are tiered by amount of insurance-in-force (in dollars). [The accompanying table of factors is not reproduced here.]

Premium stabilization reserves are funds held by the company in order to stabilize the premium a group policyholder must pay from year to year. Usually, experience rating refunds are accumulated in such a reserve so that they can be drawn upon in the event of poor future experience. This reduces the insurer's risk. For group life and health insurance, 50 percent of premium stabilization reserves held in the Annual Statement as a liability (not as appropriated surplus) are permitted as an offset up to the amount of risk-based capital. Risk-based capital factors for health insurance are applied to medical and disability income premiums and claim reserves, with an offset for health premium stabilization reserves.

The purpose of the life insurance risk-based capital formula is to estimate the risk-based capital levels required to manage losses that can result from a series of catastrophic financial events. These are the C-0 through C-4 calculations described above. However, chances are remote that all such losses will occur simultaneously. The covariance adjustment recognizes that the combined effect of the C-1, C-2, and C-3 risks is not equal to their sum but is given by the square-root calculation described below. It is statistically assumed that the C-1 and C-3 risks are correlated and that the C-2 risk is independent of both. This assumption provides what NAIC considers a reasonable approximation of the capital requirements needed at any particular level of risk. ACLRBC is 50 percent of the sum of (1) the C-0 and C-4 risk-based capital and (2) the square root of the sum of the squared total of the C-1 and C-3 risk-based capital and the squared C-2 risk-based capital; the calculation is shown symbolically following this discussion.

In order to calculate their TAC for risk-based capital purposes, insurers are allowed to make several adjustments to their reported total capital. These include adding to total capital their AVR, part of the provision for future dividends, and an adjustment to avoid double counting of some subsidiary amounts. Under the Life Risk-Based Capital Model Act, a comparison of the ACLRBC with the level of TAC determines the level of regulatory attention, if any, applicable to the company. Companies whose TAC is between 2.0 and 2.5 times the ACLRBC are subject to a trend test. The trend test takes the greater of two measures of deterioration in the margin: the decrease from the prior year to the current year and the average decrease over the past 3 years. It assumes that the decrease could occur again in the coming year; any company whose trended TAC would fall below 1.9 times ACLRBC would trigger Company Action Level risk-based capital regulatory action.

The sensitivity tests provide a "what if" scenario to the calculation of risk-based capital by recalculating ACLRBC or TAC using a specified alternative for a particular factor in the formula. The amounts reported in the sensitivity tests are an actual recalculation of ACLRBC and TAC. If a company does not have any of the assets or liabilities specified by the sensitivity tests (including affiliates, noncontrolled assets, guarantees for affiliates, contingent liabilities, long-term leases, and interest swaps), the amounts reported after the tests are the same ACLRBC and TAC as originally calculated.
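In symbols, with C_0 through C_4 denoting the C-0 through C-4 risk-based capital amounts, the covariance adjustment described above can be restated as the following formula (a direct transcription of the prose definition, not a citation of the NAIC instructions):

\text{ACLRBC} = 0.5\left[\, C_0 + C_4 + \sqrt{(C_1 + C_3)^2 + C_2^{\,2}} \,\right]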
ED&F Man International, Inc.
Goldman, Sachs & Co.
Lehman Brothers, Inc.
Merrill Lynch & Co., Inc.
Morgan Stanley & Co., Inc.
Salomon Brothers, Inc. | GAO reviewed: (1) the regulatory views on the purpose of capital and current regulatory requirements; (2) the approaches of some large financial firms to risk measurement and capital allocation; and (3) issues in capital regulation and initiatives being considered for changes to regulatory capital requirements. GAO noted that: (1) capital requirements differ by financial regulator due to differences in the regulators' purpose; (2) historically, regulators based capital regulation on the traditional risks in each financial sector; (3) within the past decade, both the banking and life insurance sectors adopted new capital requirements that are specifically designed to be more sensitive to exposure to multiple risks; (4) securities broker-dealers and futures commission merchants continue to operate under net capital rules that the Securities and Exchange Commission and the Commodity Futures Trading Commission use in order to protect customers and other market participants in the financial markets from losses due to firm failures, not from bad investments; (5) unlike regulators, firms analyze their use of capital to help ensure that they can achieve their business objectives; (6) although many large firms GAO spoke with use the results of their risk measurements to set limits on trading activities, some go further and use them to allocate capital within the firm; (7) these techniques have limitations; however, firms and regulators believe they significantly improve firms' ability to measure and manage their risks; (8) the three principal issues pertaining to regulatory capital requirements that are important when considering possible future changes include: (a) the competitive implications for firms stemming from differences in capital rules of different financial regulators; (b) whether regulatory capital requirements create incentives to manage risks inappropriately; and (c) the administration of regulatory capital rules; and (9) regulatory agencies and self-regulatory organizations are exploring or have proposed a number of initiatives for modifying or changing current capital requirements in the banking, securities, futures, and life insurance sectors.
DOE has numerous contractor-operated facilities that carry out the programs and missions of the Department. Much of the work conducted at these facilities is unclassified and nonsensitive and can be, and is, openly discussed and shared with researchers and others throughout the world. However, DOE's facilities also conduct some of the nation's most sensitive activities, including designing, producing, and maintaining the nation's nuclear weapons; conducting efforts for other military or national security applications; and performing research and development in advanced technologies for potential defense and commercial applications.

Security concerns and problems have existed since these facilities were created. The Los Alamos National Laboratory in New Mexico developed the first nuclear weapons during the Manhattan Project in the 1940s; however, it was also the target of espionage during that decade, as the then Soviet Union obtained key nuclear weapons information from the laboratory. In the 1960s, significant amounts of highly enriched uranium—a key nuclear weapons material—were discovered to be missing from a private facility under the jurisdiction of the Atomic Energy Commission, a predecessor to DOE. It is widely believed that in the early 1980s, China obtained information on neutron bomb design from the Lawrence Livermore National Laboratory in California. Most recently, two incidents have occurred at Los Alamos in which laboratory employees are believed to have provided classified information to China. In one situation, a laboratory employee admitted to providing China with classified information on a technology used to conduct nuclear weapons development and testing. In the other situation, which occurred earlier this year, DOE disclosed that it had evidence indicating that China had obtained information on this nation's most advanced nuclear warhead and had used that information to develop its own smaller, more deliverable nuclear weapons. A laboratory employee has been fired as a result of recent investigations into how this information was obtained by China; however, no charges have yet been filed.

While the recent incidents at Los Alamos have been receiving national attention, they are only the most recent examples of problems with DOE's security systems. For nearly 20 years, we have issued numerous reports on a wide range of DOE security programs designed to protect nuclear weapons-related and other sensitive information and material. These reports have included nearly 50 recommendations for improving programs for controlling foreign visitor access, protecting classified and sensitive information, maintaining physical security over facilities and property, ensuring the trustworthiness of employees, and accounting for nuclear materials. While DOE has often agreed to take corrective actions, we have found that implementation has often not been successful and that problems recur over the years. I would like to highlight some of the security problems identified in these reports.

Thousands of foreign nationals visit DOE facilities each year, including the three laboratories—the Lawrence Livermore National Laboratory in California and the Los Alamos National Laboratory and the Sandia National Laboratories in New Mexico—that are responsible for designing and maintaining the nation's nuclear weapons. These visits occur to stimulate the exchange of ideas, promote cooperation, and enhance research efforts in unclassified areas and subjects.
However, allowing foreign nationals into the weapons laboratories is not without risk, as it gives foreign nationals direct and possibly long-term access to employees with knowledge of nuclear weapons and other sensitive information. Consequently, DOE has had procedures to control these visits as well as other lines of defense—such as access controls and counterintelligence programs—to protect its information and technology from loss to foreign visitors.

In 1988, we reported that significant weaknesses existed in DOE's controls over foreign visitors to these laboratories. First, required background checks were performed for fewer than 10 percent of the visitors from sensitive countries prior to their visit. As a result, visitors with questionable backgrounds—including connections with foreign intelligence services—obtained access to the laboratories without DOE's knowledge. Second, DOE and the laboratories were not always aware of visits that involved topics, such as isotope separation and inertial confinement fusion, that DOE considers sensitive because they have the potential to enhance nuclear weapons capability, lead to proliferation, or reveal other advanced technologies. Third, internal controls over the foreign visitor program were ineffective. Visits were occurring without authorized approvals, security plans detailing how the visits would be controlled were not prepared, and DOE was not notified of visits. Because DOE was not notified of the visits, it was unaware of the extent of foreign visitors to the laboratories.

At that time, DOE acknowledged problems with its controls over foreign visitors and subsequently set out to resolve them. Among other things, DOE revised its foreign visitor controls, expanded background check requirements, established an Office of Counterintelligence at DOE headquarters, and created an integrated computer network for obtaining and disseminating data on foreign visitors. At the same time, however, the number of foreign visitors continued to grow. From the late 1980s to the mid-1990s, the annual number of foreign visitors increased from about 3,800 to 6,400 per year—nearly 70 percent—and the number from sensitive countries increased from about 500 to over 1,800 per year—more than 250 percent.

We again examined the controls over foreign visitors and reported in 1997 that most of the problems with these controls persisted. We found that revised procedures for obtaining background checks had not been effectively implemented and that, at two facilities, background checks were being conducted on only 5 percent of visitors from all sensitive countries and on less than 2 percent of the visitors from China. We also found that visits that may involve sensitive topics were still occurring without DOE's knowledge. Moreover, other lines of defense were not working effectively. Security controls over foreign visitors did not preclude them from obtaining access to sensitive information. For example, Los Alamos allowed unescorted after-hours access to controlled areas to preserve what one official described as an open "campus atmosphere." Evaluations of the controls in areas most frequented by foreign visitors had not been conducted. Additionally, we found that the counterintelligence programs for mitigating the threat posed by foreign visitors needed improvements.
These programs lacked comprehensive threat assessments, which are needed to identify the threats against DOE and the facilities most at risk, and lacked performance measures to gauge the effectiveness of these programs in neutralizing or deterring foreign espionage efforts. Without these tools, the counterintelligence programs lacked key data on the threats to the facilities and on how well the facilities were protected against those threats.

Information security involves protecting classified or sensitive information from inappropriate disclosure. We have found problems with information security at the nuclear weapons laboratories that could involve the loss of classified information or assist foreign nuclear weapons capabilities. For example, in February 1991, we reported that the Lawrence Livermore National Laboratory was unable to locate or determine the disposition of over 12,000 secret documents. These documents covered a wide range of topics, including nuclear weapons design. The laboratory conducted a search and located about 2,000 of these documents but did not assess whether the documents still missing had compromised national security. We also found that DOE had not provided adequate oversight of the laboratory's classified document control program. Although the laboratory's classified document controls were evaluated annually, the evaluations were limited in scope and failed to identify that documents were missing.

In 1987 and 1989, we reported that DOE had inadequate controls over unclassified but sensitive information that could assist foreign nuclear weapons programs. Specifically, we found that countries—such as China, India, Iraq, and Pakistan—that pose a proliferation or security risk routinely obtained reprocessing and nuclear weapon-related information from DOE. We also found that DOE had transferred to other countries information appearing to meet the definition of sensitive nuclear technology, which requires export controls. Further, we found that DOE placed no restrictions on foreign nationals' involvement in reprocessing research at colleges and universities.

In the 1990s, we continued to raise concerns. In 1991, we reported that DOE and its weapons laboratories were not complying with regulations designed to control the risk of weapons technology or material being transferred to foreign countries having ownership, control, or influence over U.S. companies performing classified work for DOE. We estimated that about 98 percent of the classified contracts awarded at the weapons laboratories during a 30-month period that were subject to such regulations did not fully comply with those regulations. As recently as February of this year, we reported on information security problems in DOE's Initiatives for Proliferation Prevention with Russia. Under these initiatives, DOE may have provided defense-related information to Russian weapons scientists—an activity that could negatively affect U.S. national security. We reviewed 79 projects funded by DOE under this program and found nine to have dual-use implications—that is, both military and civilian applications—such as improving aircraft protective coating materials, enhancing communication capabilities among Russia's closed nuclear cities, and improving metals that could be used in military aircraft engines. We note that the Department of Commerce has also recently raised concerns about nuclear-related exports to Russia from at least one DOE facility.
Commerce notified Los Alamos in January 1999 that equipment the laboratory sent to nuclear facilities in Russia required export licenses and that the laboratory may face civil charges for not obtaining the required licenses.

Physical security controls involve the protection, primarily through security personnel and fences, of facilities and property. In 1990, we reported weaknesses among security personnel: some could not appropriately handcuff, search, or arrest intruders or shoot accurately. For example, we found that at the Los Alamos National Laboratory, 78 percent of the security personnel failed a test of required skills. Of the 54-member guard force, 42 failed to demonstrate adequate skill in using weapons, using a baton, or apprehending a person threatening the facility's security. Some failed more than one skill test. In 1991, we again reported that security personnel were unable to demonstrate basic skills, such as the apprehension and arrest of individuals who could represent a security threat. We also found that many of Los Alamos' training records for security personnel were missing, incomplete, undated, changed, or unsigned. Without accurate and complete training records, DOE could not demonstrate that security personnel were properly trained to protect the facility.

The problems we identified involved not only keeping threats out of the facilities but also keeping property in. For example, we reported in 1990 that the Lawrence Livermore National Laboratory could not locate about 16 percent of its inventory of government equipment, including video and photographic equipment as well as computers and computer-related equipment. When we returned in 1991 to revisit this problem, we found that only about 3 percent of the missing equipment had been found; moreover, the laboratory's accountability controls over the equipment were weaker than in the prior year. We also found that DOE's oversight of the situation was inadequate and that its property control policies were incomplete. We found similar problems at DOE's Rocky Flats Plant in 1994, where property worth millions of dollars, such as forklifts and a semi-trailer, was missing. Eventually, property worth almost $21 million was written off.

Other problems in controlling sensitive equipment, such as the disposal of usable nuclear-related equipment that could pose a proliferation risk, have also been identified. For example, in 1993, DOE sold 57 different components of nuclear fuel reprocessing equipment and associated design documents, including blueprints, to an Idaho salvage dealer. DOE subsequently determined that the equipment and documents could be useful to a group or country with nuclear material to process and that the equipment could significantly shorten the time necessary to develop and implement a nuclear materials reprocessing operation. This incident resulted from a lack of vigilance at all levels for the potential impacts of releasing sensitive equipment and information to the public, and DOE conceded that system breakdowns of this type could have severe consequences in other similar situations where the equipment and documents may be extremely sensitive.

DOE's personnel security clearance program is intended to provide assurance that personnel with access to classified material and information are trustworthy. We have found numerous problems in this area, dating back to the early 1980s.
In 1987, and again in 1988, we found that DOE headquarters and some field offices were taking too long to conduct security investigations. We found that the delays in investigations lowered productivity, increased costs, and were a security concern. We also found that DOE's security clearance database was inaccurate. Clearance files at two field offices contained about 4,600 clearances that should have been terminated, and over 600 employees at the Los Alamos laboratory had clearance badges but no active clearances listed in the files. In other cases, the files contained inaccurate data, such as incorrect clearance levels and names. We followed DOE's efforts to remedy these problems, and by 1993, DOE had greatly reduced its backlog of investigations. However, some DOE contractors were not verifying information on prospective employees, such as education, personal references, previous employment, and credit and law enforcement records.

Material accountability relates to the protection of special nuclear material such as enriched uranium and plutonium. In 1991, we found that DOE facilities were not properly measuring, storing, and verifying quantities of nuclear materials. Without proper accounting for nuclear materials, missing quantities are more difficult to detect. We also found that DOE facilities were not complying with a rule requiring that two people always be present when nuclear material is being accessed or used. This rule is designed to preclude a single individual from having access to, and diverting, nuclear material without detection.

In 1994 and 1995, we reported on DOE's efforts to develop a nuclear material tracking system for monitoring nuclear materials exported to foreign countries. A tracking system is important to protect nuclear materials from loss, theft, or diversion. In 1994, we reported that the existing system was not able to track all exported nuclear materials and equipment; moreover, DOE had not adequately planned the replacement system. We recommended activities that we believed were necessary to ensure that the new system would be successful. In 1995, we found that DOE had not implemented our recommendations and had no plans to do so. We also found that the system still had development risks. DOE was not adequately addressing these risks and had no plans to conduct acceptance testing; as a result, it had no assurance that the system would ever perform as intended. Our concerns were justified: 3 months after the new tracking system began operating, the technical committee overseeing the system concluded that it faced a high probability of failure and should not be used.

As you can see, Mr. Chairman, our work over the years has identified a wide variety of specific security problems at DOE facilities. While each individual security problem is a concern, when the problems are looked at collectively over an extended period of time, a more serious situation becomes apparent, one that stems from systemic causes. In our view, there are two overall systemic causes of the security problems. First, there has been a longstanding lack of attention and priority given to security matters by DOE managers and contractors. Second, and probably most important, there is a serious lack of accountability among DOE and its contractors for their actions. These two causes are interrelated and not easily corrected. The lack of attention and priority given by DOE management and its contractors to security matters can be seen in many areas.
One area is its long-term commitment to improving security. For example, in response to our 1988 report on foreign visitors, DOE required that more background checks be obtained. However, 6 years later, it granted Los Alamos and Sandia exemptions to this requirement, and as a result, few background checks were conducted at those facilities. Also in response to our 1988 report, DOE brought in FBI personnel to assist its counterintelligence programs. However, the FBI eventually withdrew its personnel in the early 1990s because of resistance within DOE to implementing the measures the FBI staff believed necessary to improve security. We note with interest that, in response to the current concerns with foreign visitors and other espionage threats against DOE facilities, the FBI is again being brought in to direct DOE's counterintelligence program.

The lack of attention to security matters can be seen in other ways as well. In 1996, when foreign visitors were coming to the laboratory in increasing numbers, Los Alamos funded only 1.1 staff years for its counterintelligence program. Essentially, one person had to monitor not only thousands of visitors to the laboratory but also over 1,000 visits made by laboratory scientists overseas. This problem was not isolated to Los Alamos; funding for counterintelligence activities at DOE facilities during the mid-1990s could only be considered minimal. Prior to fiscal year 1997, DOE provided no direct funding for counterintelligence programs at its facilities. Consequently, at eight high-risk facilities, counterintelligence program funding was obtained from overhead accounts and totaled only $1.4 million, supporting just 15 staff.

Resources were inadequate in other areas as well. In 1992, we reported that safeguard and security plans and vulnerability assessments for many of DOE's sensitive facilities were almost 2 years overdue because, among other reasons, DOE had not provided sufficient staff to get the job done. These plans and assessments are important in identifying threats to the facilities as well as devising countermeasures to those threats. In our view, not providing sufficient resources to these important activities indicates that security is not a top priority. This problem is not new: we reported in 1980, and again in 1982, that funding for security had low priority and little visibility.

Earlier, I mentioned missing classified documents at the Lawrence Livermore Laboratory. In response to that report, both DOE and laboratory officials showed little concern for the seriousness of the situation and told us that they believed the missing documents were the result of administrative error, such as inaccurate record keeping, and not theft. Although DOE is required to conduct an assessment of the missing documents' potential for compromising national security, at the time of our report DOE did not plan to do so until more than 1 year after we reported the documents missing. Similarly, security problems identified by DOE's own internal security oversight staff often go unresolved, even today. For example, issues related to the inadequate separation of classified and unclassified computer networks were identified at Los Alamos in 1988, 1992, and 1994. This problem was only partially corrected in 1997, as classified information was discovered on Los Alamos' unclassified computer network in 1998.
We found in 1991 that deficiencies DOE identified as early as 1985 at six facilities had not been corrected by 1990 because DOE did not have a systematic method for tracking corrective actions taken on its own security inspections.

The low priority given security matters is underscored by how DOE manages its contractors. DOE's contracts with the University of California for managing its Los Alamos and Lawrence Livermore national laboratories contain specific measures for evaluating the university's performance. These measures are reviewed annually by DOE and should reflect the most important activities of the contractor. However, none of the 102 measures in the Los Alamos contract or the 86 measures in the Lawrence Livermore contract relates to counterintelligence. We reported in 1997 that DOE had not developed measures for evaluating the laboratories' counterintelligence activities, and DOE told us it was considering amending its contracts to address this problem. Performance measures for counterintelligence activities are still not in its contracts for these two laboratories. The contracts do contain a related measure, for safeguarding classified documents and materials from unauthorized persons, but this measure represents less than 1 percent of the contractor's total score. Safeguards and security performance measures in general account for only about 5 percent of the university's performance evaluations for the two laboratories.

The low priority afforded security matters may account for the low ratings DOE has just given nuclear weapons facilities in its latest Annual Report on Safeguards and Security. Two weapons laboratories—Los Alamos and Lawrence Livermore—received a rating of "marginal" for 1997 and 1998. In its annual evaluation of Los Alamos' overall performance, however, DOE rated the laboratory as "excellent" in safeguards and security, even though the laboratory reported 45 classified matter compromises and infractions for the year, compared with a previous 3-year rolling average of 20. DOE explained that the overall excellent score was justified based on Los Alamos' performance in many different aspects of safeguards and security. For future contracts, a new DOE policy will enable the Department to withhold a laboratory's full fee for catastrophic events, such as a loss of control over classified material. We recommended as far back as 1990 that DOE withhold a contractor's fee for failing to fix security problems on a timely basis. Both laboratories have been managed by the University of California since their inception, without the contracts ever being recompeted, making them among the longest-running contracts in the DOE complex.

In the final analysis, security problems reflect a lack of accountability. The well-documented history of security lapses in the nuclear weapons complex shows that DOE is not holding its contractors accountable for meeting all of their important responsibilities. Furthermore, DOE leadership is not holding its program managers accountable for making sure contractors do their jobs. Achieving accountability in DOE is made more difficult by its complex organizational structure. Past advisory groups and internal DOE studies have often reported on DOE's complex organizational structure and the problems in accountability that result from unclear chains of command among headquarters, field offices, and contractors.
For example, the FBI, which examined DOE's counterintelligence activities in 1997, noted that there is a gap between authority and responsibility, particularly when national interests compete with the specialized interests of the academic or corporate management that operates the laboratories. Citing the autonomy DOE grants the laboratories, the FBI found that this autonomy has made national guidance, oversight, and accountability of the laboratories' counterintelligence programs arduous and inefficient.

A 1997 report by the Institute for Defense Analyses cited serious flaws in DOE's organizational structure. Noting long-standing concerns in DOE about how best to define the relationships between field offices and the headquarters program offices that sponsor work, the Institute concluded that "the overall picture that emerges is one of considerable confusion over vertical relationships and the roles of line and staff officials." As a consequence of DOE's complex structure, the Institute reported, unclear chains of command led to weak integration of programs and functions across the Department and confusion over the difference between line and staff roles.

A 1997 DOE internal report stated that "lack of clarity, inconsistency, and variability in the relationship between headquarters management and field organizations has been a longstanding criticism of DOE operations . . . . This is particularly true in situations when several headquarters programs fund activities at laboratories. . . ." DOE's Laboratory Operations Board also reported in 1997 on DOE's organizational problems, noting inefficiencies due to DOE's complicated management structure. The Board recommended that DOE undertake a major effort to rationalize and simplify its headquarters and field management structure to clarify roles and responsibilities.

DOE's complex organization stems from the multiple levels of reporting that exist among contractors, field offices, and headquarters program offices. Further complicating reporting, DOE assigns each laboratory to a field operations office, whose director serves as the contract manager and also prepares the contractor's annual appraisal. The operations office, however, reports to a separate headquarters office under the Deputy Secretary, not to the program office that supplies the funding. Thus, while the Los Alamos National Laboratory is primarily funded by Defense Programs, it reports to a field manager who reports to another part of the agency. We believe these organizational weaknesses are a major reason why DOE has been unable to develop long-term solutions to the recurring problems reported by advisory groups.

Recent events at the Brookhaven National Laboratory in New York, for example, illustrate the consequences of organizational confusion. Former Secretary Pena fired the contractor operating the laboratory when he learned that the contractor had breached the community's trust by failing to ensure that the laboratory could operate safely. DOE did not have a clear chain of command over environment, safety, and health matters; as a result, laboratory performance suffered in the absence of DOE accountability. To address problems in DOE's oversight, the Secretary removed the Chicago Operations Office from the chain of command over Brookhaven by having the on-site DOE staff report directly to the Secretary's office.
We found, however, that even though the on-site staff was technically reporting directly to the Secretary's office, the Chicago Operations Office was still managing the contractor on a day-to-day basis, including retaining the responsibility for preparing the laboratory's annual appraisal. Chicago officials told us that there was considerable confusion regarding the roles of Chicago and on-site DOE staff. As a result, DOE did not fundamentally change how it manages the contractor through its field offices. This concludes my testimony, and I will be happy to answer any questions you may have.

Nuclear Fuel Reprocessing And The Problems Of Safeguarding Against The Spread Of Nuclear Weapons (EMD-80-38, Mar. 18, 1980).
Safeguards and Security At DOE's Weapons Facilities Are Still Not Adequate (C-GAO/EMD-82-1, Aug. 20, 1982).
Security Concerns at DOE's Rocky Flats Nuclear Weapons Production Facility (GAO/RCED-85-83, Apr. 22, 1985).
Nuclear Nonproliferation: DOE Has Insufficient Control Over Nuclear Technology Exports (GAO/RCED-86-144, May 1, 1986).
Nuclear Security: DOE's Reinvestigation of Employees Has Not Been Timely (GAO/RCED-87-72, Mar. 10, 1987).
Nuclear Nonproliferation: Department of Energy Needs Tighter Controls Over Reprocessing Information (GAO/RCED-87-150, Aug. 17, 1987).
Nuclear Security: DOE Needs a More Accurate and Efficient Security Clearance Program (GAO/RCED-88-28, Dec. 29, 1987).
Nuclear Nonproliferation: Major Weaknesses in Foreign Visitor Controls at Weapons Laboratories (GAO/RCED-89-31, Oct. 11, 1988).
Nuclear Security: DOE Actions to Improve the Personnel Clearance Program (GAO/RCED-89-34, Nov. 9, 1988).
Nuclear Nonproliferation: Better Controls Needed Over Weapons-Related Information and Technology (GAO/RCED-89-116, June 19, 1989).
Nuclear Security: DOE Oversight of Livermore's Property Management System Is Inadequate (GAO/RCED-90-122, Apr. 18, 1990).
Nuclear Safety: Potential Security Weaknesses at Los Alamos and Other DOE Facilities (GAO/RCED-91-12, Oct. 11, 1990).
Nuclear Security: Accountability for Livermore's Secret Classified Documents Is Inadequate (GAO/RCED-91-65, Feb. 8, 1991).
Nuclear Nonproliferation: DOE Needs Better Controls to Identify Contractors Having Foreign Interests (GAO/RCED-91-83, Mar. 25, 1991).
Nuclear Security: Property Control Problems at DOE's Livermore Laboratory Continue (GAO/RCED-91-141, May 16, 1991).
Nuclear Security: DOE Original Classification Authority Has Been Improperly Delegated (GAO/RCED-91-183, July 5, 1991).
Nuclear Security: Safeguards and Security Weaknesses at DOE's Weapons Facilities (GAO/RCED-92-39, Dec. 13, 1991).
Nuclear Security: Weak Internal Controls Hamper Oversight of DOE's Security Program (GAO/RCED-92-146, June 29, 1992).
Nuclear Security: Improving Correction of Security Deficiencies at DOE's Weapons Facilities (GAO/RCED-93-10, Nov. 16, 1992).
Nuclear Security: Safeguards and Security Planning at DOE Facilities Incomplete (GAO/RCED-93-14, Oct. 30, 1992).
Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations (GAO/RCED-93-23, May 10, 1993).
Nuclear Security: DOE's Progress on Reducing Its Security Clearance Work Load (GAO/RCED-93-183, Aug. 12, 1993).
Nuclear Nonproliferation: U.S. International Nuclear Materials Tracking Capabilities Are Limited (GAO/RCED/AIMD-95-5, Dec. 27, 1994).
Department of Energy: Poor Management of Nuclear Materials Tracking System Makes Success Unlikely (GAO/AIMD-95-165, Aug. 3, 1995).
Nuclear Nonproliferation: Concerns With the U.S. International Nuclear Materials Tracking System (GAO/T-RCED/AIMD-96-91, Feb. 28, 1996).
DOE Security: Information on Foreign Visitors to the Weapons Laboratories (GAO/T-RCED-96-260, Sept. 26, 1996).
Department of Energy: DOE Needs to Improve Controls Over Foreign Visitors to Weapons Laboratories (GAO/RCED-97-229, Sept. 25, 1997).
Department of Energy: Information on the Distribution of Funds for Counterintelligence Programs and the Resulting Expansion of These Programs (GAO/RCED-97-128R, Apr. 25, 1997).
Department of Energy: Problems in DOE's Foreign Visitor Program Persist (GAO/T-RCED-99-19, Oct. 6, 1998).
Department of Energy: DOE Needs To Improve Controls Over Foreign Visitors To Its Weapons Laboratories (GAO/T-RCED-99-28, Oct. 14, 1998).
Nuclear Nonproliferation: Concerns With DOE's Efforts to Reduce the Risks Posed by Russia's Unemployed Weapons Scientists (GAO/RCED-99-54, Feb. 19, 1999). | Pursuant to a congressional request, GAO discussed its past work involving security at the Department of Energy's (DOE) facilities.
GAO noted that: (1) GAO's work has identified security-related problems with controlling foreign visitors, protecting classified and sensitive information, maintaining physical security over facilities and property, ensuring the trustworthiness of employees, and accounting for nuclear materials; (2) these problems include: (a) ineffective controls over foreign visitors to DOE's most sensitive facilities; (b) weaknesses in efforts to control and protect classified and sensitive information; (c) lax physical security controls, such as security personnel and fences, to protect facilities and property; (d) ineffective management of personnel security clearance programs; and (e) weaknesses in DOE's ability to track and control nuclear materials; (3) the recent revelations about espionage bring to light how ingrained security problems are at DOE; (4) although each individual security problem is a concern, when these problems are looked at collectively over time, a more serious situation becomes apparent; (5) while a number of investigations are under way to determine the status of these security problems, GAO has found that DOE has often agreed to take corrective action but the implementation has not been successful and the problems recur; (6) there are two overall systemic causes for this situation; (7) DOE managers and contractors have shown a lack of attention and priority to security matters; (8) there is a serious lack of accountability at DOE; (9) efforts to address security problems have languished for years without resolution or repercussions for the organizations responsible; (10) security in today's environment is even more challenging, given the greater openness that now exists at DOE's facilities and the international cooperation associated with some of DOE's research; (11) even when more stringent security measures were in place than exist today, problems arose and secrets could be, and were, lost; and (12) consequently, continual vigilance, as well as more sophisticated security strategies, will be needed to meet the threats that exist today.
VA’s process for deciding veterans’ eligibility for disability compensation begins when a veteran submits a claim to VA. The claim is reviewed at one of VBA’s 56 regional offices where staff members assist the veteran by gathering any additional evidence, such as military and medical records, needed to evaluate the claim. Based on this evidence, and the results of any necessary medical examinations, VBA decides whether the veteran is entitled to compensation and, if so, how much. VBA assigns a rating of 0 to 100 percent disability in increments of 10 percentage points depending on the severity of the disability. This rating percentage then determines the monthly payment amount the veteran will receive. According to VA data, in many cases (74 percent), the veteran submitting a claim either is already a beneficiary but is seeking increased compensation, or the veteran was denied benefits previously and is claiming them again. In fiscal year 2015, VBA decided 1.4 million compensation claims and had an inventory of 363,000 claims at the end of the fiscal year. As previously noted, in fiscal year 2015, VA paid about $63.7 billion in disability compensation to about 4.1 million veterans. A veteran dissatisfied with VBA’s initial claim decision can generally appeal within one year from the date of VBA’s notification letter to the veteran. According to the Board, veterans appeal most often because they believe VBA: (1) incorrectly denied them compensation for service- connected disabilities, or (2) under-rated their service-connected disabilities. An appeal begins with the veteran filing a Notice of Disagreement (NOD). VBA then re-examines the case and generally issues a Statement of the Case (SOC) that represents its decision. A veteran who is or remains dissatisfied with VBA’s decision can file an appeal with the Board. In filing that appeal, the veteran could indicate whether they would like a Board hearing. VBA prepares the claim file for Board review and certifies it as ready for review. If the veteran requests a hearing so they can present new evidence or arguments, the Board will generally hold a hearing either by video conference or at a local VBA regional office. The Board reports to the Office of the Secretary of Veterans Affairs, and is independent of VBA. The Board’s members, also known as Veterans Law Judges (VLJ), decide appeals and are supported by attorneys and administrative staff. After the appeal is docketed at the Board, a VLJ or panel of VLJs reviews the evidence and either (1) grants the claimed benefit, (2) denies the benefit, or (3) returns (remands) the claim to VBA for additional work on one or more issues pertinent to the claim and a new decision. According to VA, the Board remands an appeal to VBA in cases where consideration of new evidence, clarification of evidence, correction of procedural defect, or any other action it deems is essential to achieve a proper decision. If the veteran is unsatisfied with the Board’s final decision, the veteran can continue an appeal beyond VA to federal court. Such an appeal begins with the U.S. Court of Appeals for Veterans Claims, then may go to the U.S. Court of Appeals for the Federal Circuit, and finally to the U.S. Supreme Court. See figure 1 for a representation of the appeals process for VA disability compensation benefit decisions. According to VA officials, the number of appeals filed has increased steadily as has the length of time needed for the agency to make a final decision. 
At the end of fiscal year 2015, according to VA data, VA had over 427,000 pending appeals, approximately 81,000 of which were at the Board. While appeals awaiting Board decisions made up less than a quarter of all pending appeals, the Board's fiscal year 2015 inventory was almost double the 41,000 pending at the end of fiscal year 2011, and about 20 percent of this growth occurred from fiscal year 2014 through 2015. According to Board data, timeliness has worsened since fiscal year 2011 as well. From fiscal years 2011 through 2015, the average amount of time needed for the Board to make a final decision once the appeal is docketed increased from 240 to 270 days. In addition, the proportion of cases taking the longest to resolve (from when the Board receives the certified appeal to when it makes a final decision)—over 600 days—increased from 10 percent in fiscal year 2011 to 14 percent in fiscal year 2015 (see fig. 2). Given that the median time for the Board to decide an appeal was 145 days in fiscal year 2015 (compared to an average of 270 days), these data suggest that a relatively small number of appeals is driving up the Board's reported average processing times. To illustrate, VA officials noted one case in which a veteran appealed 27 times over the course of 25 years before the original appeal was concluded.

VA has identified three broad approaches for addressing the factors that it identified as having contributed to increased appeal inventories and reduced timeliness of appeals decisions, and it has already taken action on all three fronts. Citing staffing levels that have not kept pace with workloads, VA secured additional Board staff for fiscal year 2017 and analyzed options for another hiring surge in fiscal year 2018. Concerned that its appeals process contributes to delays in appeals decisions—because new evidence may be submitted at any juncture and because VA may be continually required to develop or obtain additional evidence—VA developed a legislative proposal for streamlining its appeals process, including new appeals options designed to accelerate decision-making. Finally, VA has put forth plans to modernize its current, outdated, and inefficient computer system.

VA has proposed increasing staff at the Board, as well as at VBA, to manage its increasing inventory of appeals and to address related declines in the timeliness of appeals resolutions. VA officials stated that there is a direct and proportional correlation between the number of employees and the number of final appeals decisions and that Board workloads especially have increased faster than the number of employees staffed to the Board. Specifically, officials have concluded that staff resources within the Board have not been sufficient to adjudicate the increasing number of appeals, ultimately lengthening appeals resolution times. According to VA, in fiscal year 2015, increases in staff (VLJs, attorneys, and support staff), as represented by full-time equivalents (FTEs), allowed the Board to make the highest number of decisions in nearly 30 years. However, despite Board staff increasing by 21 percent from fiscal years 2011 through 2015, officials said that this increase was not sufficient to address the growing inventory of pending appeals, which doubled during the same time period (see fig. 3).
Although the increase in Board staff brought about a record number of appeals decisions in fiscal year 2015, according to VA data we reviewed, each appeal took an average of about 3 months (97 days) longer to reach a final decision than in fiscal year 2012. Similarly, in fiscal year 2015, one Board FTE produced an average of 86 appeals decisions, down from 91 per FTE in fiscal year 2011. Growing workloads and the increased complexity of cases, according to Board officials, have contributed to these longer appeal resolution times. More specifically, officials said that claims have become more complicated due not only to the number and complexity of injuries and illnesses but also to advances in medicine that have improved survival rates for the catastrophic injuries experienced by today's veterans.

VA officials estimated that if the number of FTEs and the number of appeals decided per FTE stay steady or decrease, appeals resolution times will continue to lengthen. Specifically, as of October 2016, VA projected that if nothing else changes and the number of FTEs holds steady at the fiscal year 2017 level (922 FTEs for the Board and 1,495 for VBA), the inventory of appeals could exceed 1 million in fiscal year 2026, which would mean that veterans would wait an average of 8.5 years for a final appeals decision. In light of this assessment, VA concluded that increasing the number of FTEs at the Board is a key step in mitigating the current and future pending inventory of appeals and ultimately improving appeals decision timeliness.

In 2016, VA set a goal to decide the vast majority (90 percent) of appeals (including both those reviewed by VBA and the Board) within 1 year by 2021. As an initial step toward this goal, VA requested and received a funding amount that the agency asserted would allow it to fund an additional 242 FTEs for the Board in fiscal year 2017 (a 36 percent increase over the 680 FTEs funded in fiscal year 2016), for a total of 922 FTEs. VA also concluded, however, that this increase in staff will not be enough to reduce its appeals workload and decrease appeals processing time. Therefore, VA estimated the need for a subsequent hiring surge of up to 1,458 FTEs beginning in fiscal year 2018 to reduce the current pending appeals inventory.

To understand the need for and implications of a future hiring surge, VA modeled different staffing scenarios. Initially, VA compared how increasing staff, with and without proposed changes to the appeals process, would achieve inventory reductions, and at what cost. VA determined that by combining staff increases with a new process, it could clear pending appeals faster and at a lower cost than if it hired additional staff under the current process. In response to congressional inquiries, in September 2016 VA also modeled the cost and impact on appeals inventories of four surge options beginning in fiscal year 2018 (in addition to planned hiring in fiscal year 2017). VA estimated, for example, that projected pending appeals in fiscal year 2017 (535,726) would be cleared in 10 years under option 2, compared to a 60 percent reduction over the same time period if there were no hiring surge. See table 1 for a comparison of the four options.
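The basic arithmetic behind inventory projections of this kind can be sketched in a few lines of Python. The FTE count, per-FTE output, and starting inventory below are taken from the figures in this section; the annual volume of new appeals is a hypothetical placeholder, so the output only loosely echoes, and does not reproduce, VA's modeling.

# Minimal sketch of an appeals inventory projection: the inventory grows
# by appeals received and shrinks by decisions, which scale with staffing.
# Board FTEs (922), decisions per FTE (86), and the starting inventory
# (535,726) come from the text; annual receipts are an assumed placeholder.
def project_inventory(inventory=535_726, annual_receipts=130_000,
                      ftes=922, decisions_per_fte=86, years=10):
    for year in range(1, years + 1):
        decided = ftes * decisions_per_fte
        inventory = max(0, inventory + annual_receipts - decided)
        print(f"Year {year}: {inventory:,} appeals pending")
    return inventory

# Under these assumed receipts, the pending inventory passes 1 million
# within 10 years, directionally consistent with VA's projection.
project_inventory()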
VA has proposed changes to its appeals process to address the causes of delays in resolving appeals. The key challenge VA identified was the open-ended nature of its disability appeals process, whereby a veteran can submit additional evidence numerous times, at any point during the VA appeals process, and each submission can trigger another cycle of re-adjudication. Specifically, when a veteran submits additional pertinent evidence after VA's initial decision on the claim, VA is generally required to review the evidence, develop any other needed evidence, and issue another decision. This is the case regardless of whether the veteran submits the additional evidence to VBA or to the Board, and for appeals pending before the Board, the submission of additional evidence may result in a remand to VBA for further development. VA reported that in fiscal year 2015, the Board remanded about 46 percent of appeals to VBA for additional development. For those remanded appeals, which may involve more than one issue, VA reported that about 60 percent of the reasons the appeals were returned to VBA stemmed from the open record, which allows veterans to introduce new evidence at any point during the appeal. VA reported that in fiscal year 2015, it took VBA an additional 255 days on average to complete remand development and for the appeal to be re-docketed at the Board. VA also reported that in fiscal year 2015 it took the Board an average of 244 additional days to complete its subsequent review of the returned remands and decide the appeal. According to VA, this re-adjudication can occur multiple times and can add years to the time needed to reach a final decision on an appeal.

Board and VSO officials also identified factors within VBA's initial claim process—and outside of the Board's control—that delay veterans' receipt of final decisions on their appeals. Specifically, according to Board and VSO officials, VBA's decision notification letters are unclear and confusing. In particular, the officials stated that these letters do not adequately explain why claims were denied and do not clearly identify the evidence a veteran needs to provide to fully support a claim on appeal. As a result, some veterans may appeal unnecessarily, or they may appeal without providing the evidence needed to support their claims.

VSO officials we interviewed said that some delays are attributable to errors in VBA's initial decisions. They suggested that errors may have occurred because VBA rushed some decisions in its initiative to reduce its backlog of claims pending more than 125 days. Such errors can lead to Board remands and VBA re-work. For example, the Board may remand an appeal because VBA failed to meet its "duty to assist" responsibilities to a veteran. According to the Board, 41 percent of the reasons for remands in fiscal year 2015 were due to a VBA error.

Board and VSO officials also cited delays in VBA's transmittal of appeals to the Board as a possible cause of delays in Board decisions. When a veteran files an application with VBA to appeal to the Board, VBA prepares the case file for transfer, certifies that the case file is complete and ready for Board review, and transmits the file to the Board. According to VA data on appeals decisions made by the Board in fiscal year 2015, it took an average of 537 days to process an appeal from receipt to certification. Docketing appeals that had been certified to the Board added an average of 222 days to processing times for appeals decisions made in fiscal year 2015.
Two of the four VSOs we interviewed told us that they noticed these delays occurring as VA's focus shifted to clearing the compensation benefit claims backlog.

To address process-related challenges, VA's approach has been to develop a proposal to streamline the appeals process and to ask the Congress to make changes in the laws governing the process. In April 2016, VA issued a draft summary of a proposed streamlined appeals process that reflected collaboration with its stakeholders. This summary was accompanied by draft legislation for the Congress' consideration. If enacted into law, the draft legislation would make the process changes that VA identified as needed to streamline the appeals process. According to VA, key to the proposed changes would be replacing the current appeals process, which begins in VBA, with a process giving a veteran four options—two in VBA and two in the Board. As presented in VA's framework, these options would be:

(1) Ask VBA to review its initial decision based on the same evidence. Under this option, the veteran would not be able to submit new evidence or request a VBA hearing and would not be subject to VA's "duty to assist" requirement. A VBA official (at a level higher than the official who made the initial decision) would review the record supporting the initial decision and issue a new decision.

(2) File a "supplemental claim" with VBA, asking VBA to review its initial decision while providing additional evidence. Under this option, the veteran could also request a VBA hearing. Another VBA official (at the same level as the original VBA decision-maker) would review the revised record, including the additional evidence from the veteran, and issue a new decision.

Alternatively, the veteran could file a Notice of Disagreement directly with the Board, bypassing a VBA review, with two options at the Board:

(3) Ask the Board to review only the existing record, without a hearing, and then issue a decision.

(4) Ask the Board to review additional evidence, conduct a hearing before issuing its decision, or both.

See figure 4 for a representation of the options in VA's proposed simplified appeals process. VA officials anticipate that the proposed appeals process would expedite appeals in a number of ways, most notably the following.

For those appeals in which no additional evidence is submitted and no formal hearing is conducted (indicated as "VBA conducts local higher-level review" and "Board reviews record without new evidence or a hearing" in figure 4), the re-review of the original record could expedite a final appeals decision. In addition, VA's "duty to assist" requirements would apply only to VBA, for initial and supplemental claims.

Unlike the current process, in which the Board may remand appeals to VBA to consider new evidence, under the new process the Board would remand appeals only in cases in which it found that VBA had failed, in its initial or supplemental claim processing, to meet VA's "duty to assist" the veteran. VA estimates that, once the new process is fully implemented, remands will steadily decrease and eventually occur in as few as 5 percent of appeals.

When the veteran appeals directly to the Board, VBA would no longer be required to review the record (including any additional evidence), prepare statements of its findings (i.e., prepare SOCs or supplemental SOCs), and certify appeals as ready for Board review.
VA has estimated that as a result of these process changes—in combination with increased FTEs—the Board could complete cases faster, deciding many more appeals per FTE in fiscal year 2018 than in fiscal year 2015. More specifically, VA estimated that the Board could complete an average of 180 appeals decisions per FTE without a hearing and 130 with a hearing, compared with the average of 86 total decisions per FTE in fiscal year 2015. We discuss VA's estimates in more detail later in this report. While VA's proposal reflects VA's intent to expedite appeals resolutions, it also contains various protections for veterans that are intended to address stakeholders' concerns about fairness. Notably, such protections include the following:

• In contrast to the current "one size fits all" process, the proposed reform allows the veteran to choose an option that best fits the circumstances of the claim. As shown in figure 4 above, a veteran could choose to have VBA review the initial decision or could appeal directly to the Board. Also, the veteran would have the option to have either VBA or the Board review the existing record, without having to submit new evidence or request a formal hearing. VA expects that these options could help the veteran obtain a faster decision from VBA or the Board. Per VA's framework, under the new process, the veteran would have up to 1 year from VBA's initial decision to choose an option. Further, if the veteran is unsuccessful in one appeal option, the veteran could, within 1 year, choose another option. However, according to VA, an appeal for a higher-level review by VBA without new evidence could not directly follow a Board decision.

• VBA would be required to provide more information in letters notifying veterans of decisions involving a denial of benefits, which could help veterans make more informed decisions on whether to appeal, which option to pursue, and what additional evidence (if any) to provide. The inclusion of these additional notification requirements in VA's proposed legislation addresses stakeholders' concerns that veterans did not have enough information to decide whether they should appeal, or what additional evidence they needed to provide, resulting in unnecessary appeals or delays in appeals.

• A veteran who is not fully satisfied with the result of any lane would have 1 year to seek further review, while preserving an effective date for benefits based on the date the veteran filed the original claim with VBA. This would help ensure a veteran is not penalized for pursuing an appeal to the Board. For example, under VA's proposal, a veteran denied benefits by the Board could choose to have VBA conduct another review, by filing a supplemental claim with additional evidence. In contrast, under current law, if a veteran appeals to the Board and is denied (and does not appeal to a federal court), the veteran must generally reopen the claim, or start over, by filing another claim with VBA. If the veteran is subsequently granted benefits, the benefits would generally be awarded from the date on which the new (not original) claim was filed, which could result in the veteran not receiving retroactive compensation payments.

VA has plans to modernize its current IT system, which it determined is antiquated and a source of delays in processing appeals. VA currently uses the Veterans Appeals Control and Locator System (VACOLS) to track and manage its appeals workload.
VA identified a number of reasons why it believes VACOLS should be replaced, including the following:

• The system is based on outdated technology dating from the 1990s that VA determined would be difficult to modify to meet the agency's changing needs.

• VA designed VACOLS around a paper-based claims process, and as a result VACOLS does not adequately support a fully electronic environment. According to VA, although VACOLS has been patched to some extent to handle paperless appeals, the Board relies on paper briefs to help manage its appeals workflow.

• VACOLS's lack of automation, integration with other VA systems, and error checks results in mistakes and lost productivity. According to VA, individual employees spend a significant amount of time correcting data entry errors that would be avoided if cases were automatically transferred to the Board. For instance, officials said that after cases are transferred to the Board, a team of employees must manually review and correct most incoming cases due to issues with labeling, mismatched dates, and missing files. Through an internal study, VA determined that up to 88 percent of cases transferred to the Board had such errors. Additionally, VA notes that data entry errors can result in paperless cases being mislabeled as paper-based; these cases will not show up as certified in VACOLS, and the Board will erroneously wait for a paper case that will never arrive.

• Because VACOLS is central to appeals processing, a system outage would halt the processing of all appeals across VA, whether paper or electronic, until VACOLS is repaired, according to VA.

VA expects its VACOLS replacement to improve the efficiency of its appeals decisions. Its planned replacement—called Caseflow—is intended to address the limitations of VACOLS and better support processing appeals in a paperless environment. According to VA, Caseflow is being developed in an agile process in which new functions are added to the system as they are completed. In fiscal year 2016, VA developed two initial deliverables. According to VA, the first is intended to automate and introduce consistency to the process of transferring appeals to the Board. The second introduced the ability for staff to access documentation from the Veterans Benefits Management System (VBMS)—VA's system for processing claims—which VA believes will eventually allow users to review appeals more efficiently. As of February 2017, VA officials also noted the agency is in the process of developing additional components, including document review software for VLJs and attorneys, and a component to better track appeals that are remanded to VBA. According to officials, VA's longer-term plans include a broad roadmap for continuously adding improvements to Caseflow. For instance, VA plans to build into the system the capability to generate performance metrics, using a component called Caseflow Dashboard. VA states that the dashboard will be able to draw on various VA data systems and provide information on bottlenecks in the appeals process, quantify improvements in the appeals process—including those attributable to improved IT systems—and track the reasons for and number of remands. While Caseflow improvements are being made, VA reported it plans to maintain VACOLS as a redundant resource until the new system is fully complete, at which point VACOLS will be retired.

VA acted consistently with sound planning practices in determining its need for additional staff, but it did not fully consider risks and uncertainties in its approach.
Sound practices for effective planning suggest that agencies should consider alternative solutions to a problem; assess the risk of unintended consequences; and use data to analyze the problem, including unknowns. Consistent with these concepts and with more specific sound workforce planning practices, VA considered various hiring options, such as hiring staff under the current process versus its proposed process, and modeled appeals inventories under four hiring surge options. When comparing the four hiring options, VA considered a number of factors, including historical data on the volume and complexity of appeals, estimates of future growth in appeals, and the productivity of employees, in estimating the number of Board staff needed to meet its timeliness goals. For instance, the Board reviewed past data on the productivity of new staff—which is generally lower for a period of time until individuals acclimate to their jobs—and factored this into the modeling assumptions used to project the number of Board staff needed. More specifically, sound workforce planning practices suggest that agencies identify the resources needed to manage the risks of implementing new processes and conduct scenario planning to determine these needs. While VA considered a number of factors when analyzing hiring options, it initially made many assumptions using a single set of estimates instead of using a sensitivity analysis to consider a range of estimates. These assumptions could have significant implications for how accurately VA identifies needed resources. For example, in its scenario analysis VA assumed: (1) that an average of 50 percent of those veterans appealing will refile their appeal and go through two of the four appeals process options before being satisfied; and (2) that the Board will be able to decide 130 appeals per FTE for appeals with hearings, doing so within 3 years (1,095 days), and 180 appeals per FTE within 1 year for appeals without hearings. Because the Board did not consider alternate sets of assumptions, VA does not know the potential effect that variations in these key variables could have on staffing needs. In response to discussions with us about its scenario analyses, VA recently conducted further analyses using alternative estimates for key factors, although the agency's analyses fell short of the previously discussed sound practices for estimating outcomes based on assumptions. Specifically, VA calculated the effect on appeals inventories and timeliness if VA decided 20 percent fewer appeals, if VA decided more claims and thus had more appeals than expected, and if the breakdown of options that veterans selected for their appeals review differed from the 50/50 split VA projected. The 20 percent reduction in productivity alone could add 2.5 years to VA's estimate of how long it would take to clear the appeals inventory under hiring surge option 4. However, VA ran a sensitivity analysis for only one of the four hiring surge options and did not analyze the compounded effect of different assumptions together. By not comprehensively conducting sensitivity analyses, VA is hampered in its ability to anticipate and plan for different contingencies, and risks being caught off guard and potentially hiring an inappropriate number of staff.
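To illustrate what a compounded sensitivity analysis of this kind might look like, the sketch below varies two assumptions together, productivity (decisions per FTE per year) and the share of decisions that are refiled, and reports how long clearing a pending inventory would take under each combination. All figures (the inventory size, incoming workload, staffing level, parameter ranges, and the simplifying refile model) are hypothetical illustrations, not VA's actual model inputs or methodology.

# Illustrative sensitivity analysis of appeals-inventory projections.
# All numbers below are hypothetical assumptions, not VA model inputs.

PENDING_APPEALS = 120_000      # assumed starting inventory
NEW_APPEALS_PER_YEAR = 50_000  # assumed annual incoming appeals
FTES = 1_000                   # assumed decision-making staff

def years_to_clear(decisions_per_fte: float, refile_rate: float) -> float:
    """Closed-form estimate of years needed to work off the inventory.

    Simplifying assumption: each decision has an independent chance
    (refile_rate) of being appealed again, so each appeal generates
    1 / (1 - refile_rate) decisions on average.
    """
    decisions_per_appeal = 1.0 / (1.0 - refile_rate)
    annual_capacity = decisions_per_fte * FTES
    annual_demand = NEW_APPEALS_PER_YEAR * decisions_per_appeal
    surplus = annual_capacity - annual_demand
    if surplus <= 0:
        return float("inf")  # capacity never catches up with new filings
    return PENDING_APPEALS * decisions_per_appeal / surplus

# Vary both assumptions together to see their compounded effect.
for per_fte in (104, 130, 156):      # 20% below, at, and above a baseline
    for refile in (0.4, 0.5, 0.6):   # range around a 50% refile assumption
        print(f"{per_fte}/FTE, {refile:.0%} refile: "
              f"{years_to_clear(per_fte, refile):.1f} years")

Even in this simplified sketch, plausible shifts in the two assumptions move the result from roughly 3 years to never clearing the inventory, which is the kind of compounded risk a fuller set of sensitivity analyses would surface.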
Hiring too few staff could result in it taking longer to reduce the inventory of pending appeals, while hiring too many staff could result in higher expenditures than needed and exacerbate other challenges, such as ensuring sufficient office space, training, and other supports for newly hired staff, as discussed below. VA has acknowledged that some of its assumptions, and thus projections, are based on unknowns and that it will need to continuously rerun the models with updated data. VA also identified strategies and resources needed for recruiting, hiring, and training staff in fiscal year 2017; however, aspects of VA's workforce planning fall short of sound workforce planning practices, which suggest having timely written plans with a systematic approach and detailed steps, time frames, and mitigation strategies to help identify where resources and investments should be targeted. As noted below, VA has identified strategies and taken some positive steps related to recruiting, hiring, and training staff in fiscal year 2017, although these plans sometimes lacked certain details specifically covered in sound workforce planning principles in time to inform ongoing efforts.

Recruitment and Hiring: Consistent with sound workforce planning practices, officials have worked to develop a center for excellence in hiring to coordinate workforce planning and develop strategies for recruiting and hiring staff quickly. However, the center was not established until the last quarter of fiscal year 2016, shortly before it was to support hiring beginning early in fiscal year 2017. To date, the Board has developed a project to recruit recent law school graduates and alumni in fiscal year 2017, according to Board officials. It has also formed a committee of over 90 volunteers to develop recruitment materials, identify opportunities, and make contact with law schools; developed a PowerPoint presentation for the visits; and conducted a few initial presentations at law schools. However, the agency had not yet worked out specific goals, such as the number of presentations or resulting applications, the average time taken to recruit, and the skills needed in recruits, or identified metrics (other than hiring goals) against which it would measure the effectiveness of the recruitment efforts. Also consistent with sound workforce planning practices, Board officials told us that they considered lessons learned from a 2013 hiring surge, although the agency did not provide documentation of these lessons learned. Having established a goal of hiring 25 to 52 new employees per month from October 2016 through April 2017, the Board subsequently faced challenges finding space for staff coming aboard in fiscal year 2017. Specifically, as of October 2016, the Board was reconfiguring its office space to accommodate the planned 242 new FTEs in fiscal year 2017, using nearly all of its conference rooms, and developing a plan for using telework and office sharing to accommodate staff until the space is available for them, according to officials.

Training: As of February 2017, VA had rolled out training for attorneys newly hired in fiscal year 2017, which includes 4 weeks of training and 8 additional weeks of one-on-one mentoring. VA also stated that its Office of Knowledge Management was expanded with additional staff resources to ensure training materials were up to date.
However, in November 2016, officials reported that the Board was still in the process of updating various aspects of its training curriculum, such as how to support conducting work in a virtual environment, which is consistent with the agency's plans to increase telework as a way to manage space restrictions for new staff. As of February 2017, VA had not provided updates on this effort in its comments on this report. As of October 2016, it was also unclear how the Board's 2017 recruiting, hiring, and training efforts would be adjusted to support the agency's proposed hiring surges in 2018 or its proposed process reform. For example, the Board has not yet determined how it will meet the space needs for any additional growth associated with hiring surges proposed for fiscal year 2018, although more detailed planning in advance might have better prepared VA for bringing aboard 242 FTEs in fiscal year 2017. In addition, VA officials stated in February 2017 that draft training for the proposed new appeals process had been prepared based on statutory language, although these draft documents were not included in VA's comments. Federal strategic planning guidance calls for an agency to have clear plans and goals and to regularly assess its human capital approaches, including through data-driven human capital management, to improve its ability to maximize the value of human capital investments while managing related risks. The lack of detailed workforce plans and mitigation strategies in advance of the hiring surges proposed for 2018, as well as potential process reform, further places VA at risk of not being ready to accommodate another quick and much larger increase in staff, or to train staff in accordance with either the legacy or the proposed reform process.

VA collaborated with key stakeholders in developing its proposed appeals process reform framework and related implementation plans, which is consistent with sound practices for business process redesign. Sound redesign practices suggest coordinating with stakeholders in developing and implementing plans to obtain and maintain buy-in from start to finish and to identify and address disagreements. In developing its proposal, Board and VBA officials engaged stakeholders from 11 organizations—including VSOs that represent veterans in appeals hearings before VBA and the Board—in discussions to design a streamlined appeals process. Officials we interviewed from three of the four VSOs, all of whom participated in the discussions, noted that VA's resulting process proposal addressed both the agency's desire to expedite appeals resolutions and stakeholder desires that the new process be fair to veterans. For example, VA identified and prioritized key concerns and found areas of consensus with VSOs. VA officials stated that they plan to continue to discuss appeals process reform (among other topics) at regular meetings with stakeholders, during which stakeholders will have an opportunity to provide feedback on previously unforeseen issues. VA officials said that as process reform is implemented, the agency will invite local VSOs to training, share training materials, and provide briefings to them and other stakeholder groups. While VA has achieved broad agreement internally and with VSOs on its proposed process reform, there are several unaddressed gaps in VA's business case for implementation that introduce the risk of not producing the desired results, as follows.
To develop a business case for implementing process change, sound redesign practices suggest first mapping and analyzing the target process to understand the cause and cost of performance breakdowns, and assessing potential barriers, costs, and benefits of alternative processes. This, in turn, would inform the selection of a feasible alternative with a high return on investment, and the development of a business case that describes benefits, costs, and risks. However, due to IT limitations, VA lacks data to inform and confirm its understanding of the root causes of lengthy time frames. For example, VA lacks complete historical data on the extent to which submission of new evidence and multiple decisions and appeals occur, and thus cannot determine the impact of its current, open-ended process on appeals decision timeliness. To shed light on root causes, VA analyzed 10 appeals decisions that it found took a long time to adjudicate, in order to illustrate extreme examples of cases being re-reviewed under VA's open-ended process—referred to by VA as "churning." However, VA cannot know the full extent to which churning might be occurring because, according to VA officials, the way data are stored made it difficult, if not impossible, to assemble a complete historical picture prior to December 2012. To help develop baseline data, VA analyzed the average number of decisions per appeals phase for several recent fiscal years, and, according to VA officials, they are still endeavoring to piece together additional historical baselines for performance. Further, although it was appropriate for VA to develop its proposed reform in consultation with internal and external experts, the agency did not test alternatives using data-driven, cost-effective methods suggested by sound redesign practices. Finally, as noted previously, in modeling the staff resources needed under its proposed process reform, VA relied on assumptions—about the percentage of veterans who will refile, will appeal to the Board, and will submit new evidence—that have direct implications for projections of appeals workloads, time frames, and cost. However, VA did not perform sufficient sensitivity analyses to help estimate a range of potential outcomes—analyses that might help VA understand the likelihood that the new process could be more costly and time-consuming in practice than anticipated, for example, if higher percentages of veterans file with the Board, submit new evidence, and request hearings than expected. These gaps notwithstanding, VA made some progress planning for potential implementation of proposed process reform in a manner generally consistent with sound planning practices for process redesign and change management, although some important details are still absent. According to sound planning practices, implementation is the most difficult phase of business process redesign: an agency must manage human capital and technical issues as it turns an idea into reality and overcomes potential resistance to change. To ensure an orderly transition, sound planning practices suggest following a comprehensive implementation plan that includes several key activities, such as establishing a transition team and developing a comprehensive plan to manage implementation. Consistent with this, as of October 2016, the Board and VBA had identified general time frames and the offices responsible for key implementation efforts.
Based on its staff modeling efforts, VA also identified how many FTEs it expects to devote to processing cases under the current process versus a new one, should it be implemented. Also, per sound planning practices, a comprehensive plan should address workforce training and redeployment issues (including working closely with employee unions to minimize potential adverse effects). Consistent with this, as of October 2016, VA had outlined general steps and time frames for training staff and communicating with the unions. While VA's high-level implementation plan included many components suggested by sound practices, key details had yet to be addressed. In particular, VA's general timetables and plans to date have not addressed in any detail how it will implement a new process while simultaneously working to reduce the appeals inventory under the current process. For example, the agency has not explained how it will track, or who will be responsible for tracking, the timeliness of appeals under the old process compared with the new one, and how decisions will be made to ensure the agency is devoting an appropriate share of resources to both processes. The lack of a detailed plan for managing this transition exposes the agency to the risk that veterans whose appeals are pending under the old process may experience significant delays relative to those under a new process. The Board recognized the need to ensure fairness to veterans with appeals pending under the current process, and indicated that while legislation that would authorize a new process is pending, it will continue to develop plans for managing the two processes in parallel. Sound practices for process redesign and change management also suggest having risk mitigation strategies—in particular, pilot testing—to help ensure a successful move to full implementation. Pilot testing provides agencies opportunities to evaluate the soundness of new processes in actual practice on a smaller scale; to refine performance measures; to collect and share implementation problems and solutions; to correct problems with the new design and refine the process prior to full implementation; and to build capacity among unit managers to lead change. Sound redesign and change management practices both suggest that pilot tests should be rigorously monitored and evaluated, and that further roll-out occur only after the agency's transition team has taken any needed corrective action and determined that the new process is achieving previously identified success criteria. As noted above, pilot testing is not the only method of achieving these risk mitigation goals, but sound planning practices suggest it is an important, often necessary approach for ensuring successful implementation when undertaking significant institutional change. Contrary to sound practices, VA officials stated they do not want to pilot test the proposed appeals reform, even though VA's proposed reform can be considered complex. VA's reform plans qualify as complex because, in addition to implementing a new process, the agency must still manage a large inventory of appeals under the old process while hiring and training a large number of staff and implementing IT improvements. Occurring together, these efforts involve significant change and uncertainty and will require management oversight across a broad range of efforts. In addition, VA's proposed process reform and other initiatives affect VBA's regional offices spread across the country, as well as the centrally located Board, further increasing the complexity of implementation.
VA officials also stated that the proposed process reform, which has been thoroughly vetted with stakeholders, has broad support, and noted their view that the risk of fully implementing change is outweighed by the cost of delay. VA's rationale for not pilot testing centers on what officials describe as widespread consensus that the current process is "fundamentally broken" and provides "inadequate service to veterans with a high percentage of wasted effort." VA assumed that a pilot test authorized by Congress would include a sunset date with a default reversion to the current system, which officials said would introduce uncertainty into the agency's planning efforts and create reliance on subsequent, time-consuming legislation before the conclusion of the pilot. VA officials stated that piloting with a sunset date would require the agency to expend additional resources and time to conduct parallel planning for reverting to the old system upon the sunset date. VA stated that pilot testing the new process for some veterans would be perceived as inequitable, despite VA having previously supported pilot testing a new appeals process. VA officials concluded that they have not identified any risk that would justify a pilot, and indicated that they plan to mitigate risk with a strong implementation plan. While VA has made a compelling case for reforming the appeals process, as noted previously, VA's business case for its proposed reform in some instances relies on unproven assumptions and limited analyses of its current process, which introduces risk into VA's plans for full implementation. Importantly, VA assumes that because the current framework is "fundamentally broken," its proposed new framework will necessarily be a better option. However, VA made this decision lacking complete data on the root cause of lengthy appeals under the current process, and without analyzing the barriers, costs, and benefits of feasible alternatives using cost-effective methods, such as computer simulations. VA correctly notes that pilot testing prior to full implementation would slow down an overhaul of the current system, thus countering the short-term net benefit that the agency expects to realize from such an overhaul. However, VA has not acknowledged that pilot testing the new process in a more limited fashion could greatly increase the probability of long-term success by decreasing the chance that a new system will experience unanticipated problems that are potentially more widespread and therefore costly to remedy. The inclusion of risk mitigation strategies such as pilot testing does not, as VA asserts, "imply that the status quo is not in dire need of sweeping reform," but rather balances the urgency of the current problem, the technical complexity of an overhaul, and the potential for unforeseeable complications. In light of this and the previously discussed inconsistencies in following sound planning practices, pushing forward with full implementation without testing how process reform unfolds and interacts with other efforts in practice may lead VA to experience implementation challenges and setbacks that could undermine efficiencies and other outcomes expected from planned reforms. In contrast, if VA were to pilot test the proposed appeals process reform, implementation problems encountered could be identified and resolved prior to full implementation. This could lead to smoother implementation and better outcomes overall.
Further, resources that would otherwise be diverted to full implementation of process reform across the organization could be focused on the current inventory of appeals. VA would also have additional time and managerial capacity to recruit and train new staff and to develop and implement a communication and outreach strategy in time for full implementation of the new process. Finally, if risk mitigation strategies demonstrated that process reform would be more costly and detrimental to time frames and workloads than predicted, a decision to modify or fix the process at that juncture would be made with more information and less impact on the agency overall. Whether VA conducts pilot testing or not, VA has not yet developed a plan for closely monitoring implementation or a strategy for assessing the success of its proposed process reform. Sound planning and redesign practices suggest that the transition team develop metrics and data gathering procedures, define success criteria, measure performance carefully, and take corrective action on any pilot test before proceeding to full implementation. Sound practices also suggest the agency develop meaningful performance measures—generally a mix of outcome, output, and efficiency measures—tied to the overall goals of the project, and that project goals include a mix of intermediate goals to be met at various stages during the implementation phase. That way, the agency can start to show a return on investment in the early stages of implementation. To date, VA has identified several broad metrics generally reflecting outcomes, output, and efficiency—such as veteran survey results, wait times, and inventories—that it plans to use to track and assess process improvements. VA also established separate timeliness measures for the Board and VBA that it will use in its annual performance reports. While these broad metrics and goals are appropriate, they fall short of sound practices for monitoring and assessing process change in several respects. First, VA has not developed a dashboard or balanced scorecard, or otherwise identified how it will closely monitor progress, evaluate its efficiency and effectiveness, and identify trouble spots. For example, although VA has stated that it is developing a dashboard to measure performance under its proposed appeals process, VA has not yet indicated whether, how, and with what frequency it will monitor wait times and inventories under the new versus the current process. As a result, it is not clear how VA will determine whether veterans with appeals pending under the current process are receiving equitable treatment and not experiencing significant delays relative to those under the new process. It is also unclear to what extent VA will systematically monitor staff productivity and IT processing, which may affect its ability to determine whether assumptions are being met and to help pinpoint corrective action (e.g., whether staff need more training, VA's communication and outreach efforts are working as expected, or process reform itself is achieving desired results). Further, VA has not established interim goals or criteria for success to help determine whether initial implementation is achieving intended results. Interim goals and criteria could include specific timeliness improvements for process steps and outcomes, such as the average time for VBA or the Board to reach decisions under the new appeals options; a minimal illustration of such a metric follows.
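The sketch below shows one way such an interim timeliness metric could be computed from appeal records, with decisions grouped by process (old versus new) and by option so that cohorts can be compared side by side. The record layout and all values are hypothetical, for illustration only, and do not reflect VA's systems or data.

# Minimal sketch of an interim timeliness metric: average days from
# filing to decision, grouped by process (old vs. new) and option.
# The record layout and values are hypothetical illustrations.
from collections import defaultdict
from datetime import date

appeals = [  # (process, option, filed, decided)
    ("new", "higher-level review", date(2018, 1, 5), date(2018, 4, 20)),
    ("new", "supplemental claim", date(2018, 2, 1), date(2018, 8, 15)),
    ("new", "board, no hearing", date(2018, 1, 10), date(2019, 1, 30)),
    ("old", "legacy appeal", date(2015, 6, 1), date(2018, 3, 1)),
]

totals = defaultdict(lambda: [0, 0])  # (process, option) -> [days, count]
for process, option, filed, decided in appeals:
    bucket = totals[(process, option)]
    bucket[0] += (decided - filed).days
    bucket[1] += 1

for (process, option), (days, count) in sorted(totals.items()):
    print(f"{process} | {option}: avg {days / count:.0f} days (n={count})")

Tracked at regular intervals, cohort metrics of this kind would show whether veterans in the old process are falling behind those in the new one, which is the comparison identified above as missing from VA's plans.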
If VA pursues pilot testing, such goals or success criteria will help determine whether the new process is sufficiently successful to justify full implementation. Second, although streamlining the current open-ended process was central to VA's business case for its proposed process reform, as noted previously, VA currently lacks sufficient data to assess the extent to which process reform will improve on the open-ended nature of the current process. VA officials said that they plan to work to incorporate capabilities into Caseflow to piece together historical baselines for performance. VA also plans to develop new baseline and historical data on aspects of the appeals process that affect the timeliness of final decisions so that they can be compared to the new process. While these are positive steps, it remains to be determined how or whether VA will be able to measure the extent to which its proposed process—which would allow the veteran to appeal multiple times—is an improvement over the "churning" associated with the old process. Lastly, the new timeliness measures that VA plans to report to Congress and the public lack transparency on whether overall appeals resolution timeliness is improving from the veterans' perspective. In its fiscal year 2015 performance report, VA stopped reporting its average appeals resolution time measure, which included appeals decisions made by both VBA and the Board. VA officials said they considered this measure inadequate because neither VBA nor the Board has full control over improving performance under it. VA officials told us the measure does not appropriately provide insight into the appeals process because of the variety of appeals paths and wait times veterans experience. However, the combined measure would provide a basis for comparing timeliness under the old versus the new process, and would provide historical perspective on changes in timeliness from the point of view of a veteran who may file appeals with both VBA and the Board before his or her case is resolved. VA officials stated the agency will continue to track this measure internally, but will not include it in VA's annual performance reports; instead, they plan to report on VBA and Board timeliness separately. VA also stated that it will not use this measure to evaluate the success of the new process because it considers a timeliness measure covering both VBA and the Board to be inappropriate.

VA has generally planned the implementation of its Caseflow appeals system in a manner consistent with sound planning practices. Working with the U.S. Digital Service at VA (DSVA)—the group tasked with developing Caseflow—VA outlined an approach that has a clear scope and purpose: to better process appeals in a paperless environment and improve automation and productivity. VA's actions consistent with sound IT planning practices include the following:

• Setting goals and objectives: VA plans clearly lay out the need for replacing VACOLS and set forth how Caseflow will address the shortcomings of VA's current IT system. Its plans also lay out a set of broad milestones in terms of the capabilities that will be added to Caseflow in the future and the ultimate retirement of VACOLS.

• Identifying and mitigating potential risks: VA planning documents identify a number of risks (such as staffing shortfalls and technical delays) and strategies to mitigate them.
In addition, VA is developing Caseflow in an agile process, which officials say will allow VA to continually add new capabilities and be responsive to changing agency needs. Further, VA officials told us that rather than replace VACOLS all at once, the various functions in VACOLS will be reproduced and tested in Caseflow iteratively, and each corresponding function in VACOLS will be left intact until there is reasonable assurance that there will be no impact to VA.

• Measuring performance: VA plans to develop metrics for each new component of Caseflow that is implemented. For instance, VA has developed metrics for the two components that were developed in fiscal year 2016—electronic transfer of cases to the Board and a system to electronically access documents from VBMS—which specifically assess the performance and effect of those components. As mentioned earlier, VA also plans to create a Caseflow dashboard that will provide metrics on the effect of IT improvements on the timeliness of the appeals process.

• Identifying organizational roles and responsibilities: VA entered into a memorandum of understanding with DSVA and the Board that outlines priorities and a working relationship for developing Caseflow. In addition, the memorandum states that DSVA requires all initiative partners within VA to have a single point of contact with the authority to make decisions on behalf of their component.

While VA's plans for replacing VACOLS take steps to mitigate risks, they currently do not include consideration of the timing and implications of VA's proposed reform efforts. Federal internal control standards state that program managers, in seeking to achieve program objectives, should define objectives clearly to enable the identification of risks. This includes clearly defining what is to be achieved and the time frames for achievement. Additionally, IT investment best practices stress the need for oversight of a project's progress toward predefined schedule expectations, including mechanisms to correct schedule and performance slippages. Although VA has laid out the broad capabilities it would like to incorporate into Caseflow going forward, VA has not developed a schedule for completing Caseflow. Specifically, VA could not provide us with firm time frames for when different capabilities will be active in Caseflow. Because the Caseflow effort lacks time frames, VA cannot ensure that the system will be completed in time to support the implementation of proposed reforms. Further, VA's lack of time frames for developing Caseflow may increase the risk of additional costs if the system cannot be developed as quickly as anticipated. Sound practices specific to project scheduling state that project planning is the basis for controlling and managing project performance, including managing the relationship between cost and time. In a prior GAO report on VBMS, a system that was also developed in an agile process, we reported that the agency encountered some delays with its initial deployment of key VBMS functions, and that its lack of a schedule made it difficult to hold program managers accountable for meeting time frames and demonstrating progress. In addition, VA has not started planning and determining the changes that would be needed for Caseflow if and when appeals reforms are implemented.
VA staff said that the agile approach they are using allows them to respond quickly to changing needs, and VA Office of Information and Technology officials told us that they will not begin planning for such changes until reform legislation is passed. As stated earlier, sound IT planning practices suggest that implementation plans include specific time frames and approaches needed to implement new systems, as well as consideration of potential risks and mitigation strategies. Given the absence of a schedule for completing Caseflow, VA further risks having an IT system that is not completed in a timely manner or that, even if in place in time, falls short of meeting VA's needs.

With an already large inventory of pending appeals—and expectations of further growth—VA has taken steps to bolster capacity and improve the efficiency and effectiveness of its disability appeals process. Specifically, VA has hired and proposed hiring more staff, is moving forward with plans to upgrade its IT systems, and has proposed bold reforms to streamline its appeals process. In planning and executing these approaches, VA took several positive steps in line with sound planning practices—such as comparing different options for increasing future staffing resources, collaborating with external stakeholders to develop a streamlined process proposal, and outlining a vision for upgrading outdated IT systems. Nonetheless, VA's plans do not account for the significant challenges that remain. Above all, its proposal to implement appeals reform at the Board and across VBA's regional offices is ambitious, and as a result, VA may be exposing itself to unforeseen risks and setbacks that could slow progress toward improving appeals decision timeliness. More specifically, VA has proposed implementing process reform while also hiring more staff and upgrading its IT, which are challenging efforts in their own right. Additionally, VA does not have any plans to pilot test its proposal—a sound and often necessary practice for experiencing, evaluating, and refining significant institutional change on a smaller scale prior to full implementation. At the same time, VA's plans for hiring more staff and upgrading IT lack key details (for example, on how VA will train and find working space for new staff, or a schedule for when and how system changes might be integrated with the proposed streamlined process), exposing VA to risks of delays, inefficiencies, or other setbacks caused by unanticipated needs or misaligned efforts. VA also did not sufficiently apply sensitivity analysis when projecting staffing needs with or without process reform, which could affect the agency's ability to mitigate potential risks if assumptions are not met. Lastly, VA lacks a robust monitoring plan to help assure that unforeseen problems will be quickly and effectively addressed, and has not yet developed a strategy with appropriate interim goals for process reform, and overall goals for appeals process timeliness, to gauge whether the agency's efforts are having the desired result and reflect an improvement over prior practices. Until VA incorporates these sound planning practices, the agency lacks reasonable assurance that its proposed reform will improve the overall efficiency of the appeals process and the timeliness of disability appeals decisions.

To improve VA's ability to successfully implement appeals process reform, Congress should consider requiring that reforms of the VA disability appeals process be subject to a pilot test.
To aid in the development of such a pilot test, Congress could require the Secretary of Veterans Affairs to propose options that would allow the agency the flexibility to test and refine the new process in a cost-effective and efficient manner, while ensuring pre-established interim goals and success criteria are being met prior to full implementation.

To further align efforts to address appeals workload and improve the timeliness of decisions, and to reduce the risk that efforts will not go as planned, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits; the Chairman, Board of Veterans' Appeals; and the Chief Information Officer, as appropriate, to:

1. Ensure development of a timely, detailed workforce plan for recruiting, hiring, and training new hires. In particular, this plan should: (1) include detailed steps and timetables for updating training curriculum (such as preparing decisions in a virtual environment) and ensuring office space (such as telework guidance); and (2) incorporate risk mitigation strategies that consider how the timing of recruitment and training dovetails with uncertain time frames for implementing a new appeals process.

2. Develop a schedule for IT updates that explicitly addresses when and how any process reform will be integrated into new systems and when Caseflow will be ready to support a potential streamlined appeals process at its onset.

3. Conduct additional sensitivity analyses based on the assumptions used in projection models to more accurately estimate future appeals inventories and timeliness. In doing so, consider running additional analyses on how these factors, in conjunction with one another, may affect the timeliness and cost of deciding pending appeals.

4. Develop a more robust plan for closely monitoring implementation of process reform that includes metrics and interim goals to help track progress, evaluate efficiency and effectiveness, and identify trouble spots.

To better understand whether appeals process reform, in conjunction with other efforts, has improved timeliness, we recommend the Secretary of Veterans Affairs direct the Under Secretary for Benefits; the Chairman, Board of Veterans' Appeals; and the Chief Information Officer, as appropriate, to:

5. Develop a strategy for assessing process reform—relative to the current process—that ensures transparency in reporting to Congress and the public on the extent to which VA is improving veterans' experiences with its disability appeals process.

We provided a draft of this report to VA for its review. In written comments, VA disagreed with one of our recommendations and agreed in principle with the other five. We have reproduced VA's comments in appendix I and have incorporated them—as well as technical comments provided by VA—into the report, as appropriate. In its comments, VA agreed with us that improving the efficiency and effectiveness of its appeals process is an ambitious undertaking, and we commend VA for the many steps it has taken, including collaborating with stakeholders to develop the framework for a new process. We agree that obtaining the consensus of internal and external experts—including veterans service organizations—demonstrates important progress. We disagree, however, that such consensus negates the need for more detailed plans and robust risk mitigation strategies.
While it is true that VA has made noteworthy progress developing an implementation plan to guide its efforts, we found the plan lacked important details, such as: how VA will monitor for interim success and trouble spots, including whether the agency has appropriately distributed resources among the new and old processes; how it will mitigate the risk of implementation challenges or setbacks and reduce their negative impact; and how it will measure whether the new process is improving overall appeals resolution timeliness from the veteran's perspective. VA officials also said that VA has extensive experience in organizational change management, but it is not clear how some of the practices VA used in past transformation efforts are applicable to appeals reform, and we are concerned that VA could not provide further information on what these practices include or how they are relevant. We believe implementing all of our recommendations will increase the likelihood that VA's efforts to improve the efficiency and effectiveness of its appeals process will be successful. For the five recommendations that VA concurred with in principle, VA described planned actions to address them and stated that it also considered the actions complete and requested we close the recommendations. However, we believe VA still needs to take actions to address those recommendations, as noted more fully below. VA disagreed with a draft recommendation that it incorporate pilot testing of its proposed appeals process into implementation plans and pursue necessary legislative authority. In its comments, VA noted that the appeals process is broken and that piloting a new process would result in further delays for veterans appealing their disability decisions. VA disagreed with GAO's finding that it had proposed the new process without analyzing feasible alternatives, noting that the agency designed the new process based on the collective experience of internal and external experts, and that these experts reached consensus on a new design that will be beneficial to veterans, the agency, and taxpayers, among others. VA noted that it has carefully assessed risks, identified a number of risk mitigation strategies, modeled a number of different scenarios, and developed a detailed implementation plan. When we reviewed these efforts, however, we found three primary shortcomings that need to be addressed. First, VA did not have the data it needs to fully understand the extent to which the current process has contributed to lengthy appeals time frames, which raises questions about whether the proposed process will address the root cause or causes of untimely appeals decisions. Specifically, VA lacks historical data on the extent to which the introduction of new evidence increases time frames. Second, VA's list of potential risks and risk mitigation strategies did not always include steps for mitigating the identified risks. Third, we found that VA's implementation plans lacked details on how it will carry out key aspects of appeals reform, including how it will monitor the timeliness of appeals decisions under the old process compared with the new one, while also hiring additional staff and integrating changes into the Caseflow IT system, as discussed below. VA also stated in its comments that piloting a new appeals process "would raise constitutional issues and prompt litigation." We acknowledge that changing an adjudicatory process for determinations of benefits may prompt litigation.
However, VA has not clearly articulated why pilot testing as a category is unconstitutional or why pilot testing poses unique constitutional issues. Further, as noted in the report and in VA's comments, VA previously supported H.R. 800 in the 114th Congress, which would have directed VA to conduct an opt-in pilot process in which a veteran could present a limited amount of new evidence and the Board, to the extent practical, would decide cases within 1 year. While GAO did not take a position on that bill, or on its specific approach to pilot testing, changes of this magnitude in such a complex program justify some form of pilot testing to ensure process reform is implemented successfully and ultimately achieves VA's goals. As noted in the report, pilot testing is recognized as a sound planning practice and an important, often necessary approach for ensuring successful implementation when undertaking significant institutional changes. Until VA pilot tests its appeals reforms, it will lack data to properly plan for and overcome the challenges that will likely arise during implementation. For example, VA may encounter difficulties making needed process changes while simultaneously implementing other logistical requirements, such as hiring and training new staff and updating its IT system. By not pilot testing, VA is missing a valuable opportunity to refine its implementation strategy by first seeing how process reform will unfold on a smaller scale. We believe that the potentially negative consequences of delaying full implementation are far outweighed by the benefits that can be realized through piloting. For example, piloting could help avoid delays and expenses caused by the need to re-work the process after full-scale implementation. In light of VA's disagreement with our draft recommendation, we removed the recommendation and now pose a matter for congressional consideration. Specifically, to improve VA's ability to successfully implement appeals process reform, Congress should consider requiring that reforms of the VA disability appeals process be subject to a pilot test. To aid in the development of such a pilot test, Congress could require the Secretary of Veterans Affairs to propose options that would allow the agency the flexibility to test and refine the new process in a cost-effective and efficient manner, while ensuring pre-established interim goals and success criteria are being met prior to full implementation. VA concurred in principle with the draft recommendation that it finalize a detailed workforce plan that includes steps for training, support, and risk mitigation strategies. VA noted that, in addition to implementing a fiscal year 2017 workforce plan to hire additional staff, as discussed in the report, it has recently launched new attorney training and continues to collaborate across the agency to identify space where new staff can be located, among other efforts. We have incorporated these updates into our report, as appropriate. In light of these efforts, and because future steps, such as developing training materials on the new appeals process, are contingent upon appeals reform legislation, VA stated it considers this recommendation complete and requested closure. While we recognize that VA has made progress and that certain actions, such as training on a new process, are contingent upon reform legislation, we disagree that the recommendation should be closed.
As noted in our report, we found that VA's final recruiting, hiring, and training plans lacked important details. For example, VA officials were still updating the training curriculum that supports work conducted in a virtual environment, which is critical for managing space restrictions for new staff. Without a detailed workforce plan in place, VA cannot assess the success of its human capital approach, maximize its investments, or fully mitigate risks. More detailed workforce plans would help VA avoid the risks that staff will not be hired in time, will not be properly trained, or will not have the support necessary to process appeals. Waiting until legislation is enacted magnifies these risks. We believe additional action is needed to meet the intent of this recommendation; we also clarified the recommendation language to state that VA needs a more detailed plan. VA concurred in principle with our recommendation that it develop a schedule for IT updates that lays out when and how any process reforms will be integrated into its Caseflow system. More specifically, VA noted that it will rely on the agile process to develop Caseflow—whereby new functions are continually added to the system as new user needs or policy changes arise—and does not plan to define schedules beyond 6 months. Given that Caseflow development related to the new appeals process is dependent on the enactment of new legislation, VA stated it considers this recommendation to be complete. While it is true that the agile process can help mitigate risks and avoid cost overruns and delays, we do not believe this approach precludes VA from taking additional steps to consider the scope of potential changes required by a new appeals process and from having a broad plan in place to ensure that all aspects of the new process are adequately supported by Caseflow. We believe it is especially important for VA to have specific time frames for completing Caseflow considering the scope of the changes being proposed. Moreover, VA noted that the components of Caseflow developed so far will not need to be significantly changed if appeals reform legislation is enacted, but VA did not provide documentation to support this assertion. In light of these issues, we believe VA has not yet met the intent of this recommendation. VA concurred in principle with our recommendation that it conduct additional sensitivity analyses around the assumptions used in its models. VA noted that sensitivity analyses are valuable and that it has focused its efforts on the risks its staff identified as most likely, such as variations in staffing and productivity and the effect of remands. VA stated it would continue to analyze, update, and refine its modeling, and considers this recommendation to be complete. While we recognize the logic of focusing modeling resources on key variables, VA did not fully examine three of the four hiring surge options it proposed. Moreover, VA did not assess the compound effect that would result from changing multiple assumptions at once. Given the complexity of the proposed changes and the number of variables beyond VA's control, we believe that additional analyses are needed to identify potential risks that may warrant additional mitigation strategies. In addition, if VA goes forward with appeals process reform and begins to collect real-time data, these data could improve modeling accuracy and serve as a valuable management tool.
VA concurred in principle with our recommendation to develop a more robust plan for closely monitoring the implementation of its process reform, including metrics and interim goals to help VA track progress, evaluate efficiency and effectiveness, and pinpoint trouble spots. VA agreed that developing such a plan is valuable for monitoring the implementation of process reform and should include metrics and interim goals. However, VA stated that it considers this recommendation complete, noting that preparing such a detailed plan depends on appeals reform legislation being enacted, and that it will incorporate specific goals and metrics as it moves toward implementation. While we recognize that VA cannot assume to know the exact provisions that may be included in future enacted legislation, nor can it predict when appeals reform might be enacted, we consider having a more robust monitoring plan to be essential to the successful implementation of a new appeals process. Moreover, the absence of such a plan raises questions as to how VA will ensure appropriate resources are devoted to managing appeals under the new versus the old process, or that intended results are achieved as the new process is implemented. VA concurred in principle with our recommendation to develop a strategy to transparently report to Congress and the public on veterans' experiences with the new appeals process. VA noted that it is already developing timeliness goals for three of the four appeal options in the proposed new process, as discussed in our report. VA said it also plans to measure the success of the new process with results from customer satisfaction surveys and is developing a dashboard for internal performance monitoring. VA did not agree that measuring overall appeals resolution timeliness is an appropriate measure and believes tracking time frames for each of the options separately is more appropriate. While we agree that metrics based on the different options could be valuable for VA, the Congress, and the public, we disagree that VA's focus on measuring timeliness by option is in the best interest of the veteran. Because veterans may pursue more than one option under VBA, the Board, or both, we believe that VA's approach does not take into account the veteran's perspective on how long it took to receive a final appeal decision. Metrics from the veteran's overall perspective would complement, not replace, metrics for VBA, the Board, and each option. Further, because VA's approach does not allow VA to compare the new process with the old or to determine whether the new process represents an improvement over the old process, we believe it does not promote transparency in reporting to the Congress and the public.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Benefits, the Chairman of the Board of Veterans' Appeals, and VA's Chief Information Officer. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
In addition to the contact named above, Michele Grgich (Assistant Director), Melissa Jaynes (Analyst-in-Charge), Daniel Concepcion, and Greg Whitney made key contributions to this report. Other key contributors include James Bennett, Mark Bird, David Chrisinger, Clifton Douglas, Alex Galuten, Mitch Karpman, Sheila R. McCoy, Claudine Pauselli, Almeta Spencer, Eric Trout, Walter Vance, and Tom Williams.

For the purpose of evaluating VA's efforts to improve its appeal processing, we identified best practices and other criteria related to staffing, process reform, and IT upgrades from prior GAO products and other publications. These included government-wide internal control standards; key principles for effective strategic workforce planning; business process reengineering (or redesign) best practices; and information technology planning principles. We also reviewed additional guidance on project management.

Schedule Assessment Guide: Best Practices for Project Schedules. GAO-16-89G. Washington, D.C.: December 2015.

Veterans Benefits Management System: Ongoing Development and Implementation Can Be Improved; Goals Are Needed to Promote Increased User Satisfaction. GAO-15-582. Washington, D.C.: September 1, 2015.

Standards for Internal Control in the Federal Government. GAO-14-704G. Washington, D.C.: September 2014.

Human Capital: Strategies to Help Agencies Meet Their Missions in an Era of Highly Constrained Resources. GAO-14-168. Washington, D.C.: May 7, 2014.

Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013.

GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009.

Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 2004.

Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. GAO-04-394G. Washington, D.C.: March 2004.

Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004.

Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.

Human Capital: A Self-Assessment Checklist for Agency Leaders. GAO/OCG-00-14G. Washington, D.C.: September 2000.

The Results Act: An Evaluator's Guide to Assessing Agency Annual Performance Plans. GAO/GGD-10.1.20. Washington, D.C.: April 1998.

Business Process Reengineering Assessment Guide. GAO/AIMD-10.1.15. Washington, D.C.: May 1997.

Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.

Blackburn, Simon, Sarah Ryerson, Leigh Weiss, Sarah Wilson, and Carter Wood. Insights into Organization: How Do I Implement Complex Change at Scale? Dallas, Texas: McKinsey & Company, May 2011.

George, Michael L., David Rowlands, Mark Price, and John Maxey. The Lean Six Sigma Pocket Toolbook: A Quick Reference Guide to Nearly 100 Tools for Improving Process Quality, Speed, and Complexity. New York: McGraw-Hill, 2005.

Office of Management and Budget. Circular No. A-11, Preparation, Submission, and Execution of the Budget. Washington, D.C.: July 1, 2016.

Project Management Institute, Inc. A Guide to the Project Management Body of Knowledge (PMBOK Guide). Newtown Square, PA: 2013.
VA compensates veterans for disabling conditions incurred in or aggravated by military service. Veterans can appeal VBA's decisions on their compensation claims, first to VBA and then to the Board, a separate agency within VA. In fiscal year 2015, more than 427,000 appeals were pending, and veterans waited over 3 years on average for decisions. Of this total, about 81,000 were pending at the Board, and the average cumulative time veterans waited for a decision by the Board in 2015 was almost 5 years. This report examines VA's approaches to address challenges it identified as contributing to lengthy appeals processing times, and the extent to which those approaches are consistent with sound planning practices. GAO focused mainly on the Board, which experienced an increase in workload of about 20 percent from fiscal year 2014 to 2015. GAO reviewed VA's proposed plans and actions and compared them to sound practices relevant to workforce planning and implementing process redesign and new information technology identified in federal guidance, such as internal control standards, and prior GAO work. GAO also analyzed VA's data for fiscal years 2011-2015 (the most recent available) on appeals decision timeliness and workloads; reviewed relevant federal laws, regulations, and planning documents; and interviewed VA officials and veterans service organizations.

The Department of Veterans Affairs (VA) is taking steps to improve the timeliness of its benefit compensation appeals process, in which veterans who are dissatisfied with claims decisions by the Veterans Benefits Administration (VBA) can appeal first to VBA, and then to the Board of Veterans' Appeals (the Board). VA has taken actions related to increasing staff, reforming the process, and updating information technology (IT), which are consistent with relevant sound planning practices. However, gaps in planning exist, thereby reducing the agency's ability to ensure that these actions will improve the timeliness of disability appeals decisions.

Increase staff: VA determined that staff resources have not sufficiently kept pace with increased pending appeals, and concluded that additional staff are needed, particularly at the Board, to improve timeliness and reduce its appeals inventory. The Board received approval to hire more staff in fiscal year 2017, and expects to need an additional hiring surge beginning in fiscal year 2018. As of October 2016, officials estimated that if the agency does not take any action, such as increasing staff in 2018, veterans may have to wait an average of 8.5 years by fiscal year 2026 to have their appeals resolved. Consistent with sound workforce planning practices, VA modeled different options for increasing staff levels to support its conclusion that staff increases in conjunction with process change would reduce the appeals inventory sooner. However, contrary to sound practices, VA often used fixed estimates for key variables in its models—such as staff productivity—rather than a range of estimates (sensitivity analysis) to understand the effect variation in these key variables could have on staffing needs. Also, VA's written workforce plans—which cover recruiting, hiring, and training—did not include detailed steps, time frames, and mitigation strategies consistent with sound workforce planning practices.
For example, while VA has established a center for excellence in hiring to focus on recruitment and hiring, the agency has not finalized training or telework plans or otherwise mitigated space constraints that it encountered for hiring staff in fiscal year 2017. Without a timely, detailed workforce plan, VA risks delays in hiring and preparing staff to help manage workloads as soon as possible.

Reform process: VA determined that new evidence—which a veteran can submit at any point during his or her appeal—inefficiently causes an additional round of reviews, and thus delays appeals decisions, and in response it proposed legislation (not enacted) to streamline the process. Consistent with sound practices for process redesign, VA worked with veterans service organizations (VSO) and other key stakeholders in developing the proposal, and continued to update VSOs about the development of its implementation plans. VA's proposed reform is promising, but there are several gaps in its implementation plans. In particular, VA plans to fully implement appeals process reform at the Board as well as at VBA regional offices across the country while it concurrently manages the existing appeals inventory, a hiring surge, and planned system changes discussed below. However, VA's plans run counter to sound redesign practices that suggest pilot testing the process changes in a more limited fashion before full implementation, in order to manage risks and help ensure successful implementation of significant institutional change. VA officials told GAO that pilot testing—which would require legislation to implement—will prolong a process that is fundamentally broken and delay urgently needed repairs. However, without pilot testing VA may experience challenges and setbacks on a broader scale, which could undermine planned efficiencies and other intended outcomes. In addition, VA has not sufficiently identified how it will monitor progress, evaluate efficiency and effectiveness, identify trouble spots, and otherwise know whether implementation of its proposed process change is on track and meeting expectations. The absence of a robust monitoring plan with success criteria is inconsistent with sound planning practices for redesign and places the agency at risk of not being able to quickly identify and address setbacks. In addition, the timeliness measures that VA currently plans to report to Congress and the public lack transparency because they focus on individual parts of the agency and pieces of the new process rather than overall appeals resolution time from the veterans' perspective. Without a strategy for assessing the proposed new process that includes comprehensive measures, VA, the public, and Congress cannot know the extent to which the proposed process represents an improvement over the old process.

Update technology: VA determined that the computer system supporting its appeals process is outdated, prone to failures, and does not adequately support electronic claims processing. VA proposed a new IT system to reduce delays in appeals to the Board, and to better integrate data from other systems. Consistent with sound practices, VA clearly laid out the scope and purpose of IT upgrades, and identified risks and strategies to mitigate them. However, the agency's plan lacks details for how and when its new system will be implemented, as suggested by sound planning practices for implementing new technology.
Without a detailed schedule, VA risks not having new systems aligned with potential changes in the appeals process when they are implemented.

GAO is making five recommendations to VA and one matter for congressional consideration. VA should: apply sensitivity analyses when projecting staff needs, develop a more timely and detailed workforce plan, develop a robust plan for monitoring process reform, develop a strategy for assessing process reform, and create a schedule for IT improvements that takes into account plans for potential process reform. VA concurred in principle with the five recommendations, but believes it has met the intent of those recommendations and does not need to take additional action. GAO disagrees and—while recognizing VA's ongoing efforts—believes further action is needed on all five recommendations to improve VA's ability to successfully implement reforms, as discussed in the report.

VA disagreed with an additional draft recommendation that it incorporate pilot testing of its proposed appeals process into implementation plans and pursue necessary legislative authority. VA cited its perspective that the appeals process is broken and that piloting a new process would result in further delays to veterans appealing their disability decisions. GAO maintains that the benefits of pilot testing—which provides an opportunity to resolve implementation challenges and make refinements to the process on a smaller scale—outweigh the potentially negative consequences of delaying full implementation. Therefore, GAO removed the recommendation and added a matter for congressional consideration stating that Congress should consider requiring that appeals process reform be subject to a pilot test.
BLM is responsible for managing approximately 261 million acres of public land, over 99 percent of which is located in 12 western states, including Alaska. Approximately 90 percent of this land is open to the public for hardrock mineral exploration and mining. Less than one-tenth of 1 percent of BLM land is affected by existing hardrock operations. Figure 1 shows the BLM land available for hardrock operations.

Hardrock operations consist of three primary stages—exploration, mining, and mineral processing. Operators are responsible for reclaiming the land disturbed by such operations at the earliest economically and technically feasible time, if this land will not be further disturbed. Exploration involves prospecting and other steps to locate mineral deposits. Drilling is the most common exploration tool for identifying the extent, quantity, and quality of minerals within an area. The mining phase includes developing the mining infrastructure (water, power, buildings, and roads) and extracting the minerals. Mineral extraction generally entails drilling, blasting, and hauling ore from pit areas to processing areas. To process minerals, operators prepare the ore by crushing or grinding it and then extract the minerals. The material left after the minerals are extracted—tailings (a combination of fluid and rock particles)—is then disposed of, often in a nearby pile. In addition, some operators use a leaching process to recover microscopic hardrock minerals from heaps of crushed ore by percolating solvent (such as cyanide for gold and sulfuric acid for copper) through the heap of ore. Through this heap-leaching process, the minerals dissolve into the solvent as it runs through the leach heap and into a collection pond. The mineral-laced solution is then taken from the collection pond to the processing facility, where the valuable minerals are separated from the solution for further refinement. Figure 2 provides an overview of the three stages of a hardrock operation using a heap-leaching process.

At the earliest feasible time, operators are required to reclaim BLM land that will not be further disturbed to prevent or control on-site or off-site damage. Reclamation practices vary by type of operation and by applicable federal, state, and local requirements. However, reclamation generally involves resloping pit walls to minimize erosion, removing or stabilizing buildings and other structures to reduce safety risks, removing mining roads to prevent damage from future traffic, and capping and revegetating leach heaps, tailings, and waste rock piles to control erosion and minimize the potential for contamination of groundwater from acid rock drainage and other potential water pollution problems. Addressing potential water pollution problems may involve long-term monitoring and treatment. Reclamation costs for hardrock mining operations vary by type and size of operation. For example, the costs of plugging holes at an exploration site are usually minimal. Conversely, reclamation costs for large mining operations using leaching practices can be in the tens of millions of dollars.

Hardrock operations on BLM land are regulated by federal and state laws. Under the General Mining Act of 1872 (Mining Act), an individual or corporation can establish a claim to any hardrock mineral on public land. Upon recording a mining claim with BLM, the claimant must pay an initial $25 location fee and a $100 maintenance fee annually per claim; the claimant is not required to pay royalties on any hardrock minerals extracted.
The Mining Act was designed to encourage the settlement and development of the West; it was not designed to regulate the associated environmental effects of mining. The number of hardrock operations left abandoned throughout the West after operations ceased is not known but is estimated to be in the hundreds of thousands, many of which pose environmental, health, and safety risks. Until Congress passed the Federal Land Policy and Management Act of 1976 (FLPMA), development of hardrock minerals on public land remained largely unregulated. FLPMA states that the Secretary of the Interior shall take any action necessary to prevent “unnecessary or undue degradation” of public land. Under FLPMA, BLM has developed and revised regulations and issued policies to prevent unnecessary or undue degradation of BLM land from hardrock operations.

BLM issued regulations that took effect in 1981 on how these operations were to be conducted. Named for their location in the Code of Federal Regulations, the “3809” regulations classify surface disturbance generated by hardrock operations into three categories: casual use, notice-level operations, and plan-level operations. For all three operation levels, the operator must prevent unnecessary and undue degradation and complete reclamation at the earliest feasible time. BLM issued revised 3809 regulations, effective in part in January 2001, that, among other things, changed the definition of the types of operations, modified the reclamation requirements, and strengthened the financial assurance requirements. Table 1 describes each type of operation under both the old and new regulations.

While the performance standards for reclamation under the 1981 and 2001 regulations remain the same, the 2001 regulations specifically identified the components involved in reclamation. For standards under both regulations, the operator of a notice- or plan-level operation must reclaim the disturbed land at the earliest time that is economically and technically feasible, except to the extent necessary to preserve evidence of the presence of minerals, by taking reasonable measures to prevent or control on-site and off-site damage to federal land. Reclamation must include the following actions: saving topsoil to be applied after reshaping disturbed areas; taking measures to control erosion, landslides, and water runoff; taking measures to isolate, remove, or control toxic materials; reshaping the area disturbed, applying the topsoil, and revegetating disturbed areas, where reasonably practicable; and rehabilitating fisheries and wildlife habitat.

The 2001 regulations specified that, as applicable, reclamation components include: isolating, controlling, or removing acid-forming and deleterious materials; regrading and reshaping the disturbed land to conform with adjacent landforms, facilitating revegetation, controlling drainage, and minimizing erosion; placing growth medium and establishing self-sustaining vegetation; removing or stabilizing buildings, structures, or other support facilities; plugging drill holes and closing underground workings; and providing for post-mining monitoring, maintenance, or treatment.

The 2001 regulations also significantly strengthened the financial assurance requirements for hardrock mining operations. Under the 1981 regulations, BLM had the option of requiring an operator to obtain a bond or other financial assurances for plan-level hardrock operations and for notice-level operations where the operator had a record of noncompliance.
However, BLM rarely exercised this option. In 1990, BLM instructed its officials to require operators of plan-level operations to provide (1) financial assurances of $1,000 per acre for exploration and $2,000 per acre for mining and (2) financial assurances for all estimated reclamation costs for operations that used leaching chemicals and for operators with a record of noncompliance. Under the 2001 regulations, BLM requires all notice- and plan-level hardrock operators to provide financial assurances that cover all estimated reclamation costs before exploration or mining operations begin. Casual-use operations do not have to provide financial assurances.

The 2001 regulations amended the types of financial assurances that can be used. The 1981 regulations identified three types of acceptable financial assurances—bonds, cash, and negotiable U.S. securities. BLM could also accept evidence of an existing bond pursuant to state law or regulations if BLM determined that the coverage would be equivalent to the amount that would be required by BLM. Some operations used corporate guarantees, which were allowable under state laws and regulations. In contrast, the 2001 regulations prohibit the use of corporate guarantees for new operations and state that corporate guarantees currently in use under an approved BLM and state agreement cannot be increased or transferred. The 2001 regulations specify the following types of financial assurances as acceptable:

- surety bonds that meet the requirements of U.S. Treasury Circular 570;
- cash in an amount equal to the required dollar amount of the financial assurance and maintained in a federal depository account of the U.S. Treasury by BLM;
- irrevocable letters of credit from a bank or other financial institution organized or authorized to transact business in the United States;
- certificates of deposit or savings accounts not in excess of the Federal Deposit Insurance Corporation's maximum insurable amount;
- negotiable U.S., state, and municipal securities or bonds with a market value of at least the required dollar amount of the financial assurance, maintained in a Securities Investors Protection Corporation insured trust account by a licensed securities brokerage firm for the benefit of the Secretary of the Interior;
- investment-grade securities that (1) have a Standard and Poor's rating of AAA or AA, or an equivalent rating from another nationally recognized securities rating service, (2) have a market value of at least the required dollar amount of the financial assurance, and (3) are maintained in a Securities Investors Protection Corporation insured trust account by a licensed securities brokerage firm for the benefit of the Secretary of the Interior;
- certain types of insurance underwritten by a company having an A.M. Best rating of “superior” or an equivalent rating from another nationally recognized insurance rating service;
- evidence of an existing financial assurance under state law or regulations, as long as the financial assurance is held or approved by the state agency for the same operations covered by the notice or plan of operation, has a value equal to the required amount, and is redeemable by BLM (these financial assurances can include any of the above instruments, as well as state bond pools and corporate guarantees that existed on January 20, 2001, under an approved BLM and state agreement); or
- trust funds or other funding mechanisms available to BLM.
The 2001 regulations require operators, when BLM identifies a need for it, to establish a trust fund or other funding mechanism to ensure continuation of long-term treatment to achieve water quality standards and for other long-term, post-mining maintenance requirements. Finally, under the 2001 regulations, all notice- and plan-level operators must submit a reclamation plan and an associated cost estimate with their notice or plan of operation and any modifications or renewals. The financial assurance amount is based on the cost estimate. Furthermore, the associated cost estimate must reflect the cost to BLM as if the agency had to contract with a third party to complete reclamation. In addition, BLM issued guidance in February 2003, which was revised in March 2004, setting forth factors that should be considered in developing cost estimates. For example, estimates should include administrative and other indirect costs. The regulations require BLM to periodically review the estimates to determine if the estimate should be updated to reflect any necessary changes in the cost of reclaiming the operation.

BLM manages and oversees hardrock operations, as well as its other programs, primarily through its headquarters, 12 state offices, and 157 field offices. Within headquarters, the Minerals, Realty, and Resource Protection group is responsible for administering the mining laws and establishing hardrock operations policies. This office is also responsible for evaluating the effectiveness of policy implementation at the state- and field-office levels. For example, in 2004, BLM conducted a survey of 18 of its 157 field offices to determine, among other things, whether operators had obtained financial assurances as required.

Each state office is headed by a state director who reports to the Director of BLM in headquarters. BLM state office delegations of responsibilities for financial assurances vary from state to state. For example, some state offices verify the authenticity of the financial assurance and confirm that financial assurances are payable to BLM. The state offices manage BLM programs and land in geographic areas that generally conform to the boundary of one or more states. The state offices are Alaska, Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Wyoming, and Eastern States. BLM has little land in the East, and the Eastern States office is responsible for all of the states in the East. Figure 3 shows the boundaries of the 12 BLM state offices.

The 157 BLM field offices, which are headed by field managers who report to the state directors, are responsible for implementing several BLM programs and policies, including many aspects of the hardrock mining program. The field offices maintain case files on each hardrock operation in their jurisdiction. Field office staff are generally responsible for, among other things, (1) reviewing notices and plans of operations, along with associated reclamation plans and cost estimates; (2) determining the amount of financial assurances needed to pay reclamation costs; and (3) inspecting hardrock operations for compliance with regulations. In addition, BLM has specialized centers, which are organizationally affiliated with headquarters, to carry out a variety of activities. One of these centers, near Denver, Colorado, administers BLM's LR2000, an automated information system used to collect and store information on BLM land and programs, including hardrock operations.
LR2000 includes several subsystems that contain information on hardrock operations and the financial assurances provided by operators. Specifically, the Case Recordation System contains information on hardrock operations, such as the name and address of the operator; the location, type, and size of the operation; and inspection information. The other subsystem—the Bonding and Surety System—contains information on financial assurances, such as the types and amounts of financial assurances and the names of the providers. BLM state and field offices both enter data into LR2000 and thus are primarily responsible for the data's accuracy and completeness. In most instances, field offices are responsible for entering data about hardrock operations into the Case Recordation System, while BLM state offices are more often responsible for entering data about financial assurances into the Bonding and Surety System.

BLM reported that, as of July 2004, hardrock operators were using 11 types of financial assurances, valued at approximately $837 million, to cover reclamation costs on BLM land in 12 western states. Surety bonds, letters of credit, and corporate guarantees accounted for almost 99 percent of this $837 million. However, these financial assurances may not fully cover all future reclamation costs if operators fail to complete required reclamation. BLM reported that it had approximately 2,500 existing notice- and plan-level hardrock operations as of July 2004 and that some of these operations do not have financial assurances, and some have no, or outdated, reclamation plans and/or cost estimates on which financial assurances should be based. While BLM state office explanations indicated that financial assurances are not yet required for some operations, other explanations indicated that some operations may not be complying with BLM's requirements.

As of July 2004, operators were using 11 different types of financial assurances valued at approximately $837 million to guarantee reclamation costs for BLM land disturbed by hardrock operations, according to our analysis of survey results. Almost 99 percent of the $837 million in financial assurances is in the form of surety bonds, letters of credit, and corporate guarantees. Figure 4 shows the types of financial assurances used, their value, and the percentage of the total value accounted for by each type. BLM reported that all of the current notice- and plan-level hardrock operations on BLM land—2,490 operations—are located in 12 western states. Table 2 shows the states with existing hardrock operations and the types and amounts of financial assurances operators are currently using in each state. The information below describes the types of financial assurances currently being used and BLM state offices' views of the effectiveness of these assurances in minimizing losses to the federal government if the operator does not complete reclamation.

Surety bonds. Surety bonds are a third-party guarantee that an operator purchases from an insurance company. As a third party with possible financial responsibility for reclamation, the insurance company has a strong incentive to monitor the operator's environmental safety record and efforts to fulfill reclamation obligations. If the operator does not complete required reclamation once operations cease, the insurance company has the option of performing the reclamation work or paying the financial assurance value to BLM or the designated state agency for reclamation.
According to industry representatives and experts, insurance companies are amenable to issuing surety bonds for hardrock operations for predictable reclamation activities that will occur in a defined time frame. As table 2 shows, operators in 10 of the 12 states with hardrock operations are using surety bonds. In 7 of these 10 states, BLM state offices rated surety bonds as “effective” or “very effective” for minimizing losses to the federal government; in the other three states, BLM state offices reported that they had no experience (that is, they had not taken steps to obtain funds from the financial assurance provider) in using this type of assurance to minimize losses to the federal government.

Letters of credit. Letters of credit, which hardrock operators typically purchase from a bank or other financial institution, require the institution to pay BLM or the designated state agency the value of the letter of credit if the purchaser does not complete the required reclamation. Depending on the financial condition of the operator, the financial institution may require a deposit or collateral. Letters of credit are used in nine states with hardrock operations. In seven of these states, BLM state offices rated letters of credit as “moderately effective” or “very effective” in minimizing losses to the federal government; in the other two states, the BLM state offices reported that they had no experience in using this type of assurance to minimize losses to the federal government.

Corporate guarantees. Corporate guarantees are promises by operators, sometimes accompanied by a test of financial stability, to pay reclamation costs, but they do not require that funds be set aside to pay such costs. Although BLM prohibits new corporate guarantees in its 2001 regulations, 3 of the 12 states had existing corporate guarantees that were to cover almost one-fourth of the total estimated reclamation costs, as of July 2004. Most of these corporate guarantees—$200 million of the approximately $204 million—are for operations in Nevada. The Nevada BLM state office rated corporate guarantees as “not effective” for minimizing losses to the federal government. Operators in Utah and Wyoming are also using corporate guarantees, although in relatively smaller amounts of $122,000 and $3.4 million, respectively. The Utah BLM state office reported that it has no experience in using this type of financial assurance to minimize losses to the federal government and therefore did not rate the effectiveness of this type of assurance. The Wyoming BLM state office rated corporate guarantees as a “very effective” financial assurance, although the office reported it had no experience with an operation that had this type of financial assurance and failed to reclaim the land.

State bond pools. Operators in two states—Alaska and Nevada—use state bond pools to cover reclamation costs. According to Alaska BLM state office officials, all hardrock operators on BLM land in Alaska participate in the state bond pool. Operators in the Alaska bond pool do not develop individual cost estimates for reclaiming the land disturbed by their operations. The bond pool, administered by the Alaska Department of Natural Resources, had $1 million in reclamation funds as of July 2004. According to Alaska BLM state office officials, if the bond pool funds are not sufficient to cover reclamation costs, the state of Alaska has agreed to cover any additional costs.
The Alaska BLM state office rated the bond pool as “effective” in minimizing financial losses to the federal government. The office also reported that to date no requests or claims have been initiated to use bond pool funds for reclamation because either BLM has successfully negotiated with the operators to have the operations reclaimed, or the operations are pending further action. The Nevada reclamation bond pool—which had about $1.2 million as of July 2004—is open to operators on BLM or private lands. The state's Division of Minerals administers this pool, which was designed to help smaller operations that may have difficulty securing other forms of financial assurances. The Nevada bond pool does not establish the amount of the assurance required for each operation; this is typically done by BLM for operations on BLM land. The maximum bond amount for a participant is $3 million. The Nevada BLM state office rated the state's bond pool as “very effective” in minimizing financial losses but noted that the pool had not been used as of our July 2004 survey. Subsequently, the office told us that the bond pool was used for the first time in late 2004, when BLM requested funds from the pool to reclaim a hardrock operation.

Certificates of deposit and savings accounts. Certificates of deposit and savings accounts can be used to guarantee reclamation costs but must not exceed the maximum amount insured by the Federal Deposit Insurance Corporation. Operators use certificates of deposit in 10 of the 12 states with hardrock operations. BLM state offices in 7 of these 10 states rated these assurances as “effective” or “very effective” in minimizing losses to the federal government. Another state office rated this type of assurance as “moderately effective” and noted that care must be given to ensure that BLM is the beneficiary of the certificate. In the other two states, the BLM state offices reported that they had no experience with this type of assurance in minimizing losses to the federal government. Operators in one state are using savings accounts, and the BLM state office rated savings accounts as “very effective” for minimizing losses to the federal government.

Cash accounts. Operators provide cash to BLM to guarantee reclamation costs, and BLM must deposit and maintain this cash in a federal depository account of the U.S. Treasury. Operators in 10 of the 12 states with hardrock operations use cash accounts. BLM state offices in 8 of these 10 states rated cash as “very effective” for minimizing losses to the federal government. In the other two states, the offices reported that they had no experience with using this type of assurance to minimize losses to the federal government.

Trust funds. The 2001 regulations require operators, when BLM identifies a need for it, to establish a trust fund or other funding mechanism to ensure the continuation of long-term treatment to achieve water quality standards and other long-term, post-mining requirements. Funds are placed in an interest-bearing trust account by an operator, with BLM as the beneficiary. The trust account should accrue sufficient funds to be sustained in perpetuity. The Nevada BLM state office reported one trust fund with just over $1 million and said it did not have sufficient experience to determine the effectiveness of this type of assurance in minimizing losses to the federal government.

Property.
The Montana BLM state office reported that one operator has used $617,000 in property—consisting of 17 mining claims on private land owned by the operator—as a financial assurance. According to BLM state office officials, the operator pledged these properties as collateral. The Montana BLM state office reported that it had no experience using property to minimize losses to the federal government. We note that the revised federal regulations do not identify property as an acceptable type of financial assurance.

Negotiable U.S. securities and bonds. Operators in two states—Arizona and Nevada—use negotiable U.S. securities. The Arizona BLM state office reported it had no experience in using this type of assurance to minimize losses to the federal government. The Nevada BLM state office rated this type of assurance as “effective.” The Idaho BLM state office reported that operators in the state use U.S. bonds to guarantee reclamation costs and that the state has no experience using bonds to minimize losses to the federal government.

Although the $837 million in financial assurances that BLM reported is the most complete information available, we note that this total may not include all financial assurances for hardrock operations on BLM land. Some BLM state offices had difficulty determining the value of financial assurances for hardrock operations in their jurisdictions when designated state agencies hold these assurances. For example, the state offices reported the following:

Washington. The Oregon BLM office did not provide the value of financial assurances for the 139 hardrock operations it identified in Washington state.

California. The information the California BLM office provided may not be complete because some financial assurances may be held by California's 58 county agencies, and the state office did not contact each county agency to complete our survey.

Montana. The Montana BLM office does not track state-held financial assurances for hardrock operations on BLM land. BLM obtained information on these assurances for our survey from the state and reported that this information was not all-inclusive but appeared to be reasonably accurate.

See appendix II for the number of notice- and plan-level hardrock operations and associated financial assurances for each state identified by BLM state offices, as of July 2004.

Existing financial assurances for reclaiming BLM land disturbed by hardrock operations may not fully cover future reclamation costs for the approximately 2,500 hardrock operations that BLM reported if operators do not complete required reclamation. The costs may not be fully covered because BLM reported that some of these operations do not have financial assurances, and some have no, or outdated, reclamation plans and/or cost estimates. BLM's explanations for this lack of coverage indicate that some operators may not be complying with BLM requirements.

As of July 2004, BLM state offices reported that some notice- or plan-level operations in 9 of the 12 states with existing hardrock operations did not have financial assurances. For example, BLM state offices reported that in five states (Arizona, California, Idaho, New Mexico, and Utah) more than 5 percent of both notice- and plan-level operations did not have financial assurances.
All of the operations in two other states—Colorado and Wyoming—had financial assurances, and the Oregon BLM state office reported that all plan-level operations in Washington state had financial assurances, but the office did not know the percentage of notice-level hardrock operations without financial assurances in Washington state. Table 3 shows the number of notice- and plan-level hardrock operations and the percentage of these operations without financial assurances for each of the 12 states with existing hardrock operations.

For the states in which BLM state offices indicated that less than 100 percent of their hardrock operations had financial assurances, we asked them to provide an explanation. While some of the explanations indicated that financial assurances are not yet required for some operations, such as those that are pending BLM acceptance or have not yet begun exploration or mining, others indicated that the operations may not be complying with BLM's requirements. The following explanations provided by BLM state offices for the lack of financial assurances suggest that some operators may not be complying with applicable financial assurance requirements.

Alaska. The operator failed to submit state bond pool fees on time.

California. Some older operations may not have financial assurances.

Idaho. The office could not find records of financial assurance for two plan-level operations.

Nevada. Some operations have been terminated by the state bond pool, operators have gone bankrupt, or operations have been abandoned and the operator cannot be found.

BLM state offices also reported that, as of July 2004, some hardrock operations on BLM land have no or outdated reclamation plans and/or reclamation cost estimates. Specifically, BLM state offices reported that some existing hardrock operations in 9 of the 12 states did not have reclamation plans and/or cost estimates. For example, BLM state offices reported that in three states (Arizona, California, and Utah) both types of operations (notice- and plan-level operations) were missing some reclamation plans and cost estimates. In addition, according to BLM state office officials, all hardrock operators on BLM land in Alaska currently participate in the Alaska bond pool and do not develop cost estimates. All of the operations in two other states—New Mexico and Wyoming—had both reclamation plans and cost estimates, and the Oregon BLM office reported that in Washington state all plan-level operations have reclamation plans and cost estimates, but it did not know the percentage of notice-level hardrock operations without plans and estimates. Table 4 shows the percentage of BLM's notice- and plan-level hardrock operations without reclamation plans and cost estimates, as of July 2004.

For the states in which BLM state offices reported that less than 100 percent of their operations had reclamation plans and/or cost estimates, we asked BLM to provide an explanation. All notice- and plan-level operations are required to have reclamation plans and cost estimates. The following explanations provided by BLM state offices for the lack of reclamation plans and/or cost estimates suggest that some operators may not be complying with financial assurance requirements.

Arizona. Some of the older plan-level operations may still have financial assurances that were calculated on the basis of $2,000 per acre, which was the policy under previous federal regulations, rather than all of the estimated costs of reclamation as the 2001 regulations now require.
Colorado. No reclamation plan was required when some of the notices were submitted.

Idaho. A record of a cost estimate for two plans could not be found.

Oregon. Not all of the notice-level operations have a reclamation plan because of a general backlog in updating reclamation plans, and reclamation cost estimates are still being developed in a few cases.

In addition, three state offices reported that some reclamation plans and cost estimates had not been updated. For example, the California BLM state office reported that some of the older reclamation plans for operations in that state have not been updated because of a workload backlog and staff vacancies. Consequently, these plans and estimates may not provide a sound basis for establishing financial assurances to cover all future reclamation costs.

Like our survey results, the results of the 2004 BLM survey of 18 of its 157 field offices showed that some hardrock mining operations under the jurisdiction of 7 field offices did not have financial assurances that met BLM's requirements in fiscal year 2003. For example, one field office reported that it did not have financial assurances that met BLM's requirements because none of the reclamation cost estimates for plan-level operations included indirect costs. Another field office had a backlog of nearly 80 plan-level operations that had not had their reclamation cost estimates updated because, among other things, the office did not have sufficiently trained staff to review updates. In yet another field office, higher-priority work prevented timely updates of some reclamation cost estimates.

BLM identified 48 hardrock operations on BLM land that had ceased and not been reclaimed by operators since it began requiring financial assurances. BLM reported that the most recent cost estimates for reclamation required by applicable plans and federal regulations for 43 of these operations totaled about $136 million, with no adjustment for inflation; it did not report reclamation cost estimates for the other 5 operations. However, as of July 2004, financial assurances had provided or were guaranteeing $69 million, and federal agencies and others had provided $10.6 million to pay estimated reclamation costs for the 48 operations, leaving $56.4 million of reclamation costs unfunded. In particular, financial assurances were not adequate to pay all estimated costs for required reclamation for 25 of the 48 operations because (1) some operations had no assurances, (2) some operations' assurances were less than the most recent reclamation cost estimates, and (3) some financial assurance providers declared bankruptcy and could not pay. In addition, for about half of the remaining 23 operations, cost estimates may be understated because the cost estimates may not have been updated to reflect inflation or other factors that could increase reclamation costs. Furthermore, the $136 million cost estimate is understated to the extent that BLM did not identify or report information on all hardrock operations that had ceased and not been reclaimed by operators as required. Finally, according to BLM officials, required reclamation had been completed for only 5 of the 48 operations as of July 2004, but they believe it is likely that required reclamation will be completed for 28 of the remaining 43 operations.
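The unfunded amount follows directly from the figures BLM reported; the short sketch below simply restates that arithmetic (the variable names are ours, and the dollar figures come from the survey results described above).

```python
# Restating the reported figures for the 48 ceased operations
# (dollars in millions, as of July 2004).
estimated_cost = 136.0        # most recent estimates, reported for 43 of 48 operations
financial_assurances = 69.0   # provided or guaranteed by financial assurances
other_funding = 10.6          # provided by federal agencies and others

unfunded = estimated_cost - financial_assurances - other_funding
share = unfunded / estimated_cost
print(f"Unfunded reclamation costs: ${unfunded:.1f} million ({share:.0%} of the estimate)")
# Prints: Unfunded reclamation costs: $56.4 million (41% of the estimate)
```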
BLM identified 48 hardrock operations in seven states that had ceased and not been reclaimed by operators, as required by applicable reclamation plans and federal regulations, since it began requiring financial assurances. The number of operations BLM identified in each of the seven states, along with the primary minerals explored, mined, and/or processed, and the operating authority for the 48 operations are shown in table 5. Appendix III, table 14, contains additional information about these operations.

According to BLM officials in each of the seven states, BLM had taken steps to compel operators of most of the 48 operations to reclaim BLM land. For example, it had sent notices of noncompliance (24 operations) and taken administrative, legal, or other actions (19 other operations), such as revoking plans of operations. BLM took no action to compel reclamation of the remaining five operations. However, none of the operators of these 48 operations completed reclamation, primarily because of bankruptcy (30 operations). Appendix III, table 16, details the actions BLM took to compel operators to complete reclamation and the reasons reclamation was not completed.

BLM reported reclamation cost estimates for 43 of the 48 operations that had ceased and not been reclaimed by the operators; it did not report estimates for the other 5 operations—2 in Alaska, 2 in Nevada, and 1 in Arizona. The most recent estimates as of July 2004 indicated that the total reclamation cost for the 43 operations was about $136 million. Almost 99 percent of this estimated cost was associated with operations in Montana and Nevada—primarily for the Zortman and Landusky mining operation in Montana ($85 million) and the Paradise Peak operation ($21.2 million) and MacArthur Mine operation ($17 million) in Nevada. Clearly, the total cost estimate would be higher if the costs for the 5 operations with no estimates were included. The number of hardrock operations for which BLM reported cost estimates and the value of the most recent cost estimates, as of July 2004, for each of the seven states is shown in table 6. Appendix III, table 17, provides the reported estimates for each of the 43 operations.

Financial assurances and funds provided by others were not adequate to pay all of the estimated $136 million needed to complete the required reclamation of the 43 operations for which BLM reported cost estimates. Surety bonds and other types of financial assurances had provided or were guaranteeing $69 million of the estimated costs for required reclamation that BLM reported for these operations, or about 51 percent. According to our analysis of information BLM officials provided in response to our survey, these funds were not adequate to pay all estimated costs for required reclamation for 25 of the 48 operations. Moreover, cost estimates may be understated for 12 of the other 23 operations. In addition, funds provided by federal agencies and others paid only a fraction of the estimated reclamation costs. As a result, at least $56.4 million, or about 41 percent, of the estimated $136 million needed for required reclamation was unfunded, as shown in figure 5. Finally, the $136 million cost estimate for required reclamation is understated to the extent that BLM did not identify or report information on all hardrock operations that had ceased and not been reclaimed, as required.
Operators used a variety of types of financial assurances for 38 operations to pay or guarantee coverage of $74.2 million of the $136 million of estimated costs for required reclamation, as table 7 shows. (The remaining 10 operations had no financial assurances.) Operators used surety bonds, a trust fund, and corporate guarantees to guarantee almost 97 percent of these costs, with the rest guaranteed by state bond pools, letters of credit, certificates of deposit, cash, and a construction bond provided by an operator. However, as of July 2004, financial assurances had provided or were guaranteeing only $69 million, or almost 51 percent, of the reclamation costs. This amount decreased because $4.2 million in corporate guarantees had lost all their value when the operator that guaranteed the reclamation costs declared bankruptcy and had no funds to pay such costs, and $949,350 was not available from a surety bond because the financially troubled financial assurance provider paid for reclamation instead of relinquishing the bond. See appendix III, table 18, for the types of financial assurances used for each hardrock operation.

These 38 financial assurances provided or guaranteed funds for only about half of the estimated costs for required reclamation for the 48 hardrock operations. Specifically, these financial assurances were not adequate for 25 of the 48 operations because (1) operators did not provide financial assurances for 10 hardrock operations, (2) the financial assurances that were provided were less than the most recent cost estimates for 13 operations, and/or (3) the financial assurance providers declared bankruptcy and did not have the funds to pay all reclamation costs for two other operations. (Also, 2 of the 13 operations whose financial assurances were less than the most recent cost estimates went bankrupt.) Table 8 shows the reasons financial assurances were not adequate and the associated funding differential. Table 8 also shows that most of the difference between the value of the estimated reclamation costs and the value of the financial assurances occurred because the financial assurances were less than the most recent cost estimate.

As table 8 shows, 10 hardrock operations had no financial assurances. These operations were located in Washington (2), Arizona (4), and Nevada (4). The most recent reclamation cost estimates for 9 of these 10 operations indicated that slightly over $2 million in reclamation costs was unfunded; BLM reported no cost estimate for the other operation. BLM officials provided the following explanations for why the 10 operations did not have the required financial assurances:

Two operations in Washington. An official in Oregon's BLM state office, which manages BLM programs in Oregon and Washington, said that two operations in Washington did not have financial assurances, probably because the responsible BLM field office did not have adequate staff to enforce compliance with this requirement. The official also said that financial assurance training had been a problem and that staff turnover in one field office meant that financial assurances were overlooked for a period of time.

Four operations in Arizona. According to BLM state office officials, the operators of two operations did not provide financial assurances, even though BLM told them that financial assurances were required.
According to an official in the BLM state office, the heavy workloads associated with other BLM programs dissuaded staff from taking enforcement actions that could involve time-consuming activities, such as obtaining court orders. Furthermore, the official said that case files indicated the third operation had financial assurances sometime during the 1990s, but information on the type and amount of financial assurances after it ceased could not be found. No reason was given for the fourth operation.

Four operations in Nevada. According to BLM state office officials, operators of three operations did not provide financial assurances, even though BLM notified the operators that financial assurances were required. At one of these operations, for example, BLM's field office issued a noncompliance order that, after the operator appealed it, was upheld by the BLM state office. BLM is currently working with the state of Nevada to reclaim this operation. BLM state office officials said that the operator of another operation, who eventually went bankrupt, was never able to provide a suitable financial assurance instrument. Regarding the fourth operation—Relief Canyon—officials in BLM's responsible field office told us that the operator refused to provide financial assurances despite the field office's enforcement steps. The field office issued a noncompliance order and took other enforcement actions, such as revoking the operator's plan of operation.

The Relief Canyon gold mine is located in north-central Nevada on about 344 acres, including 295 acres of BLM land. According to BLM officials, the mine was being reclaimed when a new operator purchased it in 1995 and, at that time, the agency advised the new operator of the need for financial assurances for all required reclamation—including past and future disturbances. However, the operator never obtained the financial assurances. According to BLM, the mine's plan of operation was last updated in October 1996, and before the operation ceased, the operator estimated reclamation costs at about $889,000. BLM reported that, as of July 2004, 26 to 50 percent of the operation had been reclaimed. BLM officials told us that they had revoked the mine's plan of operation, operations had ceased, and the operator should complete reclamation, but the operator has appealed this revocation to Interior's Board of Land Appeals. The operator contends that he plans to either begin mining operations when he gets the funds or sell the operation. When we visited the operation in September 2004, we did not see any signs of ongoing mining activity and observed that buildings, collection pond liners, the security fence, and other structural facilities needed repair. As of June 2005, BLM was awaiting the board's decision.

As table 8 also shows, 13 operations had financial assurances that were less than the most recent cost estimates. These operations were located in Alaska (1), California (1), Montana (1), and Nevada (10). The most recent cost estimate for these 13 operations was $128.19 million, and the value of the associated financial assurances was $64.45 million, leaving $63.74 million of the estimated reclamation costs with no financial assurance coverage. Table 9 shows the most recent cost estimates, compared with the value of financial assurances, for each of the 13 operations. Three mining operations—Zortman and Landusky, MacArthur Mine, and Paradise Peak—accounted for about 95 percent of the amount by which the cost estimates exceeded the financial assurances.
For these 13 hardrock operations, we identified several reasons why financial assurances were less than the most recent reclamation cost estimate. In particular:

Estimates at the time operations ceased for 6 of the 13 operations did not consider all costs. BLM reported that some estimates excluded BLM administrative or indirect costs, interim maintenance costs, long-term maintenance and monitoring costs, costs for inflation, and/or other costs. For example, estimates for five operations did not include sufficient funds to cover BLM administrative or indirect costs, which can be high, especially if BLM gets involved with bankruptcy procedures. In its guidance on preparing cost estimates, BLM states that estimates should include (1) costs for contract administration, which should be between 6 and 10 percent of estimated operations and maintenance costs, depending on the size of the operation, and (2) indirect costs, which should be 21 percent of the contract administration costs. Under this guidance, for example, an operation with $1 million in estimated operations and maintenance costs would include $60,000 to $100,000 for contract administration and an additional $12,600 to $21,000 for indirect costs.

One operator intentionally understated reclamation costs for an operation to minimize the amount of financial assurances required, according to BLM field office officials in Nevada. They said, for example, that the operator calculated the estimate as if very large equipment were going to be used, which would reduce costs; however, the operator did not have such equipment available in the state. The field office officials said that the BLM staff who reviewed the cost estimate were inexperienced and did not detect the understatement.

Reclamation plans and cost estimates sometimes were not updated to reflect all reclamation costs when the scope of the plan of operations changed and, as a result, the reclamation requirements changed. For example, BLM reported that the amount of financial assurances for the Zortman and Landusky mining operation in Montana was significantly less than the cost estimate prepared after the operations ceased. The difference in costs was due in part to the failure to update the reclamation plan to address acid rock drainage found during an inspection in the early 1990s, despite efforts by the operator to update the plan. Specifically, the most recent cost estimate for water treatment was greater than the estimate prepared before operations ceased. In addition, the cost estimate increased because the revised reclamation plan required more extensive work on the heap-leach pad than in the earlier plan. Approval of the plan was delayed until 2002 by the review process and litigation over the effects of the proposed changes, and by that time the operator had declared bankruptcy. According to the Montana Department of Environmental Quality, which jointly manages the hardrock operation with BLM, the value of the financial assurances increased during this period. However, the most recent reclamation cost estimate was still greater than the associated financial assurances. An estimate of $85.2 million for reclamation costs was prepared after operations ceased and addressed water contamination and other reclamation activities, such as backfilling, regrading, and revegetating. This estimate included $36.3 million for earthworks, $22 million for water treatment through 2017, and $26.9 million for long-term water monitoring and treatment, according to BLM field office officials. This estimate was $27.4 million more than the $57.8 million in financial assurances provided for the reclamation.
The financial assurances consisted of $29.6 million in surety bonds for earthworks, a $2 million construction assurance bond for water treatment facilities, $13.9 million in surety bonds for water treatment through 2017, and $12.3 million in a trust fund for long-term water treatment and monitoring. Part of the funding shortfall—about $8.7 million—was covered with funds from other sources.

For four operations in Nevada, as table 8 shows, financial assurances were not adequate because financial assurance providers went bankrupt and could not pay all the reclamation costs they guaranteed. For three of these operations—Paradise Peak, County Line, and MacArthur Mine—an operator used corporate guarantees totaling $4.2 million to guarantee part of the estimated reclamation costs. However, these corporate guarantees lost all their value when the operator went bankrupt. Reclamation costs for the fourth operation were guaranteed with a surety bond underwritten by a company that went bankrupt and spent $850,650 for partial reclamation of the operation instead of relinquishing the $1.8 million surety bond. In particular:

Paradise Peak, a mining operation in central Nevada, used heap leaching to extract gold from ore. When the operation ceased, it covered almost 1,000 acres, about half of which was on BLM land. The plan of operation was last updated in May 1996, and in November 1995, the operator estimated that reclamation costs would be $5,462,000. The operator, Arimetco Inc., provided financial assurances totaling $4,625,000—$1,157,000 in a surety bond and $3,468,000 in a corporate guarantee that lost all of its value when Arimetco went bankrupt. As of July 2004, the surety bond company had relinquished the $1,157,000, but none of the funds had been spent. BLM reported that estimated reclamation costs were $21,157,000—$20 million more than the funds the surety bond company relinquished. This estimated cost was significantly more than the original estimate, according to BLM state office officials, because the original estimate did not include all costs that it should have, such as costs for reclaiming collection ponds, and because the cost estimate was not updated to reflect changes in the reclamation plan. BLM reported that no reclamation had been done as of July 2004, but it was very likely that reclamation would be completed because a portion of the needed funding was obtained through bankruptcy procedures and BLM was working with the operator to perform reclamation.

County Line Project, located on 130 acres of BLM land in western Nevada, used heap leaching to extract gold from ore. The plan of operation was last updated in January 1992, when the operator estimated that reclamation costs would be about $837,000. BLM reported no more recent reclamation cost estimates. Arimetco Inc., the operator, provided $838,000 in financial assurances—$210,000 in surety bonds and $628,000 in a corporate guarantee that lost all of its value after Arimetco went bankrupt. As of July 2004, the surety bond company had relinquished the $210,000, but none of the funds had been spent. BLM reported that, as of July 2004, between 26 percent and 50 percent of the operation had been reclaimed. BLM also reported that it was very unlikely that reclamation would ever be completed because it was unlikely that the operator would remain viable after bankruptcy.

The MacArthur Mine covers about 550 acres, over three-quarters of which are on BLM land. The MacArthur Mine was purchased by Arimetco in 1988.
This copper mine consisted of a pit, waste dump, and roads used to haul ore from the pit to three heap-leach pads that Arimetco constructed on the nearby Yerington Mine, which was also on BLM land, to extract copper from the MacArthur ore. BLM reported that Arimetco began operating the MacArthur Mine in 1992 and ceased operations in 1997, after it filed for bankruptcy. BLM also reported that the plan of operation was last updated in 1995 and that Arimetco had no reclamation cost estimate before operations ceased. Further, BLM provided documents that showed the MacArthur reclamation plan covered not only the MacArthur land but also the heap-leach pads at the Yerington Mine. Although Arimetco had no cost estimate, it did have $184,300 in financial assurances—$47,000 in a surety bond and $137,300 in a corporate guarantee that had lost all of its value when Arimetco went bankrupt. BLM reported that, as of July 2004, the $47,000 in surety bond funds had been relinquished but not spent. BLM also reported that estimated reclamation costs would be $17,047,000—$17 million more than the funds relinquished by the surety bond company. This estimate, according to an official in a BLM Nevada field office, was prepared by the state of Nevada for bankruptcy procedures. BLM reported that, as of July 2004, no reclamation of the MacArthur operation had been undertaken or completed and that it was very unlikely reclamation of this operation would occur. However, in March 2005, the BLM official told us that the Yerington Mine, including the leach heaps built and used by Arimetco for the MacArthur operation, would be cleaned up under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended (CERCLA). CERCLA governs cleanup of severely contaminated hazardous waste sites.

The Olinghouse Mine operation, an exploration and mining operation in northwest Nevada, used heap leaching to extract gold from ore on 502 acres, of which 447 acres were BLM land. The plan of operation was last updated in September 2002, and the operator estimated that reclamation costs would be about $851,000. BLM has not reported any more recent cost estimates. Alta Gold Company, the operator of the Olinghouse operation and eight other hardrock operations in Nevada, provided financial assurances to guarantee reclamation of all nine operations through a statewide surety bond underwritten by the Frontier Insurance Company (Frontier). In April 1999, Alta Gold Company filed for bankruptcy, and BLM gave Frontier the option of paying or performing reclamation. Subsequently, the insurance company filed for bankruptcy and was put into "rehabilitation"—a term for bankruptcy with the intent of making the company solvent. In October 2001, Frontier offered to reclaim the operation to a "satisfactory level." According to BLM, its options were to (1) wait on the bankruptcy court, with no guarantee of obtaining funds, or (2) find an alternative solution to reclaim most of the land. BLM entered into an agreement with Frontier for it to perform reclamation using contractors, with BLM oversight. Frontier completed the agreed-upon reclamation by February 2003, and in December 2003, BLM released the company from future financial obligations for this operation. Frontier performed the reclamation for $850,650, which was significantly less than the $1.8 million surety bond that it would have relinquished if Frontier had not performed the reclamation.
BLM state and field office officials told us that this solution was satisfactory to all parties, even though not all reclamation required by the reclamation plan was completed. BLM reported that, as of July 2004, 86 to 95 percent of the reclamation had been completed, but it was very unlikely that the remaining reclamation would ever be completed. For example, BLM reported that not all exploration roads were reclaimed.

Financial assurances may not be adequate to pay all costs for required reclamation for 12 of the other 23 operations—11 operations where financial assurances were equal to the associated cost estimates and 1 where the financial assurance was greater than the associated cost estimate. The financial assurances may not be adequate because the cost estimates on which they were based were prepared before operations ceased—in some cases, as long as a decade ago—and likely do not reflect inflation or other factors that would cause reclamation costs to increase. Table 10 shows the value of the cost estimate prepared before the operations ceased and the number of months elapsed between that time and July 2004, when our surveys were completed. Because reclamation costs can be influenced by many factors, we did not attempt to project the amount that the cost estimates prepared before operations ceased were likely to be less than the amount currently needed to complete reclamation. However, BLM's past experience with reclamation costs indicates that cost estimates prepared after operations ceased likely will be higher than cost estimates prepared before operations ceased. Specifically, BLM updated cost estimates for 16 of the 43 operations for which cost estimates had been prepared before operations ceased, and those updated estimates were the same for 2, lower for 2, and higher for 12 operations. The increases in BLM's 12 higher estimates totaled about $35.5 million, or about a 47 percent increase over the estimates before operations ceased, and ranged from $690 to $16.7 million per hardrock operation, while the decreases in BLM's 2 lower estimates totaled $10,497, or about a 33 percent decrease, and were $6,000 and $4,497 for the two hardrock operations.

As of July 2004, BLM reported that federal agencies and others had provided about $10.6 million to help reclaim 11 operations. These funds accounted for about 8 percent of the estimated $136 million needed to pay for required reclamation for operations identified by BLM as ceased and not reclaimed by operators. The sources and amounts of funds provided by others are shown in figure 8. Appendix III, table 19, shows the other sources of funds for the 48 operations. BLM headquarters provided over $6.7 million to reclaim 10 operations. Nearly all of this amount—$5,594,500—was for the Zortman and Landusky mining operation in Montana. Officials in Montana's Lewistown field office told us that most of these funds came from BLM's Abandoned Mine Land Program and were used to remove leach pads and tailings, backfill pits, and treat water. BLM headquarters officials told us that some of the funds used to reclaim the 10 operations were special funds that became available on a one-time basis as the result of a GAO report. In March 2001, we reported that BLM had improperly used Mining Law Administration Program funds for purposes other than intended by that program and recommended that BLM correct the improper charges. BLM made the corrections and, according to BLM headquarters officials, used some of the funds for reclamation. The U.S.
Army Corps of Engineers (the Corps) provided about $0.8 million to reclaim two operations through its Restoration of Abandoned Mine Sites (RAMS) program, according to BLM. The RAMS program, created in 1999, allows the Secretary of the Army to provide assistance to federal and nonfederal entities for projects to address water quality problems caused by drainage and related activities from inactive and abandoned noncoal mines, such as hardrock operations. Specifically, BLM reported that the Corps provided $171,000 to reclaim the Easy Jr Mine located near Ely, Nevada. These funds were used for a site characterization study and for construction to close the operation, with the primary goal of recontouring and reclaiming a heap-leach pad. In addition, the Corps provided $600,000 to reclaim the Golden Butte Mine, which is also located near Ely, Nevada. This project included collecting and analyzing water data, characterizing the leach pad, and developing a closure plan. The Corps also partnered with BLM through the RAMS program on another operation that had ceased and not been reclaimed by the operator—the Elder Creek operation located near Battle Mountain, Nevada. BLM told us that, as of July 2004, the Corps had provided all of the funds to develop the engineering closure design for this project, but BLM did not identify the amount of funds provided.

Funds to reclaim the Zortman and Landusky mining operation also were provided from other sources, according to BLM. Through a bankruptcy procedure, the bankrupt operator provided $1,050,000 to help reclaim the operation. The Environmental Protection Agency provided $340,000 in grant funds, primarily to prepare a supplemental environmental impact statement. Finally, the Montana Department of Environmental Quality provided $1,697,000 for reclamation activities, such as studies, sampling, tailings removal, water treatment, and monitoring. Photographs in the original report show the status of reclamation at the Zortman and Landusky mining operation in 1993 and 2004.

Description of Zortman and Landusky Mine

The Zortman and Landusky Mine is located in north-central Montana on about 1,200 acres, half of which are on BLM land. The operation, originally permitted in the 1970s, was the first large open-pit gold mine to use heap leaching in the United States. BLM reported that the operation began under a BLM-approved plan of operation in 1981 and ceased in 1999 after Pegasus Gold, the parent company, went bankrupt. BLM reported that, as of July 2004, over 85 percent of the required reclamation had been done and that complete reclamation is very likely.

The $136 million estimate of costs for required reclamation for hardrock operations that had ceased and not been reclaimed by the operators as required is understated to the extent that BLM did not identify or report information on all such operations. For example, officials in Oregon's BLM state office estimated that 20 notice-level operations in Washington state met these criteria, but neither the Oregon BLM state office nor its field offices completed our surveys for any of these operations. State office officials did not explain why surveys had not been completed for these notice-level operations. Clearly, the $136 million estimate would be higher if BLM's state or field offices had reported this information. Furthermore, some other BLM offices had difficulty identifying operations that met our criteria and may not have identified all such operations.
For example, Nevada’s BLM state office completed additional hardrock operation surveys after we questioned whether they had identified all the operations that met the criteria. For more detailed information on the difficulties in identifying hardrock operations that met our criteria, see our scope and methodology in appendix I. BLM reported that, as of July 2004, required reclamation had been completed for 5 of the 48 hardrock operations on BLM land that had ceased and not been reclaimed by operators since it began requiring financial assurances, and it expects to complete reclamation for most of the remaining operations. BLM reported that the reclamation status was in various stages or unknown for the 43 operations that had not completed reclamation. BLM officials’ views on the likelihood of completing required reclamation for these operations varied, but they believed that 28 of the 43 operations are likely to be reclaimed, as shown in table 11. Appendix III, table 19, shows the status and likelihood of completing reclamation for the 48 operations. Required reclamation of the five operations that were fully completed was accomplished with funds from several sources. For three of the five operations, financial assurances were sufficient to cover the costs to complete reclamation, including one for which the operator did some reclamation and negotiated with BLM to have BLM do the remaining reclamation. For the other two operations, BLM paid at least part of the reclamation costs. Specifically, BLM spent $92,000 to reclaim one operation that had no financial assurances, and spent $15,000 to reclaim another operation whose financial assurance was less than the most recent reclamation cost estimate. In the latter case, the operator agreed to abandon the claim if BLM did the reclamation; the operation was in a wild and scenic river canyon in California. BLM officials generally believed that required reclamation would be completed for most of the 43 operations that had not been reclaimed by the operators as of July 2004. They reported that required reclamation was somewhat or very likely for 28, or almost two-thirds of the 43 operations. Some BLM officials believed reclamation would be completed because funds were available from financial assurances or other sources. For example, BLM reported that completion was very likely for the Zortman and Landusky mining operation in Montana, which was between 86 and 95 percent reclaimed as of July 2004, partly because funds for earthwork were available and work was under way. At the same time, BLM noted that more than $18 million in additional funds would be needed to maintain water treatment at the operation in perpetuity. In other cases, officials believed that operations may be taken over by new operators, or reopened by the existing operators, who will ultimately complete reclamation of the operations. For example, BLM reported that completing reclamation of an operation in Alaska that was less than 50 percent reclaimed was very likely because another operator agreed to reclaim the area in conjunction with taking over the operation from the bankrupt operator. Conversely, BLM reported that completing required reclamation was somewhat or very unlikely for nine operations, most of which had less than 50 percent of required reclamation completed as of July 2004. BLM said that the operators of several of these operations could not do the required reclamation, usually because they lacked funds. 
BLM’s LR2000 is not reliable and sufficient for managing financial assurances to cover reclamation costs for BLM land disturbed by hardrock operations because staff do not always update information, and LR2000 is not currently designed to track certain critical information. Specifically, staff have not entered information on every hardrock operation and, for those hardrock operations included in LR2000, information is not always current. In addition, the system does not track some information on hardrock operations and their associated financial assurances, which we believe is critical for effectively managing financial assurances. This information includes the basic status of operations, some types of allowable financial assurances, and state- and county-held financial assurances. Given these limitations, it is not surprising that BLM’s reliance on LR2000 to manage financial assurances is mixed. In part to compensate for LR2000 limitations, some BLM offices use informal record-keeping systems to help manage financial assurances. BLM has taken some steps and identified others to improve LR2000 for managing financial assurances for hardrock operations. Information in LR2000 is not reliable and sufficient because staff do not always update the information, and the system is not currently designed to track critical information. Specifically, some hardrock operations are not in LR2000: In Nevada—the state with the largest number of hardrock operations— LR2000 does not contain information on all hardrock operations that a state BLM official’s informal records show. When Nevada officials queried LR2000 during our visit, the system showed 248 plan-level operations in the state. However, according to a senior Nevada BLM state office official who keeps informal records of the hardrock operations, some of the operations are not in LR2000; his records contain 300 plan-level operations. According to BLM state and field office officials, some operations are not in the system because some data were lost during the conversion from an earlier information system to LR2000 in 1999. Officials in one Nevada field office told us that they have not had time to reenter some of the lost data but plan to do so in the future. Alaska—with 240 hardrock operations—does not use LR2000 to record information on these operations. Instead, BLM state office officials told us that they use the Alaska Land Information System (ALIS) because LR2000 cannot be used to meet the office’s other needs. That is, LR2000 cannot process the conveyance of land from the federal government to the state of Alaska and to Native villages and corporations. In addition, the costs and staff time associated with incorporating the information in ALIS into LR2000 contributed to BLM’s decision to continue to use ALIS. In BLM’s March 2004 assessment of 18 of its 157 field offices’ compliance with current hardrock regulations, 3 of the 18 offices reported that all hardrock operations were not recorded in LR2000. For example, one of these field offices reported that its office had only recently received training on LR2000. Furthermore, for some operations that are in LR2000, information is not up to date. For example, in responding to our survey regarding the number of existing notice- and plan-level hardrock operations with financial assurances, the New Mexico state office explained that some of its existing operations without financial assurances may be inactive and should be closed in LR2000. 
BLM officials are to open a case in LR2000 when a notice or plan of operation is received, and they are to close the case in LR2000 when operations have ceased and reclamation is complete. However, BLM state and field office officials reported that data entry is not always timely. For example, some field office officials told us that they do not enter data until the winter, when it is more difficult to work in the field and they spend more time in the office. In addition, in BLM's March 2004 assessment, 11 of the 18 field offices reported that the results of compliance inspections were not entered in a timely manner. These inspections are critical to ensuring that all hardrock operations are meeting federal requirements. The field offices explained that this problem occurred because of other office priorities, lack of staff trained to use LR2000, and staff workload. In addition, the BLM officials who administer LR2000 said that the quality of the data in LR2000 varied, in part because field offices placed different emphasis on data entry.

LR2000 also does not track some critical information on hardrock operations and their associated financial assurances. In particular, LR2000 does not track the following:

The status of hardrock operations, such as whether the operation is ongoing or has ceased and should be reclaimed. LR2000 uses the term "open" to identify both operations that are ongoing and operations that have ceased and should be reclaimed. It uses the term "closed" to refer to those operations where reclamation has been completed. While field staff should know whether an operation is ongoing or has ceased because of first-hand knowledge or access to case files in their offices, BLM headquarters and state office officials do not have ready access to this basic information. For example, in response to our survey regarding the number of ongoing hardrock operations with financial assurances, the Arizona state office reported that only 32 of 55 plan-level operations had financial assurances. The office also reported that it was reviewing its case files to determine the status of the operations without financial assurances, such as whether any of these operations have ceased, been reclaimed, and should have been closed in LR2000. Also, in response to our survey, the California state office reported that LR2000 showed 639 "open" hardrock operations in the state, but officials estimated that only 303 of these operations were actually ongoing. Furthermore, for 9 of the 13 states with hardrock operations, BLM state offices reported that they did not use LR2000 or other means to track the status of reclamation where operators had failed to do required reclamation.

Information on all types of financial assurances allowed under federal regulations. LR2000 has data entry fields for five of the allowed types of assurances—surety bonds, letters of credit, certificates of deposit, cash, and treasury securities—as well as a "personal" field. However, some of the missing types of financial assurances, such as corporate guarantees, bond pools, and trust funds, are being used to guarantee reclamation costs. For example, corporate guarantees covered $204 million in reclamation costs, or 24 percent of the total value of financial assurances that BLM reported as of July 2004. To overcome this system limitation, the Nevada BLM state office uses the "personal" field to track information on both corporate guarantees and operations covered by the state bond pool.
Without the capability to track all types of financial assurances, BLM cannot identify the total amount of reclamation costs that each type of financial assurance guarantees.

Information on financial assurances held by state or county agencies. Several BLM state offices reported that some financial assurances for hardrock operations on BLM land are held by state or county agencies and are not included in LR2000. For example, the Montana BLM state office contacted the Montana Department of Environmental Quality to obtain information on the types and amounts of financial assurances. The Idaho office reported that it relies on its own informal records to track state-held financial assurances and used those records to provide the information. In California, where county agencies can hold the financial assurances for hardrock operations on BLM land, the office reported that it does not have information on all financial assurances held by the counties and did not contact the counties to obtain it.

In commenting on a draft of this report, Interior stated that BLM issued an instruction memorandum in April 2005 to provide guidance and direction on data standards for LR2000. The instruction memorandum states that BLM data entry staff must use a specific action code when financial assurances are filed and instructs the staff to use that action code when BLM receives documentation that a financial assurance is held by another agency.

Given LR2000's limitations, it is not surprising that BLM's reliance on the system to manage financial assurances is mixed. At the headquarters level, BLM does not always rely on information in LR2000. Rather, to obtain information needed on hardrock operations and associated financial assurances, BLM headquarters officials must contact their state and field offices. For example, because the information was not in LR2000, in March 2003, BLM headquarters requested information from its state and field offices on the number of notice-level operations that (1) did not meet the required deadline to request an extension, (2) requested an extension, and (3) were extended under the 2001 regulations. BLM needed this information to determine if all notice-level operations were in compliance with current regulations. Furthermore, BLM headquarters does not always rely on LR2000 to answer questions on financial assurances at a national or state level from the Congress, the public, and other interested parties. For example, BLM headquarters could not provide information on hardrock operations and financial assurances in response to our request for such information and told us we would have to get this information from the state and field offices. State offices told us that some of the critical information, such as the status of the hardrock operation and reclamation cost estimates needed to determine the adequacy of the financial assurances, is in paper case files located in the field offices. Others also have found that BLM does not systematically use LR2000 to track information on hardrock operations. For example, in its 1999 report on hardrock mining, the National Research Council found no systematic, easily available compilation and analysis of information about hardrock operations on BLM land. At the state and field office levels, BLM's reliance on LR2000 for managing financial assurances varies.
BLM state offices reported that in four states with hardrock operations LR2000 was relied on to little or no extent; in eight states, to a moderate or some extent; and in one state—Nevada—to a very great extent. Of the four BLM state offices reporting little or no reliance on LR2000, two explained that there is no BLM state office oversight of the program; one defers program responsibility to the state agency; and one has few hardrock operations. The lack of reliance on LR2000 for managing financial assurances is due in part to state office concerns about the reliability and adequacy of information in the system. For example, as discussed earlier, some BLM state offices do not use LR2000 because it does not contain information on financial assurances held by state or county agencies. States' views on the reliability and adequacy of LR2000 are shown in table 12.

Some BLM offices reported using informal record-keeping systems or records to track information on hardrock operations and associated financial assurances within their jurisdiction. For example:

In Alaska, the field offices use an Alaska state agency database to obtain information on the number of existing notice- and plan-level hardrock operations.

The New Mexico BLM state office has an informal database, listing all financial assurances filed and approved, to track financial assurance information in the state.

The Nevada BLM state office uses field offices' logs and the Nevada state database to track information on hardrock operations.

The Idaho BLM state office maintains informal records on state-held financial assurances.

According to agency officials, BLM has taken some steps to improve the information in LR2000 and is planning others. Specifically, BLM reported the following actions:

Developing revised data standards for LR2000, which have not been updated since the 1990s. These standards set forth the type and format of information that must be entered into LR2000. Officials are considering expanding information on the status of hardrock operations in the system to show whether operations have been abandoned and the type of activity associated with the operation, such as mining and road construction. In commenting on a draft of this report, Interior stated that BLM's April 2005 instruction memorandum provided guidance on action codes to track the length of time between submission and approval of hardrock plans of operation.

Planning to add an additional report to LR2000 so that BLM officials can directly compare information on hardrock operations with their associated financial assurances. The creation of this report was prompted by a request from the Nevada BLM state office for this information.

Reengineering LR2000 to better reflect the way BLM does business so that officials will have better management information. Officials said that while progress has been made on this effort with some other BLM programs, such as oil and gas, reengineering BLM's data management for hardrock operations is planned for the future.

BLM state offices also identified some changes to LR2000 that could help them better manage financial assurances for hardrock operations. These changes included ensuring the codes in LR2000 match the on-the-ground conditions of operations; changing it to better identify critical information on financial assurances, such as those held by state and county agencies; and enhancing its capability to notify BLM officials when it is time to review financial assurance amounts.
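Taken together, the missing items the report identifies (an operation's basic status, all allowable assurance types, who holds each assurance, and when amounts are due for review) amount to a handful of additional fields per record. The sketch below illustrates one way such records could be structured; the names, types, and methods are entirely our own assumptions, not LR2000's actual schema or any BLM design.

```python
# Illustrative sketch only: a minimal record structure capturing the
# critical fields the report says LR2000 does not track. All names and
# types are our own assumptions, not LR2000's actual schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class OperationStatus(Enum):
    ONGOING = "ongoing"
    CEASED_UNRECLAIMED = "ceased, reclamation required"
    RECLAIMED = "reclamation complete"

class AssuranceType(Enum):
    SURETY_BOND = "surety bond"
    LETTER_OF_CREDIT = "letter of credit"
    CERTIFICATE_OF_DEPOSIT = "certificate of deposit"
    CASH = "cash"
    TREASURY_SECURITY = "treasury security"
    CORPORATE_GUARANTEE = "corporate guarantee"  # not tracked in LR2000
    BOND_POOL = "bond pool"                      # not tracked in LR2000
    TRUST_FUND = "trust fund"                    # not tracked in LR2000

@dataclass
class FinancialAssurance:
    assurance_type: AssuranceType
    value_dollars: int
    holder: str            # "BLM", a state agency, or a county agency
    next_review_due: date  # when the assurance amount should be reviewed

@dataclass
class HardrockOperation:
    name: str
    status: OperationStatus
    cost_estimate_dollars: int
    assurances: list[FinancialAssurance]

    def assurance_shortfall(self) -> int:
        """Estimated reclamation cost not covered by assurances."""
        covered = sum(a.value_dollars for a in self.assurances)
        return max(0, self.cost_estimate_dollars - covered)

    def reviews_due(self, as_of: date) -> list[FinancialAssurance]:
        """Assurances whose scheduled review date has passed."""
        return [a for a in self.assurances if a.next_review_due <= as_of]
```

With records shaped like these, directly comparing assurances with cost estimates (the report BLM planned to add) and notifying officials when a review is due (the capability state offices requested) become simple queries rather than case-file research.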
According to BLM officials responsible for administering LR2000, the system has the capacity to handle virtually any changes that the state and field offices request. In commenting on a draft of this report, Interior stated that BLM will continue to refine and enhance LR2000 data systems as needed to facilitate the hardrock mining program.

Having adequate financial assurances to pay reclamation costs for BLM land disturbed by hardrock operations is critical to ensuring that the land is reclaimed if operators fail to complete reclamation as required. Furthermore, financial assurances must be based on sound reclamation plans and current cost estimates so that BLM can be confident that financial assurances will fully cover reclamation costs. For years, BLM headquarters has relied on BLM state offices that, in turn, rely on BLM field offices and sometimes on state and county agencies to obtain adequate financial assurances. However, while federal regulations and BLM guidance set forth financial assurance requirements for notice- and plan-level hardrock mining operations, BLM does not have a process for ensuring that the regulations and guidance are effectively implemented to ensure that adequate financial assurances are actually in place, as required. Moreover, BLM does not know whether all hardrock operations have adequate financial assurances because of limitations in the types of information collected in LR2000 and failure of staff to update information in a timely manner. Specifically, LR2000 does not track the status of hardrock operations, whether each existing operation that requires a financial assurance has the assurance, and whether the financial assurance is adequate to pay the cost of required reclamation. Because BLM does not have an effective management process and critical management information, it has not ensured that some current and previous operators have adequate financial assurances, as required by federal regulations and/or BLM guidance. Furthermore, some operations have no reclamation plans and/or cost estimates, and others have outdated ones. When operators without any financial assurances, or with inadequate financial assurances, fail to reclaim BLM land disturbed by their hardrock operations, BLM is left with public land that requires tens of millions of dollars to reclaim and poses risks to the environment and public health and safety. Until BLM establishes monitoring and accountability mechanisms to ensure that all operations have required financial assurances—based on sound reclamation plans and current cost estimates—and improves the information it collects to effectively manage financial assurances, these problems will continue.

To ensure that hardrock operations on BLM land have adequate financial assurances, we recommend that the Secretary of the Interior direct the Director of BLM to take the following two actions:

require the BLM state office directors to establish an action plan for ensuring that operators of hardrock operations have required financial assurances and that the financial assurances are based on sound reclamation plans and current cost estimates, so that they are adequate to pay all of the estimated costs of required reclamation if operators fail to complete the reclamation, and

modify LR2000 to ensure that it tracks critical information on hardrock operations and associated financial assurances so that BLM headquarters and state offices can effectively manage financial assurances nationwide to ensure regulatory requirements are met.
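To make the first recommendation concrete, one can picture its monitoring and accountability mechanism as a periodic sweep that flags operations lacking required assurances or resting on stale cost estimates. The sketch below is purely illustrative and does not describe any existing BLM process; it assumes operation records shaped like those in the earlier sketch, and the staleness threshold is our own placeholder.

```python
# Purely illustrative: a periodic compliance sweep of the kind the first
# recommendation envisions. Assumes operation records shaped like the
# HardrockOperation sketch above; the threshold is a placeholder value.
from datetime import date

MAX_ESTIMATE_AGE_DAYS = 3 * 365  # hypothetical staleness threshold

def compliance_flags(operation, estimate_date: date, today: date) -> list[str]:
    """Return the reasons, if any, that an operation needs follow-up."""
    flags = []
    if not operation.assurances:
        flags.append("no financial assurance on file")
    elif operation.assurance_shortfall() > 0:
        flags.append(
            f"assurances ${operation.assurance_shortfall():,} below estimate"
        )
    if (today - estimate_date).days > MAX_ESTIMATE_AGE_DAYS:
        flags.append("reclamation cost estimate may be outdated")
    return flags
```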
We received written comments on a draft of this report from the Department of the Interior. Interior stated that it appreciated the advice and critical assessment we provided on BLM's management of financial assurances required for hardrock operations. However, Interior did not acknowledge or address specific deficiencies identified in our report and did not concur with our recommendations or the conclusions upon which the recommendations were based.

In commenting on our recommendation to establish an action plan for ensuring that operators of hardrock operations have required financial assurances, Interior stated that existing procedures and policies ensure financial guarantees are in place to protect the public should an operator fail to reclaim. We disagree and believe that Interior's view is inconsistent with the evidence we developed based on information provided by BLM's own offices. While we agree that existing federal regulations and BLM guidance require financial assurances to cover all reclamation costs for notice- and plan-level hardrock operations, the evidence in our report shows that notices and plans of operation do not always have adequate financial assurances, as required. As we stated in this report, BLM state offices with existing hardrock operations informed us that, as of July 2004, some notice- and/or plan-level operations did not have adequate financial assurances. Furthermore, the evidence is clear that hardrock operations have ceased without operators having the adequate financial assurances required by regulations and BLM guidance. As a result, funds are not available to pay at least $56.4 million in reclamation costs for operations that had ceased and not been reclaimed since BLM began requiring financial assurances. We continue to believe that this evidence clearly calls for a plan of action that includes monitoring and accountability mechanisms to ensure that the requirements in the federal regulations and BLM guidance to have adequate financial assurances are met.

In commenting on our recommendation to modify LR2000 to ensure that it tracks critical information on hardrock operations and associated financial assurances, Interior stated that BLM does track all critical information on authorized operations in LR2000. Again, we disagree with BLM's opinion and find this position troubling in light of the clear evidence to the contrary presented in this report. As we reported, LR2000 does not track the critical information needed to effectively manage and oversee financial assurances, including the operation's basic status, such as whether the operation is ongoing or has ceased and should be reclaimed; some types of financial assurances being used, such as corporate guarantees, bond pools, and trust funds; and the adequacy of financial assurances to pay the cost of required reclamation. We are encouraged by BLM's April 2005 instruction memorandum to provide guidance and direction on data standards for LR2000 and the recent addition of codes and edits to LR2000 for plans of operations and financial guarantees, and we have added information to our report, as appropriate. We are also encouraged by BLM's willingness to refine and enhance LR2000. However, we continue to believe that until BLM enters, tracks, and uses this critical information in a timely manner, it will not be able to effectively manage financial assurances to ensure that federal regulations and BLM guidance are followed.
Interior also suggested some technical changes that we have incorporated as appropriate. Interior's letter is included in appendix IV, along with our comments.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies to other appropriate congressional committees and to the Secretary of the Interior. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

This appendix details the methods we used to examine three aspects of financial assurances used to cover reclamation costs for the Department of the Interior's Bureau of Land Management (BLM) land disturbed by hardrock exploration, mining, and processing operations. Specifically, we were asked to determine the (1) types, amount, and coverage of financial assurances operators currently use to guarantee reclamation costs; (2) amount that financial assurance providers and others have paid to reclaim operations that had ceased and not been reclaimed since BLM began requiring financial assurances and the estimated costs of completing reclamation for such operations; and (3) reliability and sufficiency of BLM's automated LR2000 information system for managing financial assurances for hardrock operations.

To address these objectives, we designed two surveys to obtain information from BLM's state and field offices because they maintain the case files and other specific information on hardrock operations. We asked the 12 BLM state offices that manage BLM programs across the United States to complete surveys for each state in their jurisdiction with hardrock operations. The 12 BLM state offices were Alaska, Arizona, California, Colorado, Idaho, Montana, New Mexico, Nevada, Oregon, Utah, Wyoming, and Eastern States.

We used the first survey, which focused on states' experiences with hardrock operations, to determine the types and amounts of financial assurances currently used to guarantee reclamation costs. Specifically, we asked the 12 BLM state offices to provide information on (1) the number of existing hardrock operations for each state within their jurisdiction, (2) the types and the amounts of financial assurances provided for existing hardrock operations in each state, (3) their views on the effectiveness of the various types of financial assurances, (4) their views on the reliability and sufficiency of hardrock operation data contained in the LR2000, and (5) their use of LR2000 for managing hardrock operations in their states.

We used the second survey, which focused on selected hardrock operations, to determine the amount of funds provided by financial assurances and others to reclaim hardrock operations that had ceased and not been reclaimed by operators since BLM began requiring financial assurances and the estimated costs of completing reclamation of such operations.
We asked the state offices to provide detailed information on each hardrock operation within their jurisdiction that met both of the following criteria: the operator (1) ceased operations after the requirement for financial assurances went into effect—August 1990 for plan-level operations, January 2001 for new notice-level operations, and January 2003 for existing notice-level operations—and (2) failed to complete the required reclamation. In most cases, BLM field office staff completed this survey because hardrock operation case files are maintained in these offices. Also, as necessary, we obtained information from BLM state and field staff to clarify responses to the survey. We used the information obtained to determine the estimated reclamation costs and the adequacy of financial assurances for reclaiming the hardrock operations that BLM identified as meeting our criteria.

To determine the adequacy of financial assurances, we compared the most recent complete reclamation cost estimate that BLM reported for each operation with the dollar value of the financial assurance that BLM reported for that operation. We then computed the difference between the most recent cost estimate and the value of the financial assurance to determine the total net excess or deficiency of the financial assurances. The total is the sum of the differences between the values of the financial assurances and the cost estimates that were made at different times over the past 15 years and were not adjusted for inflation.

For each operation, we asked BLM to report the value of the (1) estimates that the operator had before operations ceased, (2) estimates that BLM prepared after operations ceased, (3) actual reclamation costs, (4) BLM's estimate of the shortfall in funds needed to complete reclamation in excess of funds relinquished by the financial assurance provider, and (5) BLM's estimates of funds needed to complete required reclamation. BLM reported one or more of these values for 43 operations, and no value for the other 5 operations. For 24 of these 43 operations, BLM reported only one value, and we used that value as the most recent reclamation cost estimate. For the other 19 operations, BLM reported two or more values. In determining which value to use for our analysis, we generally did not use the (1) actual costs for operations that were not fully reclaimed because the actual cost could not be known unless reclamation was complete and (2) estimated funds needed to complete reclamation for operations that were partly reclaimed because those estimates did not include funds that had already been spent.

We used the following values as the most recent reclamation cost estimate for these 19 operations.

For 12 operations, we used BLM's estimate prepared after operations ceased because those estimates were the most recent.

For three operations that BLM reported as having no reclamation completed or not knowing the status of reclamation, we used BLM's reported estimate of funds needed to complete required reclamation.

For one operation that BLM reported as being fully reclaimed, we used BLM's reported actual cost.

For one operation, we used BLM's estimate of the shortfall of funds needed in excess of funds relinquished by the financial assurance provider because that estimate was the most recent and most accurate, according to BLM officials.
For one operation, we used the estimate available before operations ceased because the only other value reported for the operation was BLM's estimate of funds needed to complete reclamation and reclamation was only partly completed.

For one operation, we used the estimate available before operations ceased because the other values reported for the operation were BLM's estimate of funds needed to complete reclamation and the reported amount of actual costs, but reclamation was only partly completed.

We provided a copy of these two surveys to BLM headquarters and incorporated officials' comments as appropriate. We also pretested these surveys with state and field office staff in Nevada, Utah, and Arizona and made changes in the surveys' scope and content as appropriate. Further, after respondents submitted their answers, we (1) verified the information in the survey that focused on states' hardrock operations experience through discussions with BLM officials in two state offices with extensive financial assurance experience in hardrock operations—Nevada and Montana—and (2) verified information reported in four randomly selected hardrock operations surveys through discussions with officials and a review of case files in three Nevada field offices—Carson City, Elko, and Winnemucca—and one Montana field office—Lewistown. We checked the answers respondents had given to the questions against information contained in the case files. In many cases, staff provided answers based on their own knowledge and information in the case files.

Some BLM state offices had difficulty identifying hardrock operations that met our criteria. For example, some states completed our surveys for hardrock operations that did not appear to meet our criteria, and we contacted the respondents to clarify whether the operations did or did not meet the criteria. We eliminated 12 surveys that did not meet the criteria from our analysis. Furthermore, we cannot know whether BLM reported to us all hardrock operations that met our criteria. To address this concern, we took additional steps to help ensure that BLM completed the selected hardrock operations survey for all operations that met our criteria. For example, in Nevada, we compared a list of bankrupt operations prepared by the Nevada Bonding Task Force with a list of BLM's completed surveys to identify potential omissions. In addition, we asked selected experts, interest groups, and others to identify instances when operators failed to complete required reclamation and the federal government or others paid such reclamation costs or the required reclamation was not fully completed. To the extent that BLM staff did not identify all of the operations that met our criteria or did not report information on those operations that did meet the criteria, the information the BLM staff reported is incomplete. Furthermore, we did not collect information on the thousands of ceased hardrock operations since 1872 that did not require financial assurances and, therefore, fell outside the scope of this review.

To determine the reliability and sufficiency of BLM's LR2000 system, we spoke with BLM information technology officials in the headquarters unit near Denver, Colorado, who are responsible for administering the system; BLM state and field office staff in two states who enter information into the system; and BLM managers at headquarters and in two states who use information from the system.
In addition, we visited information technology officials near Denver to discuss the structure and history of LR2000 and to observe firsthand how data are entered into and processed by the two subsystems used to manage financial assurances—the Case Recordation System, which contains information about hardrock operations, and the Bond and Surety System, which contains information about financial assurances. Also, in our two surveys of BLM's 12 state offices, we asked questions to gather data on whether each respondent used LR2000 to respond to the survey. Specifically, we asked questions about whether the information used to respond came from LR2000 or from state office personnel's knowledge, field office personnel's knowledge, other databases, case files, or other sources. These questions helped us determine the extent to which BLM officials used and relied on the data in LR2000.

It is important to note that the practical difficulties of conducting any survey introduce various types of errors. Differences in how a particular question is interpreted and differences in the sources of information available to respondents can also be sources of survey response errors. We included steps in both the data collection and data analysis stages to minimize such errors. These steps included developing our survey questions with the aid of our survey specialists, conducting pretests of the questionnaires, and twice verifying the entry of survey data where applicable.

In addition to the surveys, we took several steps to understand BLM's management and oversight of hardrock operations and the use of financial assurances to ensure reclamation. We reviewed GAO reports, federal laws and regulations, BLM documents, and independent studies on hardrock operations and financial assurances. We also discussed these issues with BLM officials at headquarters and in selected state and field offices in Arizona, Montana, Nevada, and Utah. To understand the relationship between BLM and state agencies responsible for overseeing hardrock operations, we met with BLM and state agency officials in Colorado and Nevada, and we reviewed relevant memorandums of understanding and other documents for these and other states. We also discussed relevant hardrock operation and financial assurance issues with experts and representatives from the mining industry, academia, and environmental groups. Finally, to better understand hardrock operations and reclamation requirements, we visited five hardrock operations on BLM land in two states—the Florida Canyon, MacArthur Mine, Olinghouse, and Relief Canyon operations in Nevada and the Zortman and Landusky operation in Montana.

We conducted our review from October 2003 through May 2005 in accordance with generally accepted government auditing standards, including an assessment of data reliability.

This appendix provides information on the number of notice- and plan-level operations and dollar value of associated financial assurances for the 12 states with existing hardrock operations as of July 2004, as reported by BLM.

This appendix provides detailed information obtained from our survey on the 48 hardrock operations that BLM identified as ceased but not reclaimed by the operator since BLM began requiring financial assurances.
Specifically, the appendix presents tables 14 through 19 showing: the basic characteristics of the 48 hardrock operations; key reclamation dates; BLM steps to compel operators to reclaim BLM land disturbed by hardrock operations and reasons operators did not reclaim the land; estimated reclamation costs; the types and amount of financial assurances and the amount of financial assurances relinquished and spent on reclamation; and sources of other funds and the status of reclamation.

The following are GAO's comments on the Department of the Interior's letter dated June 8, 2005.

1. See agency comments and our evaluation section of this report.

2. See agency comments and our evaluation section of this report.

3. We did not change the title of the report because doing so would indicate that adequate financial assurances are in place to guarantee reclamation costs. As we report, this is not the case.

4. We added a sentence to state that plans of operations that were approved before January 20, 2001, were required to have financial assurances in place no later than November 20, 2001.

5. We changed the language to state that BLM has the authority to take steps, such as issuing noncompliance and suspension orders or revoking plans of operations, if operators do not comply with financial assurance or other regulatory requirements.

6. The "other" sources of information on hardrock operations that had ceased and not been reclaimed, as required, are identified in appendix I.

7. We added the National Research Council as one of the other sources used to develop figure 2.

8. We removed step 5, which described leftover material known as tailings, from figure 2.

9. We changed the language to clarify that upon recording a mining claim with BLM, the claimant must pay the fees discussed in our report, and that the location fee is not paid annually.

10. We did not add this language to this section of the report because we explain in the background section of the report that BLM requires all notice- and plan-level hardrock operations to have financial assurances before exploration or mining operations begin.

11. We clarified the language by adding "notice- and plan-level" before hardrock operations.

12. We clarified this sentence in our conclusion to state that "However, while federal regulations and BLM guidance set forth financial assurance requirements for notice- and plan-level hardrock mining operations, BLM has no process for ensuring that the regulations and guidance are effectively implemented to ensure that adequate financial assurances are in place, as required." Our report shows that BLM state offices with hardrock operations reported that, as of July 2004, some hardrock operations did not have adequate financial assurances. Furthermore, past experience has shown that some hardrock operations have ceased without operators having the adequate financial assurances required by regulations and BLM guidance. We continue to believe that until BLM establishes monitoring and accountability mechanisms to ensure that all hardrock operations have required financial assurances based on sound plans and current cost estimates, these problems will continue.

13. We did not change this sentence in our conclusion because evidence in our report shows that LR2000 does not track the critical information BLM needs to effectively manage financial assurances on hardrock operations.
Specifically, we reported that LR2000 does not track some critical information, including the operation’s basic status, such as whether the operation is ongoing or has ceased and should be reclaimed; some types of financial assurances being used, such as corporate guarantees, bond pools, and trust funds; and the adequacy of financial assurances to pay the cost of required reclamation. In addition to the contact named above, Andrea Wamstad Brown, Byron S. Galloway, Heather Holsinger, Carol Herrnstadt Shulman, Walter Vance, and Amy Webbink made key contributions to this report. | Since the General Mining Act of 1872, billions of dollars in hardrock minerals, such as gold, have been extracted from federal land now managed by the Department of the Interior's Bureau of Land Management (BLM). For years, some mining operators did not reclaim land, creating environmental, health, and safety risks. Beginning in 1981, federal regulations required all operators to reclaim BLM land disturbed by these operations. In 2001, federal regulations began requiring operators to provide financial assurances before they began exploration or mining operations. GAO was asked to determine the (1) types, amount, and coverage of financial assurances operators currently use; (2) extent to which financial assurance providers and others have paid to reclaim land not reclaimed by the operator since BLM began requiring financial assurances; and (3) reliability and sufficiency of BLM's automated information system (LR2000) for managing financial assurances for hardrock operations. According to GAO's survey of BLM state offices, as of July 2004, hardrock operators were using 11 types of financial assurances, valued at about $837 million, to guarantee reclamation costs for existing hardrock operations on BLM land. Surety bonds, letters of credit, and corporate guarantees accounted for most of the assurances' value. However, these financial assurances may not fully cover all future reclamation costs for these existing hardrock operations if operators do not complete required reclamation. BLM reported that, as of July 2004, some existing hardrock operations do not have financial assurances and some have no or outdated reclamation plans and/or cost estimates, on which financial assurances should be based. BLM identified 48 hardrock operations on BLM land that had ceased and not been reclaimed by operators since it began requiring financial assurances. BLM reported that the most recent cost estimates for 43 of these operations totaled about $136 million, with no adjustment for inflation; it did not report reclamation cost estimates for the other 5 operations. However, as of July 2004, financial assurances had paid or guaranteed $69 million and federal agencies and others had provided $10.6 million to pay for reclamation, leaving $56.4 million in reclamation costs unfunded. Financial assurances were not adequate to pay all estimated costs for required reclamation for 25 of the 48 operations because (1) some operations did not have financial assurances, despite BLM efforts in some cases to make the operators provide them; (2) some operations' financial assurances were less than the most recent reclamation cost estimates; and (3) some financial assurance providers went bankrupt. Also, cost estimates may be understated for about half of the remaining 23 operations because the estimates may not have been updated to reflect inflation or other factors. 
BLM's LR2000 is not reliable and sufficient for managing financial assurances for hardrock operations because BLM staff do not always update information and LR2000 is not currently designed to track certain critical information. Specifically, staff have not entered information on each operation, and for those operations that are included, the information is not always current. Also, LR2000 does not track some critical information--operations' basic status, some types of allowable assurances, and state- and county-held financial assurances. Given these limitations, BLM's reliance on LR2000 to manage financial assurances is mixed: headquarters does not always rely on it and BLM state offices' reliance varies. To compensate for LR2000's limitations, some BLM offices use informal record-keeping systems to help manage hardrock operations and financial assurances. BLM has taken some steps and identified others to improve LR2000 for managing financial assurances for hardrock operations. |
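The completeness check that LR2000 lacks can be illustrated in a few lines of code. The sketch below is illustrative only: the file names and field names (case_id, assurance_amount, cost_estimate) are hypothetical stand-ins, not LR2000's actual schema. It cross-references a Case Recordation extract against a Bond and Surety extract and flags operations with no assurance record, or with assurances below the current reclamation cost estimate, which are the two gaps the report describes.

    # Illustrative sketch only; hypothetical file and field names, not LR2000's schema.
    import csv

    def load_by_case(path):
        """Index a CSV extract of one LR2000 subsystem by its case identifier."""
        with open(path, newline="") as f:
            return {row["case_id"]: row for row in csv.DictReader(f)}

    operations = load_by_case("case_recordation.csv")  # hardrock operations
    assurances = load_by_case("bond_and_surety.csv")   # financial assurances

    for case_id, op in operations.items():
        bond = assurances.get(case_id)
        if bond is None:
            print(f"{case_id}: no financial assurance record on file")
        elif float(bond["assurance_amount"]) < float(op["cost_estimate"]):
            print(f"{case_id}: assurance below current reclamation cost estimate")

A routine of this kind catches only what the database records; it cannot, by itself, detect outdated cost estimates or operations never entered into the system, which is why the report pairs automated tracking with monitoring and accountability mechanisms.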
technical excellence: furnishing the care in the correct way, for example, performing open-heart surgery skillfully; accessibility: patients being able to get care when needed, for example, getting an appointment with a heart specialist when symptoms first occur; and acceptability: patients’ views of their care, such as being satisfied with the outcome of surgery or the speed with which they get a doctor’s appointment. Accreditation and analysis of performance indicators are methods for gauging whether and to what degree quality health care is provided. Accreditation does not directly measure quality, however; instead, it seeks to ensure that organizational systems necessary to attain quality are in place. Accreditation, a formal designation granted by a third party, is usually based on standards that specify the resources and organizational arrangements needed to deliver good care. For example, standards might set forth staff qualifications or the requirement that an HMO have an effective quality assurance program. During an accreditation survey, a survey team reviews an organization’s policies and procedures and visits the provider to make certain that the standards are being met. The survey team discusses the survey findings with appropriate provider officials and subsequently prepares a written report. If standards are not being met, the HMO usually is given time to take corrective action. If the HMO does not take action within a specified time period, it could lose its accreditation. Performance indicators more directly measure the attributes of quality than does accreditation. Performance indicators frequently measure appropriateness and technical excellence—providers’ actions—and the outcomes of those actions. For example, these indicators provide information about the rate at which certain preventive health care actions are furnished, the mortality rate from certain procedures, or patient satisfaction survey results. Administrative databases, medical records, and patient surveys provide data for measuring these indicators. The results are then compared with preestablished benchmarks or with the performance of other HMOs. also requesting information on health plans to help them make their health care purchasing decisions. Some purchasers believe that the standards required to be met for accreditation might have no bearing on whether quality of care is actually furnished. Others view accreditation requirements as a way of ensuring that systems expected to result in quality care are in place. Because accreditation standards do not directly measure quality, however, many purchasers use a combination of accreditation and an analysis of performance indicators, including outcomes. may contend that poor outcomes are due to their caring for sicker patients. Performance indicators may not be comparable. Nationwide standards for defining and calculating indicator results have not been established. While relying to some extent on several standard indicators, many health plans continue to use their own criteria for collecting data and computing results. Consequently, purchasers cannot systematically compare health plans to determine which one meets their needs. Cost continues to be an overriding concern to virtually all corporate purchasers. However, many large corporate purchasers are using accreditation status and information about specific quality-of-care performance indicators to determine which HMO(s) to offer their employees. According to a recent survey of 384 U.S. 
employers conducted by Watson Wyatt, a benefits consulting organization, and WBGH, 60 percent of large corporations consider accreditation status by the National Committee for Quality Assurance (NCQA) when deciding to purchase health insurance from an HMO. Nineteen percent also consider accreditation from other organizations. Furthermore, some purchasers evaluate other organizational structures. For example, 55 percent said they evaluate whether a health plan has quality improvement initiatives, and 67 percent determine that the health plan ensures that its providers are qualified. help gauge the quality of care provided by health plans, and 68 percent evaluate the results of consumer satisfaction surveys. NCQA recognized the need for outcome indicators when it released its first HEDIS measures. In July 1996, it released for public comment a new draft version of 75 HEDIS measures based on the recommendations of purchasers, HCFA, and other stakeholders. This new version, which NCQA expects will be used by health plans in 1997, includes a revision of prior HEDIS indicators, a standardized patient satisfaction survey, and more indicators for high-prevalence diseases. The clinical care measures continue to focus on providers’ actions, however, rather than outcomes. NCQA also released another 30 indicators, a few focusing on outcomes. NCQA defines these indicators as a “testing set” to be used by health plans only after evidence has been established that certain criteria are met, such as that the indicator is a valid measure of what it is intended to assess. While NCQA was developing new HEDIS measures, a large group of corporate purchasers and HCFA established the Foundation for Accountability (FAcct) to develop standardized outcome measures. In early fall 1996, the Foundation released eight indicators for treating diabetes, breast cancer, and major depression. Some of these measures focus on outcomes. The Foundation also endorsed an indicator addressing consumers’ satisfaction with health plans. Xerox, a large corporate purchaser, provides an example of a purchaser’s use of quality assessment methods. Xerox’s stated objective is to increase the accountability of health plans contracting with it and to improve the health status of its employees. Xerox officials review health plan reports about the plan’s accreditation status, results on HEDIS performance indicators, access to services, and membership satisfaction. Reports also include goals for each measure as benchmarks. Xerox’s goal is to develop long-term relationships with health plans. To this end, Xerox encourages health plans’ continuous improvement rather than immediately terminating a contract if a plan does not meet specific performance goals. of prior performance. In the past, quality assurance programs focused on the care provided to individual patients, directing improvement activities toward individual “outlier” providers rather than encouraging improvement by health care providers. These efforts were limited to a small number of providers and often resulted in adversarial relations between the reviewers and those being reviewed. Like other large corporate purchasers, HCFA uses an inspection process and analysis of performance indicators to evaluate the quality of care provided to Medicare beneficiaries in risk contract HMOs. HCFA’s HMO Qualification Program is intended to ensure that HMOs with Medicare contracts meet minimum requirements for organizational structures and processes. 
HCFA’s Medicare PRO Program is intended to measure an HMO’s performance by evaluating indicators for selected diseases or procedures of concern to older Americans. Like accreditation, HCFA’s HMO Qualification Program is an inspection method. HCFA’s initial approval of an HMO to serve Medicare beneficiaries includes this inspection. Thereafter, HCFA personnel visit contracting HMOs at least once every 2 years to monitor their compliance with requirements. HCFA’s inspection team spends several days at the HMO comparing the HMO’s policies and procedures with Medicare requirements. The team informs the HMO of its preliminary findings at the end of the visit and later prepares a formal report. If the HMO has failed to meet one or more requirements, it must submit a corrective action plan, including a timetable for correcting the deficiency. HCFA personnel may revisit the site to monitor compliance at the end of the time period specified in the plan’s timetable or may simply require regular progress reports. If the HMO fails to correct the deficiency in a timely manner, HCFA may terminate its contract or, under some circumstances, impose a civil monetary penalty or suspend Medicare enrollment. This happens rarely, however, and only after repeated HCFA efforts to get the HMO to correct the deficiencies. action. Furthermore, HCFA often found that the same problems existed when it made its next annual monitoring visit. In our August 1995 report, we found the same problems. We concluded that HCFA’s HMO Qualification Program is inadequate to ensure that Medicare HMOs comply with standards for ensuring quality of care. Specifically, this program remains inadequate because HCFA does not determine if HMO quality assurance programs are operating effectively, systematically incorporate the results of PRO review of HMOs or use PRO staff expertise in its compliance monitoring, and routinely collect utilization data that could most directly indicate potential quality problems. We also found that the enforcement processes are still slow when HCFA does find quality problems or other deficiencies at HMOs that do not comply promptly with federal standards. For example, even though one HMO repeatedly did not meet standards during a 7-year period and HCFA received PRO reports indicating that the HMO was providing substandard care to a significant number of beneficiaries, HCFA allowed the HMO to operate as freely as a fully compliant HMO. Like large corporate purchasers’ analysis of performance indicators, the Medicare PRO Program analyzes HMO performance treating certain diseases or performing selected procedures. The PRO Program, however, is substantially changing its approach. substandard providers were identified; HCFA officials found this model to be confrontational, unpopular with the physician community, and of limited effectiveness. Therefore, by the end of 1995, case reviews had been replaced by cooperative projects modeled on continuous quality improvement concepts implemented by mutual agreement between PROs and risk contract HMOs. Provider participation is voluntary. Typically, these cooperative projects involve establishing joint identification of a problem, appropriate performance indicators, and benchmarks. The PRO then measures current HMO performance on these indicators and disseminates these data to the HMOs. HMOs then may choose to participate in the project to improve care. After implementation of corrective action, the PROs again collect data to determine if improvements have been made. 
Although this process is voluntary, HCFA officials say that they believe most HMOs will welcome the opportunity to collaborate on projects that can improve the quality of care. They do not believe that provider noncooperation will be a significant problem. HCFA officials told us, however, that they still can take action if they have strong indications that an HMO has significant quality-of-care problems. If an HMO refuses to cooperate, HCFA can still apply a range of sanctions, including a letter terminating the HMO’s participation. In one state, we talked with HMO and PRO officials about this new approach. The HMOs liked it, particularly the fact that the PRO provided them with comparative performance data that would be otherwise unavailable to them. PRO officials also felt that this program was more successful than case review because it addressed the care being provided to the majority of beneficiaries rather than the 1 or 2 percent who may be recipients of bad care. Although we think this new approach holds promise, it is too early to evaluate its impact. But an evaluation of this program as soon as feasible is essential because it is such a major departure from previous PRO practice. performance indicators. HCFA also plans to collect data on beneficiaries’ satisfaction with risk contract HMOs. In June 1995, HCFA announced that it was joining FAcct. According to HCFA, it has played a major role in developing the Foundation’s performance indicators for depression, breast cancer, and diabetes. Furthermore, HCFA worked with NCQA on its new HEDIS indicators. HCFA played a role in identifying and defining seven newly released indicators that measure functional status for enrollees over age 60, mammography rates, rate of influenza vaccinations, rate of retinal examinations for diabetics, outpatient follow-up after acute psychiatric hospitalization, utilization of certain appropriate medications in heart attack patients, and smoking cessation programs. HCFA also plans to conduct a survey of Medicare beneficiaries enrolled in managed care. It is developing a survey instrument in cooperation with the Agency for Health Care Policy and Research. Data collected in this survey will include information on member satisfaction, perceived quality of care, and access to care. HCFA officials told us that they plan to have an outside contractor perform annual surveys of a statistically valid sample of Medicare enrollees in every HMO with a Medicare contract. The contractor will use a standard survey and provide a consistent analysis of the information received from beneficiaries. Some large corporate purchasers are sharing performance assessment information with their employees. They believe that individual employees can better choose health plans if they have good information on which to base their enrollment decisions. According to the Watson Wyatt/WBGH survey, 31 percent of large corporate purchasers give their employees information about accreditation status, 25 percent give their employees information about overall health plan performance, 13 percent give their employees HEDIS information, and 47 percent distribute consumer satisfaction survey results. Additionally, 32 percent of the large purchasers surveyed offer financial incentives to their employees to choose plans that they have designated as being of “exceptional quality.” years, that information generally featured premium and benefits coverage. 
CalPERS’ May 1995 Health Plan Quality/Performance Report was its first effort to distribute comprehensive information that includes both specific performance indicators about quality and member satisfaction results. The quality performance data are based on HEDIS indicators measuring HMO success with providing childhood immunizations, cholesterol screening, prenatal care, cervical and breast cancer screening results, and diabetic eye exams. Employee survey results include employee satisfaction with physician care, hospital care, the overall plan, and the results of a question asking whether members would recommend the plan to a fellow employee or friend. CalPERS released a new report providing updated information in 1996. Although HCFA collects performance information that could be useful to beneficiaries, it does not routinely make such information available to them nor does it have immediate plans to do so. HCFA does not distribute the results of its HMO Qualification Program nor does it distribute information it collects about Medicare HMO enrollment and disenrollment rates, Medicare appeals, beneficiary complaints, plan financial condition, availability of and access to services, and marketing strategies. However, HCFA officials have told us they are considering ways to provide Medicare beneficiaries with information that will help them choose managed care plans. HCFA is working to make comparative information available on the Internet. Phase one of this project, to be implemented in 1997, will provide comparative data about HMO benefits, premiums, and cost-sharing requirements. Later phases will add information on the results of plan member satisfaction surveys and, eventually, outcome indicators. No timetable has been established, however, for disseminating the latter information. In conclusion, large corporate purchasers who rely on experts in the field are the leaders in health care quality assessment. Although HCFA’s current quality assessment programs are catching up with those of large corporate purchasers, some areas need further improvement. Most notably, HCFA still lags behind the private sector in disseminating performance assessment information to its beneficiaries. Messrs. Chairmen and Madam Chairwoman, this concludes my formal remarks. I will be happy to answer any questions from you and other members of the Caucus. For more information on this testimony, please call Sandra K. Isaacson, Assistant Director, at (202) 512-7174. Other major contributors include Peter E. Schmidt. Health Care: Employers and Individual Consumers Want Additional Information on Quality (GAO/HEHS-95-201, Sept. 29, 1995). Medicare: Increased HMO Oversight Could Improve Quality and Access to Care (GAO/HEHS-95-155, Aug. 3, 1995). Medicare: Enhancing Health Care Quality Assurance (GAO/T-HEHS-95-224, July 27, 1995). Community Health Centers: Challenges in Transitioning to Prepaid Managed Care (GAO/HEHS-95-138, May 4, 1995); testimony on the same topic (GAO/T-HEHS-95-143, May 4, 1995). Medicare: Opportunities Are Available to Apply Managed Care Strategies (GAO/T-HEHS-95-81, Feb. 10, 1995). Health Care Reform: “Report Cards” Are Useful but Significant Issues Need to Be Addressed (GAO/HEHS-94-219, Sept. 29, 1994). Home Health Care: HCFA Properly Evaluated JCAHO’s Ability to Survey Home Health Agencies (GAO/HRD-93-33, Oct. 26, 1992). Home Health Care: HCFA Evaluation of Community Health Accreditation Program Inadequate (GAO/HRD-92-93, Apr. 20, 1992). 
Medicare: HCFA Needs to Take Stronger Actions Against HMOs Violating Federal Standards (GAO/HRD-92-11, Nov. 12, 1991). Health Care: Actions to Terminate Problem Hospitals From Medicare Are Inadequate (GAO/HRD-91-54, Sept. 5, 1991). Medicare: PRO Review Does Not Ensure Quality of Care Provided by Risk HMOs (GAO/HRD-91-48, Mar. 13, 1991). Medicare: Physician Incentive Payments by Prepaid Health Plans Could Lower Quality of Care (GAO/HRD-89-29, Dec. 12, 1988). Medicare: Experience Shows Ways to Improve Oversight of Health Maintenance Organizations (GAO/HRD-88-73, Aug. 17, 1988). Medicare: Issues Raised by Florida Health Maintenance Organization Demonstrations (GAO/HRD-86-97, July 16, 1986). | GAO discussed the Health Care Financing Administration's (HCFA) efforts to provide health care quality information to Medicare beneficiaries joining health maintenance organizations (HMO). GAO noted that: (1) corporate purchasers use accreditation and performance measurement monitoring to ensure that HMO furnish quality health care; (2) HCFA is starting to use similar methods to ensure HMO quality; (3) while the use of performance measurement indicators has become popular, such indicators may not be reliable or comparable, and may not be valid measures of quality; (4) 60 percent of large corporations consider HMO accreditation status by the National Committee for Quality Assurance (NCQA), before contracting with HMO; (5) NCQA developed a set of standardized information on HMO focusing on provider actions, rather than patient care outcomes; (6) NCQA recently released in draft form a set of measures based on patient care outcomes; (7) HCFA has joined with a group of corporate purchasers to develop another set of standardized outcome measures; (8) HCFA uses a qualification review program similar to accreditation, along with peer review, to assess health care organizations' quality; and (9) HCFA does not routinely make quality assessment information available to Medicare beneficiaries. |
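The comparability problem with performance indicators is easiest to see in the arithmetic itself. The sketch below is illustrative only; the member records, the eligible age band, and the 75 percent benchmark are hypothetical values, not actual HEDIS specifications. An indicator is a numerator (members who received a service) over a denominator (members eligible for it), and two plans that define the denominator differently will report rates that cannot be compared.

    # Illustrative sketch; hypothetical records and benchmark, not actual HEDIS logic.
    members = [
        {"age": 64, "screened": True},
        {"age": 58, "screened": False},
        {"age": 61, "screened": True},
        {"age": 45, "screened": True},  # outside the eligible age band below
    ]

    def indicator_rate(members, min_age=52, max_age=64):
        """Share of eligible members (denominator) who received the service (numerator)."""
        eligible = [m for m in members if min_age <= m["age"] <= max_age]
        return sum(m["screened"] for m in eligible) / len(eligible) if eligible else None

    BENCHMARK = 0.75  # hypothetical preestablished target
    rate = indicator_rate(members)
    print(f"rate = {rate:.0%}; meets benchmark: {rate >= BENCHMARK}")

Changing min_age or max_age changes the reported rate without any change in the underlying care, which is why nationwide standard definitions matter for comparing health plans.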
According to the IFCAP database, in fiscal year 2007 nearly 132,000 miscellaneous obligations, with a total value of nearly $9.8 billion, were created (see table 1). While VA's Central Office had $2.9 billion in miscellaneous obligations during fiscal year 2007, our review focused on the $6.9 billion in miscellaneous obligations used by VHA's 129 stations, located in every Veterans Integrated Services Network (VISN) throughout the country, for a variety of mission-related activities. (See app. III for a listing of the use of miscellaneous obligations by VISN, and app. IV for a listing of the use of miscellaneous obligations by station.) According to available VHA data, VHA used miscellaneous obligations to record estimated obligations of over $6.9 billion for mission-related goods and services. As shown in figure 1, about $3.8 billion (55.1 percent) was for fee-based medical and dental services for veterans, and another $1.4 billion (20.4 percent) was for drugs, medicines, and hospital supplies. The remainder was for, among other things, state veterans homes, transportation of veterans to and from medical centers for treatment, and logistical support and facility maintenance for VHA medical centers nationwide. (Figure 1 legend: other, such as dietetic provision, operating supplies, cleaning services, and data processing; transportation of persons/things; state home and homeless veteran support; rent, communications, and utilities, including gas, electricity, water, sewer, and phone; supplies, including drugs, medicines, hospital supplies, blood products, and prosthetic supplies; and services, including fee-basis physician, nursing, dental, hospitalization, research, and prosthetic repair.) According to VHA contracting and fiscal service officials, using miscellaneous obligations tends to reduce administrative workload and facilitates the payment for contracted goods and services, such as drugs, medicines, and transportation, and for goods and services for which no pre-existing contracts exist, such as fee-basis medical and dental services and utilities. VHA officials stated that miscellaneous obligations facilitate the payment for contracted goods and services when the quantities and delivery dates are not known. A miscellaneous obligation can be created for an estimated amount and then modified as specific quantities are needed or specific delivery dates are set. When a purchase order is created, however, the obligated amount cannot be changed without a modification of the purchase order. According to VHA officials, the need to prepare numerous modifications to purchase orders could place an undue burden on the limited contracting personnel available at individual centers and could also require additional work on the part of fiscal services personnel. Our preliminary observations on VA policies and procedures indicate they were not designed to provide adequate controls over the use of miscellaneous obligations. According to GAO's Standards for Internal Control in the Federal Government, agency management is responsible for developing detailed policies and procedures for internal control suitable for their agency's operations and ensuring that they provide for adequate monitoring by management, segregation of duties, and supporting documentation for the need to acquire specific goods in the quantities purchased. We identified control design flaws in each of these oversight areas, and we confirmed that these weaknesses existed at the three locations where we conducted case studies.
Collectively, these control design flaws increase the risk of fraud, waste, and abuse (including employees converting government assets to their own use without detection). New guidance for the use of miscellaneous obligations was released in January 2008 and finalized in May 2008. We reviewed the new guidance and found that while it offered some improvement, it did not fully address the specific control design flaws we identified. Furthermore, VA officials told us that this guidance was not subject to any legal review. Such an analysis is essential to help ensure that the design of policies and procedures complies with all applicable federal appropriations law and internal control standards. We reviewed 42 miscellaneous obligations at the three case study locations and developed illustrative, more detailed information on the extent and nature of these control design flaws. Table 2 summarizes the locations visited, the miscellaneous obligations reviewed at each location, and the extent and nature of control design deficiencies found. To help minimize the use of miscellaneous obligations, VA policy stated that miscellaneous obligations would not be used as obligation control documents unless the contracting authority for a station had determined that purchase orders or contracts would not be required. Furthermore, VA policy required review of miscellaneous obligations by contracting officials to help ensure proper use in accordance with federal acquisition regulations, but did not address the intended extent and nature of these reviews or how the reviews should be documented. Contracting officials were unable to electronically document their review of miscellaneous obligations, and no manual documentation procedures had been developed. Our review of 42 miscellaneous obligations prepared at three VHA stations showed that contracting officers were at times familiar with specific miscellaneous obligations at their facilities, but that they had no documented approvals available for review. Furthermore, none of the three sites we visited had procedures in place to document review of the miscellaneous obligations by the appropriate contracting authorities. Effective oversight and review by trained, qualified officials are key factors in identifying a potential risk for fraud, waste, or abuse. Without control procedures to help ensure that contracting personnel review and approve miscellaneous obligations prior to their creation, VHA is at risk that procurements will not have safeguards established through a contract approach. For example, in our case study at the VA Pittsburgh Medical Center, we found 12 miscellaneous obligations, totaling about $673,000, used to pay for laboratory services provided by the University of Pittsburgh Medical Center (UPMC). The Chief of Acquisition and Materiel Management for the VA Pittsburgh Medical Center stated that she was not aware of UPMC's laboratory testing service procurements and would review these testing services to determine whether a contract should be established for these procurements. Subsequently, she stated that VISN 4, which includes the VA Pittsburgh Medical Center, was going to revise procedures to procure laboratory testing services through purchase orders backed by reviewed and competitively awarded contracts, instead of funding them through miscellaneous obligations. Another Pittsburgh miscellaneous obligation for about $141,000 was used to fund the procurement of livers for transplant patients.
Local officials said that there was a national contract for the services, and that livers were provided at a standardized price of $21,800. However, officials could not provide us with a copy of the contract or documentation of the standardized pricing schedule. Therefore, we could not confirm that VHA was properly billed for these services or that the procurement was properly authorized. Furthermore, in the absence of review by contracting officials, controls were not designed to prevent miscellaneous obligations from being used for unauthorized purposes, or for assets that could be readily converted to personal use. Our analysis of the IFCAP database for fiscal year 2007 identified 145 miscellaneous obligations for over $30.2 million that appeared to be used in the procurement of such items as passenger vehicles; furniture and fixtures; office equipment; and medical, dental, and scientific equipment. Although the VA's miscellaneous obligation policy did not address this issue, VA officials stated that acquisition of such assets should be done by contracting officials and not through miscellaneous obligations. Without adequate controls to review and prevent miscellaneous obligations from being used for the acquisition of such assets, VHA may be exposing itself to unnecessary risks by using miscellaneous obligations to fund the acquisition of goods or services that should have been obtained under contract with conventional controls built in. In January 2008, VA issued interim guidance effective for all miscellaneous obligations created after January 30, 2008, concerning required procedures for using miscellaneous obligations. The guidance provides that prior to creating a miscellaneous obligation, fiscal service staff are required to check with the contracting activity to ensure that a valid contract is associated with the miscellaneous obligation, except in specific, itemized cases. Under this guidance, the using service is to have the contracting activity determine (1) if a valid procurement authority exists, (2) if a procurement needs to be initiated, and (3) the appropriate method of obligation. Also, this guidance requires that a copy of the head contracting official's approval be kept with a copy of the miscellaneous obligation for future audit purposes. In addition, the guidance provides that the fiscal service may not create a miscellaneous obligation without appropriate information recorded in the purpose, vendor, and contract number fields on the document. The guidance specifically cites a number of invalid uses for miscellaneous obligations, including contract ambulance, lab tests, blood products, and construction, but does not always specify a procurement process to be used for these items. In May 2008, VHA management finalized the interim guidance. This guidance represents a step in the right direction. It includes a manual process for documenting contracting approval of miscellaneous obligations and specifically states that a miscellaneous obligation cannot be created if the vendor, contract number, and purpose fields are incomplete. However, the new guidance does not address the segregation of duties issues we and others have identified and does not establish an oversight mechanism to ensure that the control procedures outlined are properly implemented. In our view, VHA has missed an opportunity to obtain an important legal perspective on this matter. According to VA officials, these policies have not been subject to any legal review.
Such a review is essential in ensuring that the policies and procedures comply with federal funds control laws and regulations and any other relevant VA policies or procedures dealing with budgetary or procurement matters. For example, such a review would help ensure that the guidance adequately addresses Federal Acquisition Regulations, requiring that no contract shall be entered into unless the contracting officer ensures that all requirements of law, executive orders, regulations, and all other applicable procedures, including clearances and approvals, have been met. In addition, a review could help to ensure that this guidance (1) provides that all legal obligations of VA are supported by adequate documentation to meet the requirements of the recording statute 31 U.S.C. §1501(a) and (2) prevents any individual from committing the government for purchases of supplies, equipment, or services without being delegated contracting authority as a contracting officer, purchase card holder, or as a designated representative of a contracting officer. The absence of a legal review to determine the propriety of VA’s miscellaneous obligations policies and procedures places VA at risk of not complying with important laws and regulations. In conclusion, Mr. Chairman, without basic controls in place over billions of dollars in miscellaneous obligations, VA is at significant risk of fraud, waste, and abuse. Effectively designed internal controls serve as the first line of defense for preventing and detecting fraud, and they help ensure that an agency effectively and efficiently meets its missions, goals, and objectives; complies with laws and regulations; and is able to provide reliable financial and other information concerning its programs, operations, and activities. Although miscellaneous obligations can facilitate and streamline the procurement process, they require effectively designed mitigating controls to avoid impairing full accountability and transparency. In the absence of effectively designed key funds and acquisition controls, VA has limited assurance that its use of miscellaneous obligations is kept to a minimum, for bona fide needs, in the correct amount, and to the correct vendor. Improved controls in the form of detailed policies and procedures, along with a management oversight mechanism, will be critical to reducing the government’s risks from VA’s use of miscellaneous obligations. To that end, our draft report includes specific recommendations, including a number of preventive actions that, if effectively implemented, should reduce the risks associated with the use of miscellaneous obligations. We are making recommendations to VA to modify its policies and procedures, in conjunction with VA’s Office of General Counsel, to better ensure adequate oversight of miscellaneous obligations by contracting officials, segregation of duties throughout the process, and sufficient supporting documentation for miscellaneous obligations. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For more information regarding this testimony, please contact Kay Daly, Acting Director, Financial Management and Assurance, at (202) 512-9095 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
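The documentation and segregation-of-duties weaknesses described in this statement lend themselves to simple automated edit checks. The sketch below is illustrative only; the record layout and field names (purpose, vendor, contract_number, and the four role fields) are hypothetical stand-ins, not IFCAP's actual structure. It flags obligations with blank required fields and obligations in which one official performed more than one role in the process.

    # Illustrative edit checks; hypothetical field names, not IFCAP's actual layout.
    REQUIRED_FIELDS = ("purpose", "vendor", "contract_number")
    ROLE_FIELDS = ("requester", "approver", "obligator", "certifier")

    def check_obligation(ob):
        """Return a list of control exceptions for one miscellaneous obligation record."""
        findings = [f"missing {f}" for f in REQUIRED_FIELDS if not ob.get(f, "").strip()]
        roles = [ob[r] for r in ROLE_FIELDS if ob.get(r)]
        if len(roles) != len(set(roles)):
            findings.append("same official performed multiple roles")
        return findings

    ob = {"purpose": "fee-basis medical services", "vendor": "", "contract_number": "V101-23",
          "requester": "A. Smith", "approver": "A. Smith",
          "obligator": "B. Jones", "certifier": "C. Lee"}
    print(check_obligation(ob))  # ['missing vendor', 'same official performed multiple roles']

Checks of this kind are preventive rather than detective when run at the point of data entry, which is where the revised guidance's required-field rule already operates.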
In order to determine how VHA used miscellaneous obligations during fiscal year 2007, we obtained and analyzed a copy of VHA's Integrated Funds Distribution, Control Point Activity, Accounting and Procurement (IFCAP) database of miscellaneous obligations for that year. IFCAP is used to create miscellaneous obligations (VA Form 4-1358) at VA, and serves as a feeder system for VA's Financial Management System (FMS)—the department's financial reporting system of record. According to VA officials, FMS cannot be used to identify the universe of miscellaneous obligations at VHA in fiscal year 2007 because FMS does not identify the procurement method used for transactions (i.e., miscellaneous obligations, purchase card, purchase order). Furthermore, FMS does not capture the contract number, requester, approving official, and obligating official for obligations. However, according to senior agency officials, the IFCAP database is the most complete record of miscellaneous obligations available at VHA and can be used to provide an assessment of how miscellaneous obligations were used during fiscal year 2007. IFCAP's data included information on the appropriation codes, vendors, budget object codes (BOC), date and amount of obligations, obligation numbers, approving officials, and VISN and VHA station for VHA miscellaneous obligations. We converted the database to a spreadsheet format and sorted the data by VISN, station, and BOC to determine where and how miscellaneous obligations were used in fiscal year 2007 (see app. III and IV). To determine whether VHA's policies and procedures are designed to provide adequate controls over the use of miscellaneous obligations, we first reviewed VHA's policies and procedures governing the use of miscellaneous obligations at VA. Specifically, we reviewed the VA Controller Policy, MP-4, Part V, Chapter 3, Section A, Paragraph 3A.02 – Estimated Miscellaneous Obligation or Change in Obligation (VA Form 4-1358); the VA Office of Finance Bulletin 06GA1.05, Revision to MP-4, Part V, Chapter 3, Section A, Paragraph 3A.02 – Estimated Miscellaneous Obligation or Change in Obligation (VA Form 4-1358), dated September 29, 2006; VA Interim Guidance on Miscellaneous Obligations, VA Form 1358, dated January 30, 2008; VHA Revised Guidance for Processing of Miscellaneous Obligations, VA Form 1358, dated May 18, 2008; and other VA and VHA directives, policies, and procedures. We also used relevant sections of the Federal Acquisition Regulations (FAR); VA's Acquisition Regulations; appropriation law; and GAO's Standards for Internal Control in the Federal Government in assessing the design of VA's policies and procedures, and we met with VA and VHA officials in Washington, D.C., and coordinated with VHA's Office of Inspector General staff to identify any previous audit findings relevant to our audit work. We also interviewed representatives of VA's independent public accounting firm and reviewed copies of their reports. In order to better understand the extent and nature of VA policy and procedure design deficiencies related to miscellaneous obligations, we conducted case studies at three VHA stations in Cheyenne, Wyoming; Kansas City, Missouri; and Pittsburgh, Pennsylvania. The stations in Kansas City and Pittsburgh were selected because they had a high volume of miscellaneous obligation activity, and they were located in different regions of the country.
We conducted field work at the Cheyenne, Wyoming, station during the design phase of our review to better understand the extent and nature of miscellaneous obligation control design deficiencies at a small medical center. Inclusion of the Cheyenne facility in our review increased the geographic diversity of our analysis and allowed us to compare the extent and nature of miscellaneous obligation design deficiencies at medical centers in the eastern, midwestern, and western portions of the United States. During the case studies, we met with senior medical center administrative, procurement, and financial management officials to discuss how VA policies and procedures were designed with regard to specific obligations, and assess the control environment design for using miscellaneous obligations at the local level. We discussed how miscellaneous obligations were used as part of the procurement process and the effect of new VHA guidance on medical center operations. We also reviewed the design of local policies and procedures for executing miscellaneous obligations and conducted walk-throughs of the processes. To provide more detailed information on the extent and nature of the control design deficiencies we found at our case study locations, we identified a nongeneralizable sample of obligations for further review at each site. Through data mining techniques, we identified a total of 42 miscellaneous obligations for more detailed examination at our case studies: 11 from Cheyenne, 17 from Kansas City, and 14 from Pittsburgh. We based our selection on the nature, dollar amount, date, and other identifying characteristics of the obligations. For each miscellaneous obligation selected, we accumulated information on the extent and nature of control design weaknesses concerning miscellaneous obligations: review and documentation by contracting officials; segregation of duties during the procurement process; and the purpose, timing, and documentation for obligations. Concerning the adequacy of control design with respect to contracting review, we reviewed miscellaneous obligations for evidence of review by contracting officials and, for selected miscellaneous obligations, followed up with contracting officials to discuss contracts in place for miscellaneous obligations, whether review by contracting officials was needed, and when and how this review could occur and be documented. Concerning the control design deficiencies with respect to segregation of duties, we reviewed miscellaneous obligation documents to determine which officials requested, approved, and obligated funds for the original miscellaneous obligations and then which officials certified delivery of goods and services and approved payment. We noted those instances where control design deficiencies permitted one official to perform multiple functions. With respect to control design deficiencies relating to the supporting documentation for the miscellaneous obligations, we reviewed the purpose, vendor, and contract number fields for each obligation. For the purpose field, we assessed whether the required description was adequate to determine the nature, timing, and extent of the goods and/or services being procured and whether controls provided for an adequate explanation for any estimated miscellaneous obligation amounts. 
For the vendor and contract number fields, we assessed whether controls were designed to ensure entered information was correct, and we identified those instances where control deficiencies permitted fields to be left blank. Because of time limitations, we did not review VHA's procurement or service authorization processes. In addition, in our case study approach, we were unable to analyze a sufficient number of obligations to allow us to generalize our conclusions either to the sites visited or to the universe of VHA medical centers. The 42 obligations represented a total of approximately $36.0 million; however, the results cannot be projected to the overall population of miscellaneous obligations in fiscal year 2007. While we found no examples of fraudulent or otherwise improper purchases made by VHA, our work was not specifically designed to identify such cases or estimate their full extent. We assessed the reliability of the IFCAP data provided by (1) performing various tests of required data elements, (2) reviewing related policies and procedures, (3) performing walkthroughs of the system, (4) interviewing VA officials knowledgeable about the data, and (5) tracing selected transactions from source documents to the database. In addition, we verified that totals from the fiscal year 2007 IFCAP database agreed with a method of procurement compliance report provided to Subcommittee staff during a September 7, 2007, briefing. We did not reconcile the IFCAP miscellaneous obligations reported to us to FMS—the VA system of record—and published VA financial statements because FMS does not identify the procurement method used for transactions (i.e., miscellaneous obligations, purchase card, purchase order). We determined that the data were sufficiently reliable for the purposes of our report and that they can be used to provide an assessment of how miscellaneous obligations were used during fiscal year 2007. We briefed VA and VHA headquarters officials, including the Deputy Assistant Secretary for Logistics and Acquisition, as well as VHA officials at the three case study locations, on the details of our audit, including our findings and their implications. During the briefings, officials generally agreed with our findings and said that they provided useful insights into problems with the miscellaneous obligation process and corrective actions that could be taken to address them. We conducted this audit from November 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We recently provided our draft report to the Secretary of Veterans Affairs for review and comment. Following this testimony, we plan to issue a report, which will incorporate VA's comments as appropriate and include recommendations for improving internal controls over miscellaneous obligations. The Department of Veterans Affairs (VA) is responsible for providing federal benefits to veterans. Headed by the Secretary of Veterans Affairs, VA operates nationwide programs for health care, financial assistance, and burial benefits.
In fiscal year 2007, VA received appropriations of over $77 billion, including over $35 billion for health care and approximately $41.4 billion for other benefits. The Congress appropriated more than $87 billion for VA in fiscal year 2008. The Veterans Health Administration (VHA) is responsible for implementing the VA medical assistance programs. In fiscal year 2007, VHA operated more than 1,200 sites of care, including 155 medical centers, 135 nursing homes, 717 ambulatory care and community-based outpatient clinics, and 209 Readjustment Counseling Centers. VHA health care centers provide a broad range of primary care, specialized care, and related medical and social support services. The number of patients treated increased by 47.4 percent from 3.8 million in 2000 to nearly 5.6 million in 2007 due to an increased number of veterans eligible to receive care. As shown in figure 2, VHA has organized its health care centers under 21 Veterans Integrated Services Networks (VISN), which oversee the operations of the various medical centers and treatment facilities within their assigned geographic areas. During fiscal year 2007, these networks provided more medical services to a greater number of veterans than at any time during VA’s long history. VA has used “Estimated Miscellaneous Obligation or Change in Obligation” (VA Form 4-1358) to record estimated obligations for goods and services for over 60 years. According to VA policy, miscellaneous obligations can be used to record obligations against appropriations for the procurement of a variety of goods and services, including fee-based medical, dental, and nursing services; non-VA hospitalization; nursing home care; beneficiary travel; rent; utilities; and other purposes. The policy states that miscellaneous obligations should be used as obligation control documents when a formal purchase order or authorization is not required, and when necessary to record estimated obligations to be incurred by the subsequent issue of purchase orders. The policy also states that the use of miscellaneous obligations should be kept to an absolute minimum, consistent with sound financial management policies regarding the control of funds, and should only be used in cases where there was a bona fide need for the goods and services being procured. In September 2006, VA policy for miscellaneous obligations was revised in an attempt to minimize the use of miscellaneous obligations as an obligation control document. The revision states that miscellaneous obligations should not be used as an obligation control document unless the head contracting official for the station has determined that a purchase order or contract will not be required. However, the policy provides that fiscal staff can use miscellaneous obligations as a tracking mechanism for obligations of variable quantity contracts, as well as for public utilities. In January 2008, VA issued interim guidance regarding the use of miscellaneous obligations; however, the guidance did not apply to the fiscal year 2007 miscellaneous obligations we reviewed. In recent years VHA has attempted to improve its oversight of miscellaneous obligations. For example, VHA’s Clinical Logistics Group created the Integrated Funds Distribution, Control Point Activity, Accounting and Procurement (IFCAP) system database in April 2006 to analyze the use of miscellaneous obligations agencywide. 
The database is updated on a monthly basis and contains information on the miscellaneous obligations created monthly by the 21 VISN offices and their associated stations. VHA officials are using the IFCAP database to (1) analyze the number and dollar amounts of procurements being done using contracts and purchase cards, and recorded using miscellaneous obligations, and (2) identify the types of goods and services recorded as miscellaneous obligations. Prior to the creation of the IFCAP database, such information on the use of miscellaneous obligations nationwide was not readily available to VHA upper level management. The creation and processing of miscellaneous obligations (VA Form 4-1358) is documented in IFCAP—a component of VA's Veterans Health Information System and Technology Architecture (VISTA). The miscellaneous obligation request passes through several stages illustrated in figure 3. | The Veterans Health Administration (VHA) has been using miscellaneous obligations for over 60 years to record estimates of obligations to be incurred at a later time. The large percentage of procurements recorded as miscellaneous obligations in fiscal year 2007 raised questions about whether proper controls were in place over the authorization and use of billions of dollars. GAO's testimony provides preliminary findings related to (1) how VHA used miscellaneous obligations during fiscal year 2007, and (2) whether the Department of Veterans Affairs (VA) policies and procedures were designed to provide adequate controls over their authorization and use. GAO recently provided its related draft report to the Secretary of Veterans Affairs for review and comment and plans to issue its final report as a follow-up to this testimony. GAO obtained and analyzed available VHA data on miscellaneous obligations, reviewed VA policies and procedures, and reviewed a nongeneralizable sample of 42 miscellaneous obligations at three case study locations. GAO's related draft report includes four recommendations to strengthen internal controls governing the authorization and use of miscellaneous obligations, in compliance with applicable federal appropriations law and internal control standards. VHA recorded over $6.9 billion of miscellaneous obligations for the procurement of mission-related goods and services in fiscal year 2007. According to VHA officials, miscellaneous obligations were used to facilitate the payment for goods and services when the quantities and delivery dates are not known. According to VHA data, almost $3.8 billion (55.1 percent) of VHA's miscellaneous obligations was for fee-based medical services for veterans and another $1.4 billion (20.4 percent) was for drugs and medicines. The remainder funded, among other things, state homes for the care of disabled veterans, transportation of veterans to and from medical centers for treatment, and logistical support and facility maintenance for VHA medical centers nationwide.
GAO's Standards for Internal Control in the Federal Government states that agency management is responsible for developing detailed policies and procedures for internal control suitable for their agency's operations. However, based on GAO's preliminary results, VA policies and procedures were not designed to provide adequate controls over the authorization and use of miscellaneous obligations with respect to oversight by contracting officials, segregation of duties, and supporting documentation for the obligation of funds. Collectively, these control design flaws increase the risk of fraud, waste, and abuse (including employees converting government assets to their own use without detection). These control design flaws were confirmed in the case studies at Pittsburgh, Cheyenne, and Kansas City. In May 2008, VA issued revised guidance concerning required procedures for authorizing and using miscellaneous obligations. GAO reviewed the revised guidance and found that while it offered some improvement, it did not fully address the specific control design flaws GAO identified. Furthermore, according to VA officials, VA's policies governing miscellaneous obligations have not been subject to legal review by VA's Office of General Counsel. Such a review is essential in ensuring that the policies and procedures comply with applicable federal appropriations law and internal control standards. |
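The station- and category-level figures summarized above come from grouping the IFCAP extract by network, station, and budget object code, a step that can be expressed directly in code. The sketch below is a minimal illustration using pandas; the file name and column names (visn, station, boc, amount) are hypothetical, not IFCAP's actual field names.

    # Minimal sketch of the grouping described in the methodology;
    # file and column names are hypothetical, not IFCAP's actual fields.
    import pandas as pd

    df = pd.read_csv("ifcap_fy2007.csv")  # one row per miscellaneous obligation

    # Count and total dollars by VISN, station, and budget object code (BOC).
    by_boc = (df.groupby(["visn", "station", "boc"])["amount"]
                .agg(count="count", total="sum")
                .sort_values("total", ascending=False))
    print(by_boc.head(10))

    # Share of total dollars by BOC, mirroring the reported percentages.
    shares = df.groupby("boc")["amount"].sum() / df["amount"].sum()
    print(shares.sort_values(ascending=False))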
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) significantly changed federal welfare policy for low-income families with children, from a program that entitled eligible families to monthly cash payments to a capped block grant that emphasizes employment and work supports for most adult recipients. As part of PRWORA, Congress created the TANF program, through which HHS provides states about $16.5 billion each year in block grant funds to implement the program. To receive the TANF block grant, each state must also spend at least a specified level of its own funds, which is referred to as state maintenance of effort (MOE). In creating the block grant, PRWORA defines four goals for the program:
1. provide assistance so that children can be cared for in their own homes or in the homes of relatives;
2. end families' dependence on government benefits by promoting job preparation, work, and marriage;
3. prevent and reduce the incidence of out-of-wedlock pregnancies; and
4. encourage the formation and maintenance of two-parent families.
TANF is a flexible funding stream that states can use to provide cash assistance and a wide range of services that are "reasonably calculated" to further the program's four goals. In federal fiscal year 2011, states used about 29 percent of their TANF funds on basic assistance that included cash assistance for needy families, and the remaining funds were spent on other purposes, such as child care, employment programs, and child welfare services. Because of the flexibility given to states, TANF programs differ substantially by state. States are required to develop plans that outline their intended use of funds and to report data on families receiving assistance. While the federal TANF statute does not define "assistance," HHS defines assistance in regulation as cash payments, vouchers, and other forms of benefits designed to meet a family's "ongoing basic needs," such as food, clothing, shelter, utilities, household goods, personal-care items, and general incidental expenses. Traditionally, states disbursed cash assistance benefit payments by means of paper check. The EBT program was originally devised in the 1980s to meet the needs of the Department of Agriculture's (USDA) Food Stamp Program, in which federal benefits were electronically disbursed to eligible recipients. These cards are not tied to a consumer asset account, and their account structures and processing requirements generally differ from those of other payment cards. EBT cards can be used to deliver benefits to banked and unbanked recipients and can deliver multiple benefits using a single card. The cost savings in the Food Stamp Program (now known as the Supplemental Nutrition Assistance Program, or SNAP) from using electronic payments to distribute benefits prompted states to use EBT cards to also distribute TANF benefits electronically, leveraging the existing EBT system designed for SNAP. Electronic benefit distribution methods also include Electronic Payment Cards (EPC). Some EPC cards are prepaid or debit cards branded with a MasterCard, American Express, Discover, or Visa logo, which allows cardholders to conduct signature-based transactions anywhere those brands are accepted, as well as at ATM and point-of-sale (POS) machines.
Electronic benefit cards—both EBT and EPC—generally can be used like traditional debit or credit cards, in that recipients can use them at ATMs to withdraw cash, or at retailers' POS terminals to make purchases or receive cash by selecting a cash-back option. However, there are some key differences between electronic benefit cards and commercial credit cards. The main difference is that electronic benefit cards do not carry a credit line, and the purchases or withdrawals made with these cards cannot exceed the amount of recipients' TANF benefits. With commercial credit cards, cardholders borrow to make a purchase and then pay the money back later. Electronic benefit cards are more like debit or stored-value cards and provide an alternative to cash—each time a cardholder uses his or her electronic benefit card, the money spent or withdrawn is deducted from the cardholder's TANF benefits account. States consider various factors when implementing EBT or EPC programs, including the potential financial burden to recipients, such as transaction fees at ATMs that charge a surcharge for each transaction; recipient characteristics, such as disabilities; implementation costs; and fraud and security risks. States also take into account how readily recipients can access cash assistance. For example, in some rural areas or low-income neighborhoods the only access point for cash assistance benefits may be a location such as a grocery store, a single depository institution, or even a liquor store. The benefits to recipients when states choose EBT or EPC programs include quicker disbursement of benefits, the elimination of lost or undelivered paper checks, access to benefits without an established bank account, and no need to locate check-cashing venues in order to access benefits. Prior to 2012, states were not required under federal law to take steps aimed at preventing specific TANF transactions at certain locations. However, the Welfare Integrity and Data Improvement Act, part of the Middle Class Tax Relief and Job Creation Act of 2012, signed into law on February 22, 2012, introduced several changes to TANF that can affect recipients' ability to access cash assistance at certain locations. Specifically, the Act requires that each state receiving a TANF block grant maintain policies and practices as necessary to prevent TANF assistance from being used in any "electronic benefit transfer transaction" in any liquor store; any casino, gambling casino, or gaming establishment; or any retail establishment that provides adult-oriented entertainment in which performers disrobe or perform in an unclothed state for entertainment. The Act calls for HHS to determine, within 2 years of the Act's enactment, whether states have implemented and maintained policies and practices to prevent such transactions. If HHS determines that a state has not implemented and maintained these policies and practices, or if a state has not reported to HHS on its policies and practices, HHS may reduce the state's family assistance grant by an amount equal to 5 percent of the state's grant amount for the federal fiscal year following the 2-year period after enactment and for each succeeding federal fiscal year in which the state does not demonstrate that it has implemented and maintained such policies and practices. However, HHS may reduce the amount of this penalty on the basis of the state's degree of noncompliance.
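To make the stored-value model described at the start of this section concrete, the sketch below models a benefit account in which, unlike a credit card, a withdrawal or purchase can never exceed the remaining benefit balance. This is a minimal illustration of the concept only; the class and field names are invented, and real EBT systems involve card processors, settlement, and fee rules not shown here.

```python
class BenefitAccount:
    """Minimal stored-value model: spending draws down a balance; no credit line."""

    def __init__(self, monthly_benefit: float):
        self.balance = monthly_benefit

    def withdraw(self, amount: float, atm_surcharge: float = 0.0) -> bool:
        """Deduct a cash withdrawal plus any ATM surcharge; reject overdrafts."""
        total = amount + atm_surcharge
        if total > self.balance:
            return False  # unlike a credit card, there is nothing to borrow against
        self.balance -= total
        return True


account = BenefitAccount(monthly_benefit=400.00)
print(account.withdraw(100.00, atm_surcharge=2.50))  # True; balance is now 297.50
print(account.withdraw(350.00))                      # False; exceeds remaining balance
print(f"Remaining benefits: ${account.balance:.2f}")
```

The surcharge parameter reflects the fee concern discussed above: each surcharged ATM withdrawal reduces the assistance actually available to the family.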
In addition, the Act specifies that states are not responsible for individuals who engage in fraudulent activity to circumvent the state's policies and practices, and states will not face a reduction in their family assistance grant amounts in such cases. The Act defines a liquor store as "any retail establishment which sells exclusively or primarily intoxicating liquor. Such term does not include a grocery store which sells both intoxicating liquor and groceries including staple foods (within the meaning of section 3(r) of the Food and Nutrition Act of 2008 (7 U.S.C. 2012(r)))." The Act also contains requirements for states related to maintaining recipients' access to TANF cash assistance. As part of the plan that each state is required to submit to HHS, states must include policies and procedures to ensure that recipients have adequate access to their cash assistance. In addition, states must ensure that recipients can use or withdraw assistance with minimal fees or charges, including an opportunity to access assistance with no fees or charges; that recipients are provided information on applicable fees and surcharges that apply to electronic fund transactions involving the assistance; and that such information is made publicly available. HHS issued a request for public comment in April 2012, seeking information by June 2012 on: how states deliver TANF assistance to beneficiaries, whether states have implemented policies and practices to prevent electronic benefit transfer transactions at the locations mentioned above, states' experiences with these policies and practices, and other similar restrictions states place on TANF assistance usage. In its notice, HHS identified multiple questions for states to answer, including questions on the methods states use to track the locations where transactions occur, the challenges states experienced when implementing any restrictions on transactions at certain locations, the initial and ongoing costs of restrictions, the effectiveness of restrictions and the factors influencing that effectiveness, and any concerns that have been raised about the restrictions, among other things. In addition, HHS requested input from states' EBT vendors on potential issues that states may face in implementing restrictions, including technical issues, cost implications, access implications, and mechanisms for addressing problems identified. Six of the 10 states we reviewed have taken steps to prevent certain types of inappropriate TANF transactions—restrictions that in some cases are broader than the recent federal requirements directing states to take steps aimed at preventing transactions in casinos, liquor stores, and adult-entertainment establishments. These 6 states faced a variety of challenges in identifying inappropriate locations and preventing transactions at those locations. At the time these efforts were undertaken, no federal requirements directed states to restrict such transactions. In addition, EBT transaction data from federal fiscal year 2010 from 4 of the 10 selected states were generally incomplete or unreliable, and were of limited use to the states for systematically identifying or monitoring inappropriate locations. While the federal requirements to restrict inappropriate transactions now exist, data issues and other challenges, if unaddressed, may continue to affect efforts to comply with these new requirements.
Six of the 10 states we selected and reviewed have taken steps to prevent certain types of TANF transactions; these actions vary in their degree and means of implementation, from widespread disabling of EBT access at ATMs in certain locations across a state to, according to officials from one state, passing a law without implementing steps for enforcing it. The restrictions generally involve prohibiting the use of EBT cards at certain locations, prohibiting purchases of certain goods or services, or both, as shown in figure 1 below. In 4 of the 10 selected states, there were no restrictions on TANF transactions, as no transactions were unauthorized based on the location of the transactions or the nature of the goods or services purchased. As mentioned above, before the 2012 enactment of federal legislation, states were not required by the federal government to maintain or implement policies and practices aimed at preventing TANF transactions based on the location of the transactions. Figure 1 below, an interactive map, provides rollover information (see interactive instructions below) that describes the steps that selected states have taken aimed at preventing the use of TANF cash assistance for certain purchases or in certain locations. (See app. II for the steps taken within each selected state.) The purpose of TANF is to help needy families achieve self-sufficiency. Providing TANF benefits by means of electronic benefit cards helps both banked and unbanked TANF recipients, gives TANF recipients an alternative to cash, and allows states to use existing infrastructure. However, any misuse of TANF funds not only deprives low-income families of needed assistance, but also diminishes public trust in both the integrity of the program and the federal government. Before Congress passed the Welfare Integrity and Data Improvement Act, as part of the Middle Class Tax Relief and Job Creation Act of 2012, some states acted independently to implement restrictions on certain TANF transactions. As a result, their approaches to enacting restrictions vary significantly. However, until HHS issues regulations or provides further guidance as to what policies and practices are sufficient to comply with the new federal requirements, it is unclear to what extent the various restrictions implemented by states would be in compliance. The experience of these states—especially any information related to the cost-effectiveness and success rates of various restrictions—could be beneficial for HHS to consider as it works toward determining what policies and practices are sufficient to comply with the new federal law. As we heard from officials in multiple states, preventing unauthorized transactions can be time-intensive and is impaired by flaws in available transaction data and other challenges. Addressing the limitations we found in transaction data that impede the identification and monitoring of certain locations could require significant resources. Therefore, restriction methods that do not rely on flawed transaction data may be the most practical, such as Washington state's requirement that businesses independently disable EBT access or risk losing or not obtaining their state licenses to operate. We provided a draft of this report to HHS for comment. In its written comments, reproduced in appendix III, HHS noted that our report highlights many of the challenges and issues states and others face in implementing the TANF requirements that Congress enacted in February 2012.
In addition, HHS stated that our report's findings and analysis will be helpful as HHS drafts implementing regulations relevant to these TANF requirements. HHS also provided technical comments that we incorporated, as appropriate. In May 2012, we also provided the 10 selected states with an opportunity to comment on our draft findings relevant to their specific TANF programs. Seven of the 10 selected states provided us with technical comments by e-mail, and we incorporated their comments as appropriate. Three states, Illinois, Massachusetts, and Pennsylvania, had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to other interested congressional committees and the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objective was to determine the extent to which selected states have taken action to prevent unauthorized Temporary Assistance for Needy Families (TANF) transactions. To conduct our work, we reviewed TANF laws, regulations, and other documentation—including the Welfare Integrity and Data Improvement Act, part of the Middle Class Tax Relief and Job Creation Act of 2012, which introduced new state requirements for preventing certain TANF transactions—and interviewed officials from the Department of Health and Human Services (HHS). From each selected state, we reviewed information related to its laws, policies, practices, and other factors affecting its TANF program. In addition, we interviewed and reviewed documentation from several key industry stakeholders related to states' efforts to prevent unauthorized TANF transactions. We also interviewed officials from the top 10 states in terms of TANF basic block-grant dollars—California, New York, Michigan, Ohio, Pennsylvania, Illinois, Florida, Texas, Massachusetts, and Washington. Together, these 10 states account for 66 percent of TANF basic block-grant funds. The industry stakeholders included: JP Morgan Chase and Affiliated Computer Services, the two largest vendors providing TANF electronic benefit card services to the states; the Electronic Funds Transfer Association, an industry trade association that conducts work related to electronic benefit card services for government agencies at the federal and state levels; the National Conference of State Legislatures, a bipartisan organization that provides research and other services to state legislators and their staff; and the American Public Human Services Association, a bipartisan, nonprofit organization representing appointed state and local health and human-services agency commissioners. We obtained electronic benefit card transaction data from 4 of the 10 selected states—California, Florida, New York, and Texas—covering transactions from federal fiscal year 2010. We selected these 4 states based on geographical diversity. The results of our analysis of these 4 states' data cannot be generalized to other states.
Using these data, we assessed the extent to which the data would allow the 4 selected states to conduct systematic monitoring of TANF transactions. Such monitoring might include an assessment of the prevalence of transactions at certain locations. To do so, we used a generalizable, random sample of each of the 4 selected states' Electronic Benefit Transfer (EBT) transaction data and compared it to electronic geo-coding information that pinpoints places and identifies locations. Subsequent visual inspection and manual cleaning of obvious address errors in the EBT data resulted in corrected location addresses for only a small portion of records. We also assessed whether the data would allow states to identify individual TANF transactions at certain types of locations. To do so, we conducted keyword searches of merchant names for terms that are potentially associated with casinos, liquor stores, and adult-entertainment establishments. We performed data checks to determine the reliability of the California, Florida, New York, and Texas EBT data for the purposes of our engagement. For all four states, we determined that the EBT data are not sufficiently reliable for the purpose of performing systematic monitoring, as the selected states' data contained incomplete or inaccurate information for the addresses of the locations where the transactions occurred. Given the combination of completeness and accuracy issues in the 4 selected states, we also determined that most of the data in the 4 selected states could not be matched to address location information that would allow for suitable comparisons to other potential data sources. However, we determined that the transaction data would support keyword searches of merchant names for terms potentially associated with casinos, liquor stores, and adult-entertainment establishments, for records that contain merchant names. We conducted this performance audit from October 2011 to July 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The table below includes figure 1's (see above) rollover information and describes the steps that 6 of the 10 states we reviewed have taken that are aimed at preventing the use of Temporary Assistance for Needy Families (TANF) cash assistance for certain purchases or in certain locations. In addition to the contact named above, Cindy Brown Barnes, Assistant Director; Erika Axelson, Assistant Director; Christopher W. Backley; Melinda Cordero; Justin Fisher; Katherine Forsyth; Gale Harris; Olivia Lopez; Grant Mallie; Flavio J. Martinez; Maria McMullen; James Murphy; Anna Maria Ortiz; Robert C. Rodgers; Rebecca Shea; and Timothy Walker made key contributions to this report.

The TANF block grant program provides federal grants to states for various benefits and activities, including cash welfare for needy families with children. TANF is overseen at the federal level by HHS, and administered by states. Most states disburse TANF cash assistance through electronic benefit cards, which can be used to withdraw money or make purchases. Media coverage highlighted cases of cardholders accessing benefits at casinos and other locations that were considered inconsistent with the purpose of TANF.
In February 2012, Congress passed a law requiring states to prevent certain transactions at casinos, liquor stores, and adult-entertainment establishments. The law also requires HHS to oversee states' compliance with these requirements within 2 years of enactment. GAO was asked to review the ability of TANF recipients to withdraw TANF funds at certain locations inconsistent with the purpose of TANF, such as gambling or other establishments. To do so, GAO reviewed documentation and interviewed officials from HHS, key industry stakeholders, and the top 10 states in TANF basic block-grant dollars. GAO also assessed the completeness and accuracy of EBT transaction data from federal fiscal year 2010 from 4 of the 10 states selected. GAO selected these 4 states on the basis of geographical diversity, and the results of this data analysis cannot be generalized to other states. Six of the 10 states reviewed by GAO took steps aimed at preventing certain Temporary Assistance for Needy Families (TANF) transactions determined to be inconsistent with the purpose of TANF, despite no federal requirement to do so at the time. Restrictions are based on selected states' laws, executive orders, and other regulations, and generally cover certain locations or certain types of purchases, such as alcohol. In some cases, states' restrictions are broader than the new federal requirements. These restrictions vary in their degree and means of implementation, including widespread disabling of Electronic Benefit Transfer (EBT) access at automated teller machines located at certain locations across a state, such as at casinos. The other 4 states had no restrictions because no laws, executive orders, or other regulations prohibited certain transactions based on the location of the transactions or the nature of the goods or services purchased. These states did not implement restrictions due to concerns about cost-effectiveness or technical limitations, according to state officials. Challenges experienced by states in implementing their current restrictions could inhibit future restriction efforts, including those intended to address new federal requirements. These challenges included difficulties with identifying certain locations that could be prohibited and limitations in available data. For example, the transaction data states receive do not contain information that is accurate or detailed enough for them to identify locations that could potentially be prohibited or restricted. State officials suggested that improvements in the completeness and accuracy of transaction data might better enable them to prevent such transactions. In its assessment of the EBT transaction data from 4 states, GAO found that the data are insufficient for systematic monitoring. To effectively conduct systematic monitoring, including the identification of locations that could be blocked from TANF access, data should be complete and accurate. However, addressing the limitations that GAO found in the transaction data, such as requiring accurate merchant category codes for retailers, could involve significant resources. States that prohibit certain types of purchases generally do not have ways to track what items recipients buy with their cards, partially due to the lack of information in transaction data on specific goods or services purchased. States were also challenged in attempting to track the spending of cash withdrawn with cards.
With no controls on how or where individuals spend withdrawn cash, a recipient could withdraw money at an authorized location and use it at certain locations or for certain purchases restricted by some states. As of July 2012, the Department of Health and Human Services (HHS) was at the beginning of its rulemaking process and did not yet know what form its regulations would take. Until HHS issues regulations or provides further guidance as to what policies and practices are sufficient to comply with the new federal requirements, it is unclear to what extent the various restrictions implemented by states would be in compliance. States' restrictions could help inform HHS's oversight efforts, especially any information on the cost-effectiveness and success rates of various state restrictions. Restriction methods that do not rely on flawed transaction data may be the most practical. We provided HHS with a draft of our report for comment. HHS stated that our report's findings and analysis will be helpful as it drafts implementing regulations, and it provided technical comments that we incorporated, as appropriate. GAO is not making any recommendations.
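The methodology above describes keyword searches of merchant names for terms potentially associated with casinos, liquor stores, and adult-entertainment establishments. The sketch below illustrates one way such a screen could work; the keyword lists, record layout, and merchant names are invented for illustration, and a real screen would require far more extensive terms plus manual review of matches.

```python
# Illustrative keyword lists; actual screening terms would be far more extensive.
KEYWORDS = {
    "casino": ["casino", "gaming", "slots"],
    "liquor store": ["liquor", "wine & spirits"],
    "adult entertainment": ["gentlemen", "cabaret"],
}

def flag_merchant(name):
    """Return the location categories whose keywords appear in a merchant name."""
    lowered = name.lower()
    return [category
            for category, terms in KEYWORDS.items()
            if any(term in lowered for term in terms)]

# Hypothetical EBT transaction records: (merchant name, amount withdrawn).
transactions = [
    ("LUCKY 7 CASINO ATM", 60.00),
    ("MAIN ST GROCERY", 42.17),
    ("DOWNTOWN LIQUOR MART", 20.00),
]

for merchant, amount in transactions:
    hits = flag_merchant(merchant)
    if hits:
        print(f"${amount:.2f} at '{merchant}' flagged as possible {', '.join(hits)}")
```

As the report notes, this approach works only for records that actually contain merchant names, and it cannot catch transactions at locations whose names carry no telltale terms, which is one reason the data were found insufficient for systematic monitoring.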
An inherent right of sovereignty, eminent domain is a government’s power to take private property for a public use while compensating the property owner. Eminent domain is also referred to as “appropriation,” “condemnation,” and “taking.” The Fifth Amendment of the United States Constitution expressly restricts the federal government’s use of eminent domain; it requires that eminent domain be invoked only for a “public use” and “just compensation” be paid to those whose property has been taken. The Fourteenth Amendment extends the legal requirements of public use and just compensation to the states through its Due Process Clause. In addition, states have a number of constitutional provisions, statutes, and case law outlining the various permissible uses of eminent domain, recourse available to property owners, and procedures required to take or evaluate a property. State legislatures generally determine who may use eminent domain by delegating eminent domain authority to state or quasi- public entities, such as housing, transport, and urban renewal authorities, which may exercise that power only for the purpose for which it was established. States may also grant eminent domain authority to local governments, which may further delegate this authority to a designee, such as a development authority or community group. Finally, some states authorize private companies to exercise eminent domain—for example, for the provision of utility services. Courts have addressed the meaning and application of public use in numerous cases throughout the years. In 2005, the United States Supreme Court, in Kelo v. City of New London, upheld the City of New London’s authority to use eminent domain to condemn and acquire property located within an area designated as a “distressed municipality,” even though the condemned property was not blighted or otherwise in poor condition. This decision allowed for private-to-private transfers of property for economic development purposes, such as New London’s action in an area that had experienced decades of economic decline. According to some scholars, the use of eminent domain for such a purpose has been permitted since the “mill acts” of the colonial and pre-Revolutionary period that permitted the flooding of private property to allow the operation of mills downstream; mills were considered the main source of power and closely linked to economic development. The Supreme Court emphasized that the Kelo decision did not preclude states from placing further restrictions on the exercise of eminent domain. Many states have been reviewing the use of eminent domain and considering legislative changes or constitutional amendments to control its use. In addition to the Constitution, the Uniform Relocation Assistance and Real Property Acquisition Policies Act of 1970 sets the federal standard for acquisition of real property for public projects involving federal financial assistance, including prescribing specific benefits, treatment, and protections for those whose property is acquired. The act also contains requirements for property owner notification and property valuation, as well as prohibitions against offers to property owners being less than an approved appraisal value. In addition, the act addresses compensation and seeks to ensure the fair and equitable treatment and protection from disproportionate injury of persons displaced from their homes, businesses, or farms in all projects involving federal financial assistance. 
The act requires that certain relocation funding be provided when a resident’s property is acquired, such as reasonable out-of-pocket moving expenses and relocation advisory services. The relocation funding also includes payments to cover rent increases or downpayments on home purchases in order to assist tenants and owners in relocating to comparable housing, which, at a minimum, is decent, safe, and sanitary. A number of federal government agencies have acquisition programs where the federal government acquires title to the land through proceedings in federal courts. However, this report focuses on land acquisitions by state or local governments, or their designees. Officials from national organizations, states, and cities with whom we spoke cited various common public purposes for which eminent domain can be or has been used, but the lack of data precludes a determination of the extent to which eminent domain has been used across the nation. Purposes for which we received examples include the building or expansion of roads and other transportation-related projects; construction of state and municipal facilities; and the elimination and prevention of blight. In addition, officials from some of the national organizations we contacted, which represent state and local governments, property rights groups, urban planning, and home builders, also cited remediation of environmental contamination and economic development. Although we were able to identify some purposes for which eminent domain can be and has been used by certain authorities, we were unable to determine the number of times and the purposes for which eminent domain has been used across the nation because of a lack of centralized or aggregate data. According to representatives from some national organizations representing state and local governments, property rights groups, farmers, and planning professionals, and state departments of transportation (DOT) and city officials, eminent domain could be and has been used for various purposes. In particular, many of these representatives and officials said that eminent domain was sometimes needed for the completion of transportation-related projects, such as the building or expansion of roads and highways. As an example, according to Texas DOT officials, from November 1996 through March 2005, the department invoked eminent domain to acquire 6 of the 26 properties needed to assemble land for the construction of an interchange that connected two major highways in central Texas. These officials explained that most of these acquisitions involved the taking of a small portion of the property (partial takings). Furthermore, Texas DOT officials said that because they were making improvements to existing highway facilities, the location of such improvements was limited to properties adjacent to the highway. In addition, Florida DOT officials told us that the department used eminent domain in 1998 and 1999 to acquire 23 of 51 properties, most of which were partial takings, needed to reconstruct and widen an existing roadway from two to four lanes. City officials we contacted also provided examples of transportation-related projects in which eminent domain was used. For example, an official from a city in Texas told us that the city, in collaboration with the city’s transit authority, used eminent domain to acquire 2 of the 9 commercial properties needed to assemble land for the expansion of the city’s light rail system in October 1998. 
According to this official, the city’s transit authority was seeking to extend its existing light rail system to provide a low-cost and energy-efficient means of mass transit for commuters. Another purpose for which eminent domain can be or has been used is the construction or maintenance of state and municipal infrastructure, such as state and municipal buildings. For example, in January 2002, Los Angeles used eminent domain to acquire 2 of the 7 properties needed to assemble land for the construction of a public building that eventually accommodated state and city departments of transportation. In addition, officials from some of the national organizations we contacted said that eminent domain is also used for public utilities. For example, New York City used eminent domain to assemble land for the construction of a tunnel for the city’s water system. To complete one phase of the project, the city used eminent domain to acquire 3 of the 10 properties needed to construct support facilities for the operation and maintenance of the water tunnel. Furthermore, the city condemned subsurface rights on more than 1,100 properties for the construction of the Manhattan portion of the tunnel and approximately 640 additional subsurface rights for the Brooklyn and Queens portions. According to a New York City Department of Environmental Protection report, the tunnel is expected to enhance and improve the city’s water system and allow for inspection and repair of the city’s existing tunnels. In addition, an official from a county in California provided information about the condemnation of 40 parcels of property in June 2001 to assemble land for a flood control and protection project, most of which were partial takings. According to this official, the flood control and protection improvements were intended for public safety and public infrastructure protection. Eminent domain also can be and has been used to eliminate or prevent blight. For example, according to an official from a community redevelopment agency in Florida, the agency used eminent domain in March 1998 to acquire 3 of the 39 parcels needed to eliminate slum and blighted conditions, stimulate private investment in the area, provide commercial opportunities, and enhance the area’s tax base. This agency official said that the redevelopment of the area consists of commercial space and residential housing and was the first significant private investment made in the area in decades. In addition, New York City officials provided an example in which the city condemned property through eminent domain to eliminate blight. According to city officials, the city acquired 407 parcels to eliminate blight by constructing a major housing development. The city’s plan for the project indicated that the project was intended to accomplish several things, including providing new and rehabilitated housing for low-, moderate-, and middle-income residents and strengthening the tax base of the city by encouraging development. Furthermore, officials of some national organizations representing state and local governments, property rights groups, planners, and home builders said that eminent domain can be used for brownfield remediation, which is the environmental cleanup of property that is or may be contaminated. 
According to officials from an organization representing local government environmental professionals, development of certain brownfield properties often occurs only with the use of eminent domain because of owners' unwillingness to transfer property or allow access for site inspections for fear of later being held liable for clean-up costs. Although the officials from the national organizations mentioned above also cited brownfield remediation as a purpose for which eminent domain could be used, we were unable to obtain sufficient project information to conduct any further analysis or provide examples in this report. Finally, officials from some of the national organizations with whom we met cited economic development as a purpose for which eminent domain can be and has been used. However, according to an official from a national organization representing city governments, the use of eminent domain solely for economic development purposes is minimal compared with the use of eminent domain for other purposes, such as transportation-related projects. Officials from some authorities that have the power to use eminent domain said that some of their projects might be linked to economic development, but that economic development was not the primary purpose of the projects. In addition, all of the projects we reviewed in which eminent domain was used to eliminate blight were associated with projects intended to improve the economic condition of the area. For example, as we have previously described, the redevelopment agency in Florida used eminent domain to acquire three parcels of property to eliminate slum and blighted conditions by stimulating private investment in the area, providing commercial opportunities, and enhancing the area's tax base. Officials from an organization representing state legislatures said that economic development is closely related to blight removal because authorities with eminent domain power may claim that blight removal will stimulate the community's economic conditions. In addition, representatives from some national organizations representing state and local governments and planning professionals, as well as officials from some cities we visited, said that transportation-related projects might lead to an area's economic development. For example, New York City officials said that even acquisitions of property by eminent domain that are not primarily intended for economic development, such as for the construction of a road or highway, would likely improve the economic condition of the area because of improved access to businesses, potentially increasing the profitability of those businesses. City officials from Chicago and Los Angeles told us that the construction of state buildings in their downtowns had a positive economic impact on their cities because the projects attracted private development. Finally, an official from the Denver Urban Renewal Authority described the Authority's use of eminent domain to assist a developer in completing the refurbishment of a downtown property of architectural and historical significance, thus preventing the property from becoming vacant and potentially having a negative impact on its surrounding area. We also obtained data on the use of eminent domain from selected state DOTs and local authorities. The data reflect that the amount of eminent domain activity and the purposes for which eminent domain was invoked varied by state and locality.
Officials from 9 state DOTs we contacted estimated that the number of individual properties they used eminent domain to acquire in the last 5 years for transportation-related projects ranged from approximately 200 to 7,800. As we previously discussed, according to the state DOT officials, because most of their projects involve improvements to existing transportation systems, the majority of the private properties they assembled for the projects consisted of partial acquisitions. In addition, according to information provided by Baltimore and Los Angeles city officials, Baltimore invoked its eminent domain power most commonly to assemble land for urban redevelopment projects that involved blight removal, while Los Angeles invoked its eminent domain power most often for street improvement projects. Similarly, according to New York City officials, the city invoked its eminent domain power most commonly to assemble land for parks and street widening. Officials from Chicago and Denver told us that they do not have complete data on the number of times and purposes for which they used their eminent domain authority, but they provided us with some information on their use of eminent domain. Specifically, City of Chicago officials estimated that they acquired 2,000 parcels through eminent domain in the last 10 years. In addition, officials from Denver told us that the city used its eminent domain authority mostly for street improvement projects. The lack of state or national data precluded objective statewide or national assessments of the use of eminent domain, including (1) how frequently eminent domain is used, (2) how often private-to-public or private-to-private transfers of property occur, and (3) the purposes for which eminent domain has been used by state and local governments. Although we were able to collect limited data on the purposes and number of instances in which eminent domain was used, officials from some of the national organizations we contacted told us that state or national aggregate data on the use of eminent domain do not exist. At least two major factors account for the lack of aggregate data. First, officials from the U.S. Departments of Transportation and Housing and Urban Development, as well as the Environmental Protection Agency, told us that federal agencies generally do not acquire private property through eminent domain directly, but may be indirectly involved through the different programs or agencies they administer or fund. Furthermore, officials from these federal agencies told us that they do not formally track whether program participants use eminent domain. Second, the lack of state data on the use of eminent domain may result from multiple authorities in a state having the power to invoke eminent domain and states not having central repositories to collect such data. As we have previously discussed, because states grant eminent domain authority to local governments, which may further delegate this authority to a designee, such as a development authority, many entities have the power to invoke eminent domain. Of the 10 state legislative research offices we contacted, 5 provided us with information on the authorities that have eminent domain power within their states. For instance, according to information provided by the Virginia legislative research office, at least 40 different types of authorities can invoke eminent domain, including school board districts, which can use it to acquire any property necessary for public school purposes.
The legislative research office of Massachusetts listed 8 different types of authorities with eminent domain power. For example, the Armory Commission can use eminent domain to acquire land suitable for target practice ranges for the armed forces of Massachusetts, subject to the governor's approval. In addition to the 8 authorities, the information provided by the Massachusetts legislative research office states that Massachusetts' general statutes also grant the power to, among others, the governor and state council, county commissioners, and city aldermen. Furthermore, according to a Texas Legislative Council report, at least 90 different types of authorities have been granted the power of eminent domain in Texas, including agricultural development districts, railroad companies, and sports facilities districts. Finally, the legislative research offices of Illinois and Washington provided us with information on statutes that described the authorities that were granted eminent domain power. In particular, in Illinois, at least 168 types of authorities, including those dealing with transportation, such as the Chicago Transit Authority and the Kankakee River Valley Area Airport Authority, have the power to acquire property through eminent domain, and, in Washington, at least 78 types of authorities were granted this power. Public authorities at the state and local levels acquire property, including by eminent domain, through processes set forth in various federal, state, and local land acquisition laws and implementing regulations. Federal and state laws, such as the Uniform Relocation Assistance and Real Property Acquisition Policies Act (URA), outline how much compensation authorities need to pay property owners whose land is being acquired and also direct authorities on what type of relocation assistance to provide to residents and businesses. However, local and state officials with whom we met expressed some concerns about certain limits that the URA places on the amount and type of relocation payments to displaced residents and businesses. In addition to local laws and regulations, federal and state laws establish procedures for how authorities must undertake land acquisition, including the use of eminent domain. Although multiple laws address land acquisition, the authorities we interviewed follow broadly similar steps. When acquiring land, which may involve the use of eminent domain, authorities generally follow a four-step process: (1) project planning; (2) property valuation; (3) property acquisition; and (4) relocation of displaced property owners, residents, and businesses. Sometimes these steps overlap. Land acquisition laws generally require that compensation be paid to the owner of a property that a public authority has acquired, including acquisitions by eminent domain. All 50 state constitutions require that just or fair compensation be paid to those whose property has been taken through eminent domain. Just compensation is a payment by the government for property it has taken under eminent domain, usually the fair market value, so that the owner theoretically is no worse off after the taking. As mentioned earlier, the United States Constitution stipulates that eminent domain use by a government authority must include just compensation to the property owner. Some state constitutions, including those of Georgia and Montana, provide for payment of expenses above the fair market value of the property, such as, in certain circumstances, attorney's fees or litigation expenses incurred in determining adequate compensation.
The land acquisition process often includes relocation of either the property owner or the residents and businesses located in the property acquired by the authority; federal and state laws also address the costs involved in relocation. Requirements in the URA, the federal law governing the provision of relocation benefits to displaced parties, are applicable to all acquisitions—including voluntary acquisitions achieved through negotiated settlements and acquisitions through eminent domain—of real property for federal or federally assisted programs or projects. The URA provides benefits to displaced individuals, families, businesses, and nonprofit organizations. The types of benefits provided depend on factors such as ownership, tenancy, and use of property (commercial versus residential use). Local officials told us that they have provided benefits under the URA such as: actual moving costs for residents and businesses; comparable replacement housing; rental assistance for tenants; the cost of personal property loss for businesses; expenses in finding a replacement site for businesses; and reestablishment costs for businesses up to $10,000. In addition, some city and state officials with whom we spoke explained that their states have adopted legislation or policies with requirements similar to the URA, providing some or all of the same benefits to residents and owners displaced through nonfederally funded projects. However, local officials and redevelopment agency officials from four of the five cities we visited believed that payment amounts allowable under the URA might not be adequate to cover costs. For example, we were told that the $10,000 cap on reestablishment costs for business relocation, unchanged since 1987, was too low. Most officials noted that reestablishment costs exceed this cap. For example, Chicago officials described high reestablishment costs, such as replacing specialized fixtures, licensing and permitting, and differential payments for increased rent, insurance, and other needs. Furthermore, a Los Angeles city official noted that the URA requires lump sum payments to remain under a $20,000 cap. Los Angeles officials use these settlements frequently, but one official stated that the URA cap was too low. Officials from 6 of the 10 state DOTs that we contacted remarked that various benefit limits in the URA are too low to properly compensate for business reestablishment costs. According to the U.S. Department of Transportation, the agency responsible for issuing regulations to implement the URA, the agency's Federal Highway Administration (FHWA) has received comments about the inadequacy of business reestablishment payments under the URA from states, other federal agencies, and affected businesses. In response to these comments, FHWA undertook multiple activities to identify needed programmatic changes in the URA, according to FHWA officials. In particular, in 2002 FHWA conducted a study to assess the adequacy of current URA provisions for business relocations and found that reestablishment payments were largely considered inadequate. In 2005 FHWA made some revisions to the URA regulations, but the revisions did not raise the cap on reestablishment payments. Such an increase requires a statutory change. State and local laws further condition how land may be acquired, including through eminent domain (see fig. 1). Among the states that we reviewed, some enacted additional laws concerning land acquisition, such as requirements for environmental assessments.
For instance, according to City of Los Angeles officials, the California Environmental Quality Act requires that the environmental impacts of discretionary projects proposed to be carried out by public agencies, including, in general, publicly funded projects in the state involving land acquisition, be assessed at the earliest possible time in the environmental review process. In New York, according to city officials, when a significant adverse environmental impact is likely to result from a project, the State Environmental Quality Review Act requires an assessment, in the form of an environmental impact statement, of short- and long-term impacts, adverse environmental impacts, and mitigation measures. In addition, according to officials, residential and business displacement from a project is generally analyzed in the review conducted under New York State and New York City law. Some states have laws outlining how authorities granted eminent domain authority within their state can invoke this power to assemble land for public projects. For example, in Illinois, Article VII of the Code of Civil Procedure sets forth procedures for use of the power of eminent domain by state and local governments, including provisions regarding the determination of property value, negotiation with property owners, and the initiation of condemnation. Provisions in the Illinois Municipal Code authorize municipalities to take property for redevelopment based on a blight designation. In New York, the Eminent Domain Procedure Law sets forth the procedure by which property is acquired and property owners are compensated. This law also establishes the opportunity for public participation in the planning of redevelopment projects, which may necessitate eminent domain use. Through these procedures, the state acknowledges that the need for public land acquisition should be balanced against the rights of private property owners and local communities, encourages the settlement of claims for compensation, and reduces related litigation. California's Eminent Domain and Relocation Assistance Laws, implemented by the Relocation Assistance and Real Property Acquisition Guidelines, govern private property acquisition by a public authority not involving federal funds. The guidelines are designed to ensure equitable treatment for persons displaced from a home or business, reduce related litigation, and require comparable replacement dwellings. The Colorado Urban Renewal and Eminent Domain Laws contain procedures for using eminent domain to eliminate or prevent blight or slum conditions. To govern the relocation of displaced residents, Maryland, New York, and Washington, like California, have established laws that provide certain state relocation benefits. A mixture of federal and state laws thus directs how local authorities use their eminent domain power, provide compensation, and deliver other required benefits. In addition to the federal and state laws that authorities must follow when invoking eminent domain, some of the cities that we visited had additional local laws or city agency regulations that governed urban redevelopment, as well as the relocation of displaced residents and businesses (see fig. 1). For example, in New York City, the Uniform Land Use Review Procedure, established in the city's 1975 charter, standardizes how applications affecting land use in New York City, including projects involving eminent domain, are publicly reviewed.
Another law sets forth the rights of residential and commercial tenants displaced by urban redevelopment in New York City. The Los Angeles redevelopment agency has also established an appeals procedure for relocation decisions that is supplementary to federal and state law, according to information provided by Los Angeles city officials. The complexities associated with land assembly have led to numerous approaches for acquiring land and providing just compensation. However, when state and local authorities acquire land, either through negotiated purchase or eminent domain, they follow some common procedural practices. The land acquisition process generally occurs in four stages: (1) project planning; (2) property valuation, during which appraisals are conducted; (3) property acquisition; and (4) relocation, during which authorities may provide residents and businesses replacement housing or commercial property (see fig. 2). Sometimes these stages are concurrent, with some variation across the localities we visited. The views of the property owners and property rights organizations we interviewed on these stages are discussed in a later section of this report. The project planning stage may begin by identifying the need for a project. Depending on the type of project, the city departments of engineering or planning, city redevelopment or renewal authorities, or state departments of transportation with whom we spoke conduct work at this stage. For example, 23 U.S.C. § 135 (section 135), as amended by the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users, mandates that states carry out a statewide transportation planning process that involves both a long-range statewide transportation plan, which identifies transportation needs over roughly a 20-year horizon, and a Statewide Transportation Improvement Program (STIP), which is a listing of potential projects to be constructed in the near term, covering a 4-year period. FHWA and the Federal Transit Administration jointly administer the statewide planning program. During these planning processes, according to FHWA officials, state DOTs work with other state agencies and local authorities within a cooperative, continuous, and comprehensive framework to make decisions on the need for new state highways or interchanges, among other transportation-related public improvements. Section 135 requires public notice during the planning process, which for the long-range plan includes public meetings at convenient and accessible locations at convenient times, use of visualization techniques to describe plans, and provision of public information in an electronically accessible format, such as the Internet. For the STIP, states must also provide interested parties with a reasonable opportunity to comment on the proposed program. According to state DOT officials in New York, project managers attend local board or council meetings before a design for a new transportation project is proposed. After the project proposal, New York officials hold informational meetings for property owners and allow time for individual question-and-answer sessions. New York officials consider alternative site selections proposed by the property owners, although the state DOT ultimately selects the least intrusive and safest alternative by weighing social, economic, safety, and technical considerations.
Other states that we contacted, including Missouri, Illinois, California, Colorado, and Texas, also described their adherence to the federal requirements in conducting their statewide transportation improvement plans and providing public notice during the project design process. In cities or localities, the project planning stage may generally involve developing, publicly vetting, and approving a project plan by a public body, such as a city council. Redevelopment where eminent domain may be used in the five cities we visited may involve the creation and approval of an urban renewal or redevelopment plan, which establishes such things as the need for the project, lists the parcels required to complete the project, and creates a timeline. In some localities, such planning processes may involve the completion of impact studies of the potential effects of the proposed redevelopment project on the neighborhood and the environment. Multiple public hearings or meetings may occur when localities are vetting a redevelopment plan. Chicago officials told us that the public may attend hearings or meetings held by the city's planning department, city council, and an appointed body known as the Community Development Commission, at which redevelopment plans and takings are approved. In addition, local aldermen may also sponsor public meetings on proposed redevelopment plans. In New York City, the Uniform Land Use Review Procedure provides for review before four city entities: the local community board, borough president, city planning commission, and the city council. Property owners and the community, in New York, Chicago, and other localities, are notified about hearings through letters sent to their mailing addresses. This planning process often ends with the approval of a project plan by a public body. In all five cities we visited, officials told us that the city council approves the redevelopment or urban renewal plan, at times granting the appropriate public authority the specific power to acquire the properties necessary to complete the project. Sometimes the development of these plans involves organizations outside the local or state government, such as community groups or developers. Officials from some of the cities we visited explained that the city may work with a developer by exercising its power of eminent domain to complete the site assemblage necessary for the developer's project. This collaboration typically occurs after the developer has acquired as many parcels in a redevelopment site as it can through private market transactions. During project planning, city authorities often may have to demonstrate blight or slum conditions in the area slated for redevelopment. States allowing the use of eminent domain for blight removal generally establish criteria for determining blight. These criteria may consider conditions of blight that impose a physical or economic burden on a community. Examples of physical blight in some state laws include buildings in which it is unsafe or unhealthy for persons to live or work. Indications of physical blight may include building code violations, structural dilapidation and deterioration, defective building design or physical construction, or faulty or inadequate utilities. Blight also may include neighboring or nearby property uses that are incompatible with one another and prevent the economic development of the respective parcels, such as the existence of irregularly sized lots.
Depreciated or stagnant property values, high vacancy or turnover rates of commercial property, or increased abandonment of buildings and lots can be indications of economic blight, as can high crime rates or residential overcrowding. While state laws often determine blight factors, authorities may have some latitude in applying them to properties and areas. The City of Chicago, following Illinois law, must apply a 13-factor test to determine blight for a redevelopment project area. To classify an entire area, such as a city block, as blighted, five or more of the factors must be clearly present and reasonably distributed throughout the project area. City officials explained that, in practice, this standard means that at least a third to one half of the properties in a designated area meet at least 5 of the 13 blight factors. Officials in Los Angeles informed us that, to adopt a redevelopment plan, an area must generally be characterized by at least one condition of physical blight and one condition of economic blight. According to officials at the Denver Urban Renewal Authority, a blight designation must precede any redevelopment action. In addition, the officials explained to us that early in the project development stage, the authority conducts a study, pursuant to Colorado state statute, to determine whether a minimum of 4 of the 11 blight characteristics in state law are present in the designated area. These criteria include unsanitary or unsafe conditions, deteriorated or deteriorating structures, environmental contamination, and the existence of conditions that endanger life or property. The property valuation stage may involve title studies and property appraisals, which city, state, or contract appraisers often conduct. Several state and city officials with whom we met or spoke described the need to conduct title studies to determine legal ownership of a property and ascertain any lien holders. To determine the fair market value of the property, which is generally the amount of the first offer made by public authorities, city officials described using an independent, certified appraiser. According to officials in New York City, fair market value is determined by valuing the highest and best use of the property on the date of acquisition. In Los Angeles, city officials explained that state law defines fair market value as the highest price that a willing buyer and willing seller would agree to, neither being compelled to buy or sell and each having full knowledge of all of the uses, and restrictions on use, to which the property may be put. In other words, officials from the Los Angeles authority are required to pay owners not less than the amount for which their property would sell privately on the open market if it were unaffected by a possible eminent domain action. Massachusetts Highway Department officials described having all appraisals exceeding $175,000 in value reviewed for accuracy by a real estate review board appointed by the state’s transportation commissioner and then submitted to the commissioner for final approval. Some transportation authority officials also described using in-house appraisers at their agencies. During this stage, owners also may obtain appraisals of the fair market value of their property, sometimes at their own expense.
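The blight determination thresholds described earlier in this section lend themselves to a simple illustration. The following sketch, in Python, is purely illustrative and does not reflect any jurisdiction’s actual methodology; the factor names, the parcel data, and the reading of “reasonably distributed” as a minimum share of qualifying parcels are our simplifying assumptions.

    # Hypothetical area-level blight test modeled on the threshold rules
    # described above (e.g., 5 of 13 factors in Chicago, 4 of 11 in Denver).
    # Each parcel is represented by the set of blight factors observed on it.
    parcels = [
        {"code_violations", "dilapidation", "inadequate_utilities"},
        {"code_violations", "dilapidation", "abandonment", "debris", "vacancy"},
        set(),  # an improved, occupied parcel with no observed factors
    ]

    def area_is_blighted(parcels, min_factors=5, min_parcel_share=1/3):
        """Return True if at least min_parcel_share of the parcels each
        exhibit min_factors or more blight factors -- one simplified reading
        of 'clearly present and reasonably distributed.'"""
        qualifying = sum(1 for p in parcels if len(p) >= min_factors)
        return qualifying / len(parcels) >= min_parcel_share

    print(area_is_blighted(parcels))  # True: 1 of 3 parcels meets the 5-factor test

In practice, such determinations rest on field surveys and official judgment rather than a mechanical count; the sketch shows only how an area-wide designation can rest on a subset of the parcels within it.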
The property acquisition stage may involve a formal offer, negotiation by city, state, or redevelopment authority officials, and, at times, an impasse leading to an eminent domain filing by an authority’s legal counsel. Multiple authority officials described using eminent domain after many attempts at a negotiated settlement had been unsuccessful. If the owner does not agree with an authority’s initial offer, then some authorities may provide additional offers above the appraised value. In some localities, this sort of negotiation involves the owner identifying special circumstances that justify a higher level of compensation. Denver authorities told us that their initial offer to purchase is typically based on an appraisal. When the property owner’s appraisal is higher than the city’s, a settlement at the midpoint between the two appraisals is considered appropriate. Denver officials stated that it is the city’s practice to pay more than the fair market value of the property to compensate for inconvenience or intangible difficulties caused by condemnation. When seeking a negotiated settlement, the authorities we contacted had different limits on the percentage amount over the appraised value that they could offer prior to invoking their power of eminent domain. For example, the Community Redevelopment Agency of Los Angeles cannot make an offer of over 120 percent of the appraised value of the property without agency board approval. A higher offer by the redevelopment agency may be considered a gift of public funds, which the agency, by law, cannot make, according to officials. In New York City, based on agency protocols, the Department of Citywide Administrative Services may pay no more than 110 percent of the original appraisal prior to the use of eminent domain. Similarly, the city’s Department of Housing Preservation and Development has established rules to pay no more than 120 percent of the original appraisal prior to the use of eminent domain. In Chicago, a city official estimated that within 1 year, 75 percent of owners settle at an amount between 100 and 150 percent of the original offer. Once authorities are certain that the owner will not settle or that the legal owner cannot be located, they may file to condemn the property with eminent domain in the appropriate court. However, the manner in which authorities can invoke eminent domain differs. For example, two state DOTs we contacted have established policies to invoke eminent domain for each acquisition undertaken, including acquisitions involving willing sellers, to ensure that the authority is the sole legal title holder to the property. Multiple cities and state departments of transportation told us they also had the statutory authority to use a procedure known as “quick-take,” which refers to the ability to petition a court for immediate vesting of a property’s title. If the petition is granted, the court transfers the property to the authority, and the final compensation is determined at a later date. The authority must deposit the estimated compensation with the court, which owners may withdraw without relinquishing their ability to argue for more compensation. Local officials noted that most eminent domain filings end in a settlement between the authority and the owner without the need for a trial. For instance, officials from three authorities we contacted estimated that 90 percent of all eminent domain filings were settled prior to trial.
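The percentage ceilings and midpoint settlements described above amount to simple arithmetic. The following sketch, using hypothetical dollar figures, illustrates how a 120 percent cap (Los Angeles) or 110 percent cap (one New York City agency) bounds a pre-condemnation offer, and how a Denver-style midpoint settlement is computed; it is not an actual agency calculation.

    # Hypothetical figures for illustration only.
    authority_appraisal = 400_000  # authority's appraised fair market value
    owner_appraisal = 500_000      # owner's (higher) independent appraisal

    def max_offer(appraised_value, cap_percent):
        """Largest offer permitted before invoking eminent domain (or before
        seeking board approval), under a percentage-of-appraisal cap."""
        return appraised_value * cap_percent / 100

    def midpoint_settlement(authority_appraisal, owner_appraisal):
        """Denver-style settlement: the midpoint of the two appraisals when
        the owner's appraisal is the higher one."""
        return (authority_appraisal + owner_appraisal) / 2

    print(max_offer(authority_appraisal, 120))  # 480000.0
    print(max_offer(authority_appraisal, 110))  # 440000.0
    print(midpoint_settlement(authority_appraisal, owner_appraisal))  # 450000.0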
Although few eminent domain cases go to jury trial, authority officials stated that eminent domain is the most effective tool they have to acquire needed property from owners who hold out for a higher purchase price or refuse to sell. Officials in one city explained that they also use eminent domain to void leases on property, while other officials explained that they use it to obtain abandoned property when no owner can be located. For example, city officials with whom we spoke stated that eminent domain is needed to acquire properties from owners who purchase and hold on to property after an area is slated for redevelopment. Officials stated that they generally believe these owners are speculating that land values will increase because of the expected public investment in the redevelopment project. The relocation stage may involve outreach by the condemning authority and the provision of relocation benefits by agency or contracted relocation specialists to displaced residential or commercial owners or tenants. For instance, New York City defines a displaced party as any family, individual, partnership, corporation, or association that is displaced or moves from real property, or that moves its personal property from such real property, on or after the date of acquisition of the real property for a public improvement or urban renewal site or project. The URA’s definition of a “displaced person” covers anyone who moves after receiving written notice that a program or project undertaken by a federal agency or with federal financial assistance intends to acquire his or her property (including a rental property). Some authorities, such as the cities of Los Angeles and Chicago, have dedicated offices within the condemning agency to manage the provision of relocation benefits. Other localities, including New York City, sometimes contract out this responsibility to private relocation firms, for example, when undertaking larger projects involving multiple displaced parties. Multiple relocation specialists with whom we spoke, whether they were authority officials or contracted specialists, reported contacting the property owner as soon as the public entity received the authority to take the owner’s specific property, or soon thereafter, and providing relocation support for the duration of the settlement or condemnation. For example, Chicago officials told us that within five days of the city’s first offer letter, relocation specialists will contact the property owner and tenant to set up a face-to-face interview to determine their needs. Relocation specialists may meet with displaced residents at numerous steps of the land acquisition process. They may explain the residents’ rights, benefits, and obligations and may interpret legal notices received from the authority. According to some relocation specialists, residential tenants and owners are to be relocated to comparable replacement housing that is decent, safe, sanitary, and functionally equivalent to the displaced dwelling. Relocation specialists from two localities described making every effort to house residents in neighborhoods of their choice, including their current neighborhood if possible, and finding rental housing for residents who were renters. In four of the five cities we visited, officials showed us new residential apartment buildings into which they had moved displaced residents; one of these buildings included services such as child care and computer centers.
For business occupants, relocation specialists may conduct comprehensive analyses of the business’s location requirements, fixtures, moving costs, and other relevant considerations to find a comparable site for business relocation. In one city, we were told that relocation specialists work with the business owners to address all commercial issues, including negotiating comparable square footage costs and rent and getting the same phone number transferred to the new location. Some relocation specialists are associated with local retail and office landlords and attempt to negotiate a price which, combined with relocation funding under the URA, initially can keep rental costs similar to those at the previous location. According to all of the relocation specialists whom we interviewed, relocated commercial occupants generally have done better financially in other, more economically stable neighborhoods. Relocation benefits under the URA and many local and state laws include some or all of the following payments to residential and commercial tenants:

Actual moving expenses, which may include packing and moving expenses, storage of personal property, the cost of dismantling, disconnecting, and reconnecting machinery and utilities, loss of personal property caused by the move, the expense of searching for a substitute business site, moving insurance, advertising related to the move, or other related expenses (or a fixed moving allowance in some locations);

Compensation over the acquisition cost of the property for an owner to purchase a comparable replacement home, pay increased mortgage costs, or pay closing costs;

For tenants, a monthly rental subsidy to rent a comparable dwelling for a period of 42 months that is equal to the differential between what the tenant was paying at the displaced dwelling and the payment at the comparable dwelling (many localities also allow this payment to be made in a lump sum so that renters may use it as a down payment to purchase a home; an illustrative computation follows below); and

A payment in lieu of moving and related expenses in nonresidential moves, which may be made to a commercial owner when relocation would result in substantial loss of business.

For selected projects we reviewed or visited where eminent domain was used, authorities described the previous conditions of the selected areas, and they told us of, or we observed, some of the benefits realized by communities after the projects were completed. Examples of benefits to the community included increased job opportunities and modernized or safer infrastructure. Property rights groups told us about the negative effects that the use of eminent domain could have on property owners, community residents, and businesses, such as the loss of small businesses or the dispersal of residents who relied upon each other in informal networks. In addition to the losses to the community, the property rights groups noted that the manner in which authorities implement procedures for using eminent domain also affects property owners. For example, national and local property rights groups identified problems with how some authorities communicate with property owners, designate areas as blighted, and value property. The use of eminent domain generates benefits and costs that could affect various parties, such as property owners, businesses, authorities, and city officials, whose interests may diverge. The great variety in benefits and costs makes it difficult to establish objective measures to examine the overall impact of projects involving eminent domain.
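As an illustration of the 42-month rental subsidy listed above, the following sketch computes the total payment from the monthly rent differential. The rents shown are hypothetical, and the function omits the eligibility rules and payment caps that apply in practice.

    MONTHS = 42  # subsidy period for displaced tenants under the URA

    def rental_subsidy(displaced_rent, comparable_rent, months=MONTHS):
        """Total subsidy: the monthly differential between the comparable
        replacement dwelling and the displaced dwelling, times the subsidy
        period. Many localities allow payment as a lump sum usable as a
        down payment on a home."""
        differential = max(0, comparable_rent - displaced_rent)
        return differential * months

    # A tenant paying $600 per month who must move to a comparable $750 unit:
    print(rental_subsidy(600, 750))  # (750 - 600) * 42 = 6300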
In addition, the lack of aggregated data on the purpose and frequency of eminent domain use further limits this effort. However, for selected projects we reviewed or visited where eminent domain was used, authorities described the previous conditions of the selected areas and told us about, or in some instances we observed, some of the benefits realized by communities after the projects were completed. Prior to condemnation, according to local and state officials, a variety of conditions existed in selected areas in which eminent domain was used. For example, according to city officials, some of the urban areas slated for redevelopment included buildings in substandard condition. Many buildings were vacant or abandoned, with few or no improvements made for multiple years; some properties had missing window glass, collapsed roofs, accumulated debris on the parcel, and other conditions that created a public health hazard. However, in some cases that we reviewed, authorities acquired occupied residences and operating businesses to redevelop an area. In one area, a building occupied by long-standing businesses providing retail services to the neighborhood was under threat of condemnation by eminent domain. Although this building was not unusually dilapidated, it was within a redevelopment area designated as blighted and thus subject to acquisition by eminent domain. According to local and state officials, road conditions in some projects reviewed included inadequately sized or dilapidated streets, sidewalks, or curbs. Traffic flow and access in some neighborhoods were poorly planned. For example, industrial traffic reportedly moved through residential areas in one project we reviewed. In other road or highway projects, according to state transportation officials, conditions included operable but older roads requiring modernization, such as new interchanges to better handle traffic. Other roads required new safety features, such as turning or deceleration lanes, or the straightening of tight curves in the road. We also reviewed other types of infrastructure projects, such as the New York City water tunnel previously discussed. According to city officials, the condition of the original water tunnels serving the metropolitan area was questionable because they had not been inspected since being built in the early twentieth century. Condemned property is often redeveloped as part of a larger redevelopment or improvement project. City officials considered the outcomes of these projects as benefits to the community and emphasized that they could not have completed the projects without the use of eminent domain. However, authorities told us they often obtain much of the land for projects, including urban redevelopment, transportation, utility, and other projects, through negotiated purchases and condemn only a small number of the needed properties. Therefore, benefits to the community cannot be attributed solely to the use of eminent domain and are more likely the result of the redevelopment projects for which eminent domain was used. According to local and state officials and based on some of the projects we observed, the redeveloped areas have a variety of characteristics. In urban areas, redevelopment led to additional housing stock (including affordable housing set-asides), new commercial centers with additional local job opportunities, reduced crime in some areas, and modernized infrastructure.
For example, in Chicago, the downtown redevelopment of a sparsely occupied block produced a 27-story municipal building, which city officials described as fully leased with retail stores and office space, as well as a parking garage and a mass transit station serving many parts of the city, including both airports. In New York City, the Department of Housing Preservation and Development used eminent domain to assemble land for the Melrose Commons project in the South Bronx. The agency is working with several private and nonprofit developers to construct over 3,200 affordable housing units to turn what a high-level official characterized as one of the most blighted areas in the city into a thriving neighborhood. Officials cited benefits from transportation projects that include safer, more efficient roadways and traffic patterns. In Los Angeles, the widening of a street from two lanes to four lanes with center left-turn lanes alleviated what officials described as perennial congestion, provided additional parking, and reduced accidents on a major artery in the western part of the city. Additional improvements resulting from this project included new curbs, gutters, street lighting, traffic signals, sewers, and storm drains. City officials cited other types of improvements resulting from redevelopment, such as less contaminated land and new public green space or parks. According to Baltimore officials, vacant lots sometimes are acquired and provided to community groups for gardens. New York City officials explained that eminent domain could be an important tool for acquiring brownfields in the city for remediation, although authorities there have yet to do so. Much of the 581 miles of waterfront in New York City has been contaminated in the past. According to officials, many developers are not interested in developing contaminated waterfront properties because they do not want to be liable for cleaning up the contamination. Property owners also may be unable or unwilling to sell properties that are or may be contaminated; thus, the city could acquire the properties through eminent domain, decontaminate them, and put the land to public use. Property owners, property rights groups, and national community-based organizations described a number of negative effects from using eminent domain. For example, properties acquired through eminent domain may remain unused for some time, according to city officials and a property rights group. As an example, in downtown Chicago in 1989, the city condemned 16 improved, occupied buildings (one whose historic landmark status the city had removed prior to condemnation) for a two-tower office and retail development. Because of a downturn in the local real estate market, the proposed project did not begin. However, according to Chicago officials, a $500 million development is now under construction on the long-vacant land. In another example, Los Angeles acquired an industrially zoned parcel through eminent domain to build an animal shelter. According to city officials, to preserve the parcel for commercial use, the city is considering an alternate site for the animal shelter. As a result, the condemned property remains unused to date. In both of these instances, the cities expended public funds acquiring the land, including legal costs associated with invoking eminent domain. Property rights groups and one national community organization further noted that certain costs to communities may not be compensated when eminent domain is used.
These costs include the dispersal of residents in low-income communities to other neighborhoods or cities. The residents of low-income neighborhoods may rely on one another for day-to-day needs such as child care, according to the community organization. If these residents lose their homes through eminent domain and are relocated to new areas, then some of the resources upon which they depend also can be lost. Property rights and community groups added that owners also suffer emotional costs when losing a home. Making people leave their homes can be destabilizing to individuals or families even when relocation assistance is provided. Property rights groups also noted other community impacts, such as rent destabilization in neighborhoods affected by eminent domain and a reduction in an area’s affordable housing stock when units are acquired and replaced by commercial developments. Other potential costs to the community that the groups mentioned include reductions in homeownership and in the number of small businesses in an area. Furthermore, according to one property rights group, there is a tendency for cities to use eminent domain to remove manufacturing companies and replace them with retail businesses to collect increased sales tax revenue. However, removing manufacturing companies may have a negative effect on the community because it decreases the number of manufacturing jobs that are available. The procedural requirements we previously described could provide some safeguards for property owners, such as ensuring that they receive timely public notice and just compensation. However, the effectiveness of the procedures depends on how well they are implemented by the authority invoking eminent domain. Property owners and property rights advocates we interviewed identified problems with how some authorities communicate with property owners, designate areas as blighted, and value property. Property rights advocates also expressed concern that owners may not fully comprehend the benefits available to them when an authority acquires their property. Multiple owners and property rights groups with whom we met reported receiving little advance notice, misleading notice, or no notice at all of public hearings or proposed condemnation actions by the relevant authority. These problems may prevent owners from voicing concerns about the proposed acquisition of their properties. For example, property rights groups in Los Angeles told us that many owners do not receive the statement of interest-owner participation letter that the authorities told us they send to all owners during the planning stage of each project. Property rights groups in Denver and New York said that notice was posted on signage but not sent in a letter. According to the Denver group, posting a notice at one site would not disseminate information about public hearings to most owners in a community. In another locality, the public notice that property owners received reportedly was not clear. For example, one authority sent a notice informing owners of the redevelopment project and their responsibilities in a format that some owners confused with junk mail because it did not resemble an official letter. Finally, in Denver, property rights advocates told us that owners need notice earlier in the process. They said that owners learn about the condemnation only after the initial planning has occurred and the urban renewal area has been designated.
However, authorities in the cities we visited consistently said that they always sent owners notice of hearings, which give affected property owners multiple opportunities to voice concerns about the proposed plan and potential property acquisition, and sent notice of acquisition activities as required by all applicable laws and regulations. Even when notice is received, owners may not have the financial resources or technical knowledge to fully comprehend what actions an authority is taking, what recourse they may have, or where to go for assistance in understanding the proceedings or terms mentioned in the notice. For instance, one authority sent property owners a statement of interest-owner participation letter stating that a redevelopment project was proposed for their area. The letter stated that owners could, within 30 days, propose their own alternative plan for redevelopment of the area. However, property rights groups explained that most owners do not have the money or skills needed to develop and execute a redevelopment plan. On the other hand, officials in this locality explained that multiple public funds and technical assistance were available to help owners formulate alternative business development plans. This letter, officials said, provided owners the information they needed about how to access these public benefits, remain in the community during redevelopment, and ultimately benefit from the project. One local organization involved in urban redevelopment explained that local public hearings and the votes by governmental bodies, such as city councils, on proposed project plans (which may provide authorities the power to take property) occurred on different dates. The concern was that the votes could happen without public attendance, thereby reducing the transparency of the process. Furthermore, a concern was raised about the amount of time owners had to speak at hearings. In one locality, each owner was reportedly allowed only three minutes to address the elected body that would decide whether to approve or deny the project plan in which eminent domain might be used. To facilitate better communication between property owners and government authorities looking to assemble land, some states, such as Utah, have established a Property Rights Ombudsman’s office. According to the official currently holding the position in Utah, the ombudsman is an attorney hired by the state as an independent source of information and assistance for property owners and others involved in the acquisition of property for public projects. The ombudsman, who provides services free of charge to owners, can mediate disputes, arrange for arbitration, order appraisals, and provide information to property owners and governmental authorities acquiring land. Connecticut and Missouri reportedly have recently adopted statutes creating similar property rights ombudsman offices. Many property rights groups and owners with whom we spoke were critical of the blight designation processes in their localities. They said that nonblighted parcels may be designated blighted because of factors such as design flaws, high density, turnover of occupants, and irregularly shaped parcels. According to some property rights groups, by these criteria almost any property or area in question may be considered blighted. They felt that blight should be defined narrowly, based mainly on public health and safety risks posed by a specific property. According to officials from one national organization, farmland may be wrongly designated as blighted.
Many farms have older and what may appear to be dilapidated homes and barns, or old storage sheds and tractors, which make the property especially susceptible to a blight designation. The officials added, however, that these buildings and machines are often fully functional or operable, meet housing or farm needs, and pose no public danger. In the projects we reviewed where eminent domain was used to remove blight, blight was almost always designated by area (such as a city block) rather than by parcel. Owners and property rights groups opposed to this practice stated that nonblighted property can then be taken based on this area-wide designation. During the project planning stage, usually for projects that are considered urban redevelopment or blight removal, authorities designate the physical boundaries of areas selected for redevelopment and determine the presence of blight in the area. This designation is often then applied to all parcels in the area, which, in turn, allows authorities to acquire any property in the designated area. Property owners and community groups argue that not all property in such areas is blighted; rather, many properties are improved and occupied. Furthermore, we were told that the planning stage and blight designation can occur years before an authority is able to commence acquisition and construction in the area. For example, one area we reviewed was initially deemed blighted in 1986. The blight designation, and with it the threat of eminent domain, destabilized property values in the neighborhood for nearly 20 years, according to one owner. Although the area has been an official redevelopment area since 1986, local officials told us that state redevelopment law limits a blight designation to 12 years. The authority is then required by law to return to the deciding elected body to again prove blight before it can move forward with the project. Property rights groups also expressed concerns that blight may be exacerbated by the redevelopment activity itself, a phenomenon termed “developer blight”: the physical decline of a parcel or area, such as a city block, once a redevelopment project has been announced. For example, in Denver, a property rights group told us that it is difficult to isolate the causes and effects of blight in their area because once an area is designated as blighted, its decline might hasten. Public knowledge of the impending redevelopment and related property acquisition, according to one concerned group, can cause property values to fluctuate and discourage property owners from maintaining their dwellings or businesses; in other words, it can cause an area to become blighted. In one neighborhood, according to a local property rights group, improved residential buildings were largely occupied and multiple businesses were open prior to the announcement of a redevelopment project. However, once the project was announced and the authority began the project design and planning stage, the developer purchased many of the properties and, over time, failed to maintain them properly. This activity, according to the property rights group, constituted developer-initiated blight in the neighborhood. Remaining owners are concerned that “developer blight” has reduced their property values and that they will not receive what they consider just compensation from the authority as the project proceeds.
Another group suggested that redevelopment plans and blight designations may prevent new businesses from relocating to a neighborhood that was revitalizing on its own because of the public’s awareness that authorities will have the power to use eminent domain in the area. Authority officials told us that the areas they seek for redevelopment are not revitalizing on their own but rather are declining and becoming further blighted. While property valuation is intended to compensate property owners at fair market value for their property, property rights groups and owners expressed concern about the reasonableness of property appraisals. Multiple property rights groups believed that localities undervalue property and make offers lower than owners would receive on the market. One group cited large differentials between final jury awards and first appraisal amounts in cases in which owners challenged a condemnation. Owners in this property rights organization who challenged initial offers reported receiving, on average, 40 percent more in compensation than the initial offer. Conversely, officials of the local authority claimed that it would be to their detriment to make an unreasonably low offer at any stage in the negotiation process because an offer not made in good faith might enable a jury to award additional damages to a prevailing owner. Some believe that property is undervalued because of the timing of appraisals. In New York, one owner, attempting to remain in his home, stated that if he were eventually required to sell his property, it would be appraised long after all other neighborhood owners had settled and moved away. With most of the neighborhood acquired, the owner believed that, should he lose his bid to keep his property, its value would be lower than when the neighborhood was fully occupied. One state mediator of property disputes explained that an approved redevelopment area creates a hardship for owners, which is exacerbated when the project construction date is unknown. Owners in this case may have a more difficult time selling their property on the open market because it is within a redevelopment zone and subject to eminent domain. On the other hand, in one city we reviewed, buyers actively sought property in areas slated for redevelopment because the prospect of an authority acquiring the property was high. Property rights groups also noted that property and business owners may be uninformed about the benefits provided to them once their property is taken by eminent domain. In Denver, a property rights group stated that owners did not always realize that money was available for relocation benefits. In other localities, property rights groups noted that owners might have known that some financial support was available but might not have been aware of the range of benefits. However, property rights groups also stated that acquisition and eminent domain can cost business owners more than the amount compensated under the URA or state and local relocation regulations. For example, the URA often may cover only part of the expenses related to lost inventory or to transferring inventory to the new location. Moreover, under the URA, businesses are not compensated for lost goodwill or for loss of business attributable to the new location.
Multiple property rights groups further explained that owners often are unable to fight a condemnation action, whether to retain their homes or businesses or to seek additional compensation, because the costs of hiring an appraiser or attorney, as well as court costs, are too high. Property rights groups believe that many owners sell their property under the threat of condemnation when they otherwise would not do so because they cannot afford to fight the action, which can take several years. In New York City, a contested condemnation can take more than 10 years to settle, according to city officials we interviewed. Authorities counter that, under certain circumstances, money is available to owners to fight eminent domain. In some localities, authorities can use quick-take, in which the authority obtains the title of the property and deposits the estimated compensation with the court. Owners, authorities note, can withdraw these funds to challenge the authority’s valuation of their property. However, a property rights group and a state mediator emphasized that owners cannot use these funds to dispute the authority’s right to take the property. Challenges to the right to take typically must be made and heard prior to quick-take procedures. According to one national organization, partial condemnations of farmland do not always result in just compensation. If authorities were to take only a portion of a farm and that portion ran directly through the middle of the property, the owner’s business could be negatively affected. For example, one state reportedly developed a toll road that ran through the middle of a farm property. The farmer was paid the value of the land taken by the authority, but according to this organization, the damage done to the farm’s business was not compensated. The road reduced the farm’s crop yield, forced the farmer to maintain equipment on both sides of the walled toll road, and necessitated the costly alteration of an irrigation system. Numerous states have adopted at least one of three general types of changes to their eminent domain laws since June 2005. In particular, some states amended their eminent domain laws to place restrictions on the use of eminent domain for economic development, increasing tax revenues, or transferring condemned property from one private entity to another. Other states revised their eminent domain procedures or added requirements. Finally, some states defined or redefined key terms related to the use of eminent domain, such as blight or blighted property, public use, and economic development. Several states had ballot initiatives on constitutional amendments to restrict current eminent domain laws. In addition, some states, including those that did and did not enact any changes, commissioned studies of their eminent domain laws. After the Supreme Court’s Kelo decision, 29 states enacted at least one of three general types of changes to their eminent domain laws from June 23, 2005, through July 31, 2006. These changes include placing certain restrictions on the use of eminent domain, revising procedural requirements, and defining or redefining key eminent domain terms. While at least 3 of the 29 states specifically referenced the Kelo decision in connection with their legislation, others stated that the legislation was enacted to protect property rights and limit eminent domain use.
Figure 3 identifies the states that enacted changes to their eminent domain laws and the types of changes they made. According to our analysis, 23 of the 29 states enacted changes that placed restrictions, with certain exceptions, on the use of eminent domain for economic development, increasing tax revenues, or transferring condemned property to a private entity (see fig. 3). Specifically, some of these states prohibited the use of eminent domain to transfer private property to a private entity for economic development unless the primary purpose of the use was to eliminate blight. For example, both Alabama and Maine now prohibit condemning authorities from taking property in a nonblighted area for purposes of private retail, office, commercial, residential, or industrial development or use. In addition, Ohio imposed a moratorium, through December 31, 2006, on the use of eminent domain to take land within a nonblighted area when the purpose is economic development that leads to ownership of the property being vested in another private person. Furthermore, Florida prohibits the use of eminent domain to take private property for the purpose of preventing or eliminating slum or blight conditions. However, most of the states that enacted changes restricting the use of eminent domain for economic development, increasing tax revenues, or transferring condemned property to a private entity did make an allowance for the transfer of private property to a private entity for public rights-of-way and public utilities. Some states included other exceptions. For example, Alabama, Kansas, and Nebraska allow the use of eminent domain to clear a defective title under certain circumstances. Twenty-four of the 29 states changed their eminent domain procedures or added new requirements (see also fig. 3). Some states placed the burden of proof on the condemning authority to show that the use is public, that the taking is necessary to remove blight, or both. For example, Colorado law states that condemning authorities must prove by a preponderance of the evidence that an eminent domain taking is for a public use. Furthermore, Colorado law sets a higher standard if the purpose is to eliminate blight, requiring condemning authorities to show by clear and convincing evidence that the taking is necessary for the elimination of blight. In addition, some of these states require condemning authorities to provide improved or additional public notice and hearings prior to condemning a property. Utah law requires that written notice be provided to the property owner of each public meeting at which a vote on the proposed taking is expected to occur and that the property owner be given an opportunity to be heard on the proposed taking. West Virginia redefined the requirement for public notice to require that a certified letter be sent to the property owner informing the owner about the public hearing and the right to an inspection to determine whether the property is blighted. Some states also passed changes requiring condemning authorities to negotiate in good faith and increase the level of compensation to be paid to owners prior to invoking eminent domain. For example, Missouri law establishes requirements for the amount of compensation, which may be more than the fair market value.
Missouri law also requires condemning authorities to pay, in addition to the fair market value, a “heritage” value for certain property owned by the same family for more than 50 years, which is equal to 50 percent of the fair market value of the property. Other procedural changes enacted by some of the states include providing the former owner of a condemned property the opportunity to repurchase the property if it was not used within a certain period of time or for the stated purpose and requiring the use of eminent domain to be approved by a governing body. Twenty-one of the 29 states defined or redefined key terms related to the use of eminent domain, including blight or blighted property, public use, and economic development (see also fig. 3). In particular, some states redefined blight or blighted property to include several explicit factors, generally emphasizing factors that are detrimental to public health and safety and removing aesthetic factors, such as irregular lot size. For example, California’s statutes require that for an area to qualify for redevelopment, it must be predominantly urbanized, with a combination of physical and economic conditions of blight so prevalent and substantial that they cause a serious physical and economic burden that cannot be reversed or alleviated by private enterprise or governmental action, alone or in combination, without redevelopment powers and financing mechanisms. Prior California law had allowed, as an exception to its general rule, property subdivided into parcels with irregular shapes and inadequate sizes for proper development to be considered as qualifying an area as blighted for redevelopment purposes. California amended its definition to remove this exception. In addition, some states redefined public use to include the possession, occupation, or use of the public or a government entity, public utilities, roads, and the addressing of blight conditions. For instance, Iowa defined public use to include acquisition by a public or private utility, common carrier, or airport or airport system necessary to its function. Indiana included highways, bridges, airports, ports, certified technology parks, and public utilities as public uses. Finally, some states also established that economic development, which those states defined to include activities to increase tax revenue, the tax base, employment, or general economic health, does not constitute a public use or purpose. At least six state legislatures approved constitutional amendments restricting current eminent domain laws, which were placed on the ballot for voter consideration. For example, the Louisiana legislature approved two proposed constitutional amendments that were passed on September 30, 2006, by the voters in that state. These two amendments, among other things, (1) prohibit the taking of private property for use by or transfer to a private person; (2) limit public purposes to a list of factors, which includes such purposes as the removal of a threat to public health and safety; (3) exclude economic development, enhancement of tax revenue, and incidental benefits to the public from being considered in the determination of a public purpose; and (4) provide an option for the former owner to purchase condemned property, or a portion of it, should the property go unused by the authority that originally acquired it.
In addition, citizen-initiated proposals to amend state constitutions obtained the requisite number of signatures and were placed on the ballot in California, Nevada, and North Dakota. For example, the Nevada Property Owners Bill of Rights initiative to amend the state constitution with regard to eminent domain qualified for the Nevada 2006 general election ballot. The amendment would, among other things, establish just compensation as the amount necessary to place owners in the same position monetarily as if the property had not been taken and prohibit the direct or indirect transfer of property from one private party to another. Several states and state associations also commissioned studies to determine whether any changes were needed to their eminent domain laws. For example, in November 2005, the president of the New York State Bar Association appointed a special task force on eminent domain to provide legal analysis and recommendations about appropriate legislative and regulatory considerations in the practice of eminent domain law in the aftermath of the Kelo decision. According to a report issued by the task force, little state-specific research and data exist to accurately assess both the need for, and the impact of, changes to the state’s eminent domain laws. The task force suggested that the state legislature begin collecting and analyzing such data before deciding on appropriate substantive modifications to the law. For example, the report lists several questions that could be answered through empirical research, including how often condemnation proceedings are instituted and how many times eminent domain is used for economic development. Consequently, the task force recommended that a Temporary State Commission on Eminent Domain be established to further study the use of eminent domain in New York. In June 2005, the Governor of Missouri established by executive order a task force to study the use of eminent domain, including cases in which the property being acquired would not be directly owned or primarily used by the general public. The task force recommended three categories of actions: redefining the scope of eminent domain, improving the procedures and process required for exercising eminent domain, and providing penalties for condemning authorities that abuse the eminent domain process. As a result, the state enacted changes to its eminent domain laws in July 2006. The Governor of New Mexico also issued an executive order in which he stated that the most effective method of examining Kelo’s impact on the state’s eminent domain laws and practices was to convene a task force of the state’s eminent domain experts to determine what steps should be taken to ensure that condemnation would be used responsibly. He therefore appointed a state commission to make recommendations on eminent domain reform. Finally, in November 2005, Ohio enacted legislation that created a task force to study the use and application of eminent domain in the state and how the Kelo decision affects state law governing the use of eminent domain. On August 1, 2006, the task force issued its report, which, among other things, recommended that the state retain the use of eminent domain as a tool for the elimination of blight, even if the property that is taken is converted to another private use; rewrite and tighten the definition of blight; and require that a majority of the properties in an area be blighted to designate the area as such.
The report also recommended (1) prohibiting eminent domain takings solely for the purpose of generating added tax revenue, (2) prohibiting declarations of blight based solely on the additional revenue that could be generated, and (3) compensating the property owner for actual moving and relocation expenses and, when appropriate, loss of business, goodwill, and attorney’s fees. An inherent right of sovereignty, eminent domain is a government’s power to take private property for a public use while fairly compensating the property owner. Despite its fundamental significance, little is known about the practice or extent of the use of eminent domain in the United States. The matter of eminent domain remains largely at the level of state and local governments, which, in turn, delegate this power to their agencies or designated authorities. Since multiple authorities have the power to take private property within the same jurisdiction without any centralized tracking of eminent domain use, data such as the purposes for which eminent domain is used or the number of times it is used in a given locality are not readily available. The testimonial evidence we obtained from state and local authorities on the purposes for which eminent domain can be and was used generally pointed to long-established uses, such as taking land for infrastructure, particularly transportation-related projects; uses that addressed economic and social conditions, such as blight; relatively more recent uses, such as environmental remediation; and initiatives aimed at promoting economic activity or community redevelopment. Recently, popular attention has concentrated on cases where the condemned land was ultimately used for economic development projects and appeared to benefit private entities. In the absence of statewide or nationwide data, it is difficult to quantify the usage of eminent domain; for example, there are no data on how frequently private-to-public or private-to-private transfers of property occur or how frequently eminent domain has been used by state and local governments, their agencies, or designated authorities. Concerns and debates on the use of eminent domain for economic development purposes, as well as the Kelo decision, have played a role in recent state legislative activity. Many state legislatures have acted to prohibit certain eminent domain practices, such as preventing property from being transferred from one private party to another for specific purposes, purely economic development projects being one example. Many states changed their eminent domain laws to permit a private-to-private transfer only if it meets certain conditions, such as the property having been determined to be blighted. Since these recent modifications to state laws have not been tested and historical data on eminent domain use are not available for comparison purposes, how these laws may affect property rights or state and local governments’ use of eminent domain is unclear. Our discussions with authorities and property rights groups suggest that the impact of eminent domain often depends on the nature of the project, the parties involved, the costs related to legal proceedings and relocation, and the administration of procedural requirements. On the one hand, local and state government officials generally described eminent domain as one of several tools necessary for land acquisition and explained that most of the properties assembled for projects are obtained through negotiated settlements with owners.
Representatives from the authorities in the cities we visited provided examples of how projects for which land was assembled using eminent domain have yielded benefits to the public, including increased housing stock and new commercial centers that offer local job opportunities. On the other hand, property rights advocates described the high costs property owners face in challenging property valuations and the intangible effects on neighborhoods when residents are involuntarily dispersed. Although we observed some of the benefits derived from the projects we visited and heard of instances in which property owners reportedly were misled by authorities about condemnation proceedings or appraisals, the lack of measures and aggregated data does not allow us to comment on the overall impact eminent domain has had on property owners and communities. Regardless of their stance in the debate on eminent domain, the government officials and property rights groups we interviewed identified a few concerns related to the procedures for invoking eminent domain, including the adequacy of compensation amounts and the timeliness of notification about public hearings. First, many government officials we spoke with said that certain benefits provided under the URA to displaced individuals and businesses, such as actual moving costs and the expenses of finding a replacement site for a business, may not offer adequate compensation under certain circumstances. For example, the URA places a $10,000 cap, an amount left unchanged since 1987, on reestablishment expenses for businesses that have to relocate. A 2002 FHWA study confirmed the inadequacy of the reestablishment payments. Second, property owners and organizations advocating for property rights repeatedly told us that property owners may have limited opportunity to attend, or may be unaware of the need to attend, public hearings at a project’s planning stage to voice their opinions about the proposed acquisition of their property. For example, some property owners and property rights groups explained that property owners may not receive public notice on a timely basis or may lack sufficient understanding of the legal process to be fully engaged in the hearing discussions. To address the latter issue, at least one state has created an ombudsman office to provide information and assistance for property owners and others involved in the acquisition of property for public projects. Nevertheless, these two concerns may deserve continued attention, given that just compensation and public hearings are two important safeguards designed to protect property owners. We provided a draft of this report to the Departments of Justice, Transportation, and Housing and Urban Development for their review. The Department of Transportation provided technical comments, which we incorporated where appropriate. The Departments of Justice and Housing and Urban Development did not have any comments. We will send copies of this report to the Chairman and Ranking Member, Subcommittee on Transportation, Treasury, the Judiciary, Housing and Urban Development, and Related Agencies, Senate Committee on Appropriations; and the Chairman and Ranking Member, Subcommittee on Transportation, Treasury, Housing and Urban Development, the Judiciary, District of Columbia, and Independent Agencies, House Committee on Appropriations. We also will send copies to the Secretary of Housing and Urban Development, the Secretary of Transportation, and the Attorney General.
We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Congress, in the Transportation, Treasury, Housing and Urban Development, the Judiciary, the District of Columbia, and Independent Agencies Appropriations Act, 2006, mandated that we conduct a nationwide study on the use of eminent domain. Our objectives were to provide information on (1) the purposes for which, and the extent to which, eminent domain can be and has been used; (2) the process states and select localities across the country use to acquire land, including by eminent domain; (3) how the use of eminent domain has affected individuals and communities in select localities; and (4) the changes state legislatures made to laws governing the use of eminent domain from June 2005 through July 2006. To report on the purposes for which eminent domain has been and can be used, including the extent of its use, we reviewed pertinent sections of each state’s constitution to determine whether the state generally limits the use of eminent domain to public use only. We also reviewed specific blight definitions for the 10 states we selected: California, Colorado, Florida, Illinois, Massachusetts, Missouri, New York, Texas, Virginia, and Washington. In addition, we interviewed multiple national associations of local and state government officials and planning professionals, national public interest groups, national property rights groups, and the National Academy of Public Administration to gain their perspectives on past, current, and potential uses of eminent domain. We also interviewed federal officials from the Departments of Transportation, Housing and Urban Development, and Justice, as well as the Environmental Protection Agency, to learn how federal programs or funding may be involved in eminent domain proceedings that state and local governments undertake. Finally, we requested information from state legislative research offices about which authorities within the selected states have the power to use eminent domain. To learn about specific instances in which eminent domain was used, we collected project information from multiple sources. We solicited project information from 10 different national organizations that had either testified before Congress on eminent domain matters or met the criteria laid out in our mandate for the types of organizations we were expected to consult during our study. We provided these 10 organizations with a formal request for project information. Our request included basic criteria that each submitted project should meet: that eminent domain had been used (rather than only threatened), that the project was substantially completed by December 2004, that the project was not primarily related to transportation, that the project was located within the 10 states we selected, and, preferably, that the project was funded with some federal financial assistance. In addition, we explained to the organizations that we would accept projects that did not involve federal funds, as long as the other four criteria were met.
We requested that each organization provide at least 5 different projects within each of the 10 states we selected for review. In total, the 10 national organizations provided 134 projects. Based on the criteria outlined above, the desire to have at least one project in each of the selected states, and the goal of providing a diversity of examples, we selected a total of 40 projects from the 134 for further review. To obtain further information on the projects, we made at least three attempts to contact each local authority responsible for completing or overseeing the project. We reached 36 of the 40 local authorities and learned that 9 of the projects did not meet our criteria because eminent domain was not used, the project was not yet complete, or the project was located in a state not included in our 10 selected states. For the 27 remaining projects for which we were able to confirm basic project information, such as the use of eminent domain or year of completion, we sent a detailed e-mail request for project information to the individuals we had contacted at the local authority. Based on our conversations with the authorities responsible for the 27 projects, we scaled back the amount of information we were requesting and extended the deadlines for providing it. We received detailed project information for only 11 of the 27 projects. In addition to the efforts described above, we interviewed officials from the state departments of transportation of the 10 selected states. We decided to speak with these officials because interviews with national organizations and federal agencies and our literature research indicated that transportation-related projects often rely on eminent domain to assemble land. From these officials we also solicited detailed project information on transportation-related projects, mostly dealing with road improvements, construction, or expansion, in which eminent domain was used. From the state departments of transportation we received 6 projects that involved eminent domain and met the same criteria used to select projects provided by the 10 national organizations. We also contacted several state agencies responsible for brownfield remediation but did not obtain any additional projects from these agencies. To describe how state agencies and select localities invoke eminent domain, we relied on our interviews with the 10 state departments of transportation mentioned above. We discussed the departments' authority to invoke eminent domain, the planning phases they undertake, and their land acquisition and relocation processes. In addition, we interviewed officials from the 5 cities we visited: Baltimore, Maryland; Chicago, Illinois; Denver, Colorado; Los Angeles, California; and New York, New York. During our site visits, city officials told us about specific projects in which they said eminent domain was used; we toured 14 of these projects. City officials also provided written documentation related to the selected projects that included detailed project plans, court documents, applicable state statutes and municipal codes, and descriptions of relocation services provided to property owners and residents displaced by eminent domain proceedings. Finally, we reviewed pertinent sections of each state's constitution to determine whether there is a requirement that fair or just compensation be paid to the owner whose property is taken by eminent domain.
To convey how eminent domain has affected property owners and communities in select localities, we interviewed national and local organizations that advocate for property rights, as well as property owners who claimed to have been involved in eminent domain proceedings. In accordance with long-standing GAO policy, we excluded eminent domain takings currently under litigation and, therefore, focused only on past instances of eminent domain use. We discussed how eminent domain affects property owners, businesses, and residents with affected owners and organizations that advocate for property rights. To report on the changes state legislatures had made to laws governing the use of eminent domain, we reviewed legal databases and various Web-published information, such as the text or status of a bill, from the state legislatures of all 50 states to determine in which states changes occurred. We then analyzed the state laws identified and, based on our interpretation of those laws, grouped states into three broad categories in order to more easily describe which states enacted certain types of provisions in their eminent domain laws. The three categories were: (1) states that placed restrictions on the use of eminent domain, such as prohibiting its use to increase property tax revenues, transfer condemned property to a private entity, or assemble land for projects that are solely for economic development; (2) states that established additional procedural requirements, such as providing further public notice prior to condemnation; and (3) states that modified definitions for terms related to eminent domain use, such as blight or blighted property, public use, and economic development. We reviewed only those changes to state law that state legislatures passed and governors signed into law between June 23, 2005, and July 31, 2006. For other state and local laws referenced in this report, we did not undertake an independent legal review of the laws or of how they affect the use of eminent domain. To identify state requirements regarding eminent domain procedures, we relied on the state and local officials we interviewed and the information they provided. In the time frame allotted for our study, we could not review all pertinent state requirements regarding eminent domain authorities and procedures. In addition to the work outlined above, we conducted an extensive literature search to assist us in meeting our objectives. Primarily, we searched for other reports, studies, and academic papers that may have tallied or assembled data sets on eminent domain use or developed measures to assess the impact of eminent domain. To refer to or analyze data collected by others, we had to satisfy our criteria for identifying reliable and valid data, which include testing the methods and procedures others used in collecting the data. Although we identified some studies that were useful in providing context and outlining barriers to collecting and analyzing data related to eminent domain use, we did not find any with data that met our criteria. Our literature review yielded many articles, reports, and reviews of matters related to eminent domain; however, none provided an analysis of detailed data on eminent domain use. We conducted our work in accordance with generally accepted government auditing standards from January through November 2006 in Baltimore, Maryland; Chicago, Illinois; Denver, Colorado; Los Angeles, California; New York, New York; and Washington, D.C.
In addition to the individual named above, Karen Tremba (Assistant Director), Alexander Galuten, Alison Martin, Marc Molino, Josephine Perez, Linda Rego, Barbara Roesmann, Julie Trinder, Mijo Vodopic, Kristen Waters, and Nicolas Zitelli made key contributions to this report.
On June 21, 1995, we testified that the District's financial records were inadequate and that the District did not have the most basic financial data, including the status of its expenditures against budgeted amounts, the amount of bills owed, or the balance of cash available. As a result, District managers did not have the fundamental financial information necessary to help control spending and costs and to estimate budget and cash needs. Given the long-standing problems with the District's financial management, we recommended that the Authority study the accounting and financial management information needs of the District. Subsequently, the Authority and the District requested and the Congress approved funds to assess the need for implementing a new financial management system. The 1996 District of Columbia Appropriations Act (Public Law 104-134) authorized the District of Columbia government to spend $28 million of its revenues to implement a replacement for its existing financial management system. Of the $28 million, $2 million was provided for a needs analysis and assessment of the existing financial management environment. Public Law 104-134 made the remaining $26 million available to procure the necessary hardware; install new software; and perform system conversion, testing, and training after the needs analysis and the assessment were received by the Congress. The District is now in its fourth year of implementing its new financial management system. On September 4, 1997, the Authority awarded a 1-year contract with four option years potentially totaling $21 million to design and install a state-of-the-art financial management system for the District. The District began working with the contractor in September 1997 and piloted the new System of Accounting and Reporting (SOAR) at five agencies beginning February 1, 1998, with the goal of District-wide implementation of SOAR on October 1, 1998. SOAR consists of commercial, off-the-shelf applications used by state and local governments. SOAR was expected to strengthen control over appropriations and spending plans; enhance tracking of grants and projects; automate and streamline the financial management process; record obligations as incurred; make and track payments and disbursements; monitor performance measures by program and organization; prepare timely, accurate, and reliable financial reports; expedite the month-end closing process; and provide the ability to input and control data on-line. SOAR is the District's central general ledger system and includes the following five components: The Relational Standard Accounting and Reporting System (R*STARS), the core accounting module, provides general ledger capabilities as well as budgetary control, cash management, expenditures/payables, revenue/receivables, and budget execution functions. The Advanced Purchasing and Inventory Control System (ADPICS) is integrated with R*STARS and provides a comprehensive system of materials management encompassing requisition/purchase transactions, accounts payable, and inventory processing. The Performance Budgeting module supports the development of operating and capital budgets and provides information on program costs and performance measures. The Fixed Assets System supports accounting, management, and control over capital and controllable assets. The Executive Information System (EIS) is a high-level analysis tool for program and financial management that enables data modeling, creates analyses for "what if" scenarios, and offers the flexibility to generate ad hoc reports.
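To illustrate the architecture just described, the following minimal sketch, written in Python purely for illustration, models a central general ledger surrounded by components that either post to it directly or require a separate interface. The component names come from this report; the data structure, fields, and any integration flags beyond those the report describes are hypothetical and do not represent SOAR's actual design.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Component:
        name: str
        integrated: bool      # True if transactions post directly to the core ledger
        functions: List[str]  # major functions the component supports

    # Hypothetical registry of the five SOAR components described above. The
    # ADPICS flag reflects the report ("integrated with R*STARS"); the
    # Performance Budgeting flag reflects the report's later finding that the
    # module is not a fully integrated product of the core accounting system.
    # The remaining flags are assumptions for illustration only.
    components = [
        Component("R*STARS", True, ["general ledger", "budgetary control", "cash management"]),
        Component("ADPICS", True, ["requisitions/purchases", "accounts payable", "inventory"]),
        Component("Performance Budgeting", False, ["budget development", "performance measures"]),
        Component("Fixed Assets", False, ["asset accounting and control"]),
        Component("EIS", False, ["data modeling", "ad hoc reports"]),
    ]

    for c in components:
        status = ("posts directly to the core ledger" if c.integrated
                  else "requires a separate interface or conversion step")
        print(f"{c.name}: {status}")

The practical significance of the integration flag in this sketch is that every nonintegrated component implies a reconciliation or conversion burden of the kind described throughout this report.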
In addition to SOAR, there are other critical feeder systems that make up the District's financial management system. Figure 1 shows the various SOAR components within the District's overall financial management system. To fulfill our objectives, we analyzed the District's financial management systems project and program management plans, work breakdown implementation schedules, project cost tracking documents, contract records, meeting minutes, and briefing reports; reviewed the District's budget formulation process, budget process manual, and the Government Finance Officers Association's report on the District's budget office organizational structure; reviewed reports issued by the Inspector General of the District; and reviewed audit reports and related documents describing financial management system implementation activities and weaknesses identified during financial statement audits of the District covering fiscal years 1995 through 2000. We met with the following District personnel: Chief Financial Officer; Deputy CFO for the Office of Budget and Planning; Deputy CFO for the Office of Financial Operations and Systems; SOAR Program Management Office officials; Director, Mission Support, OCFO; Director, Information Systems Administration, Office of Tax and Revenue; Office of the Inspector General officials; representatives from the Authority; agency CFOs and financial staff; and Office of Procurement staff. We also met with contractors responsible for implementing the new system. Our work was performed from September 2000 through February 2001 in accordance with generally accepted government auditing standards. We requested and obtained comments on a draft of this report from the Mayor of the District of Columbia and the District's Chief Financial Officer. These comments are discussed in a later section of this report and reprinted in appendix II. Financial management requires that financial and program managers be accountable for the financial results of actions taken, for control over the government's financial resources, and for the protection of assets. The District's Office of the Chief Financial Officer (OCFO) has responsibility for effective financial management in the District. To meet these requirements, financial management systems must be in place to process and record financial events effectively and efficiently and to provide complete, timely, reliable, and consistent information for decisionmakers and the public. Over the past several years, the District has undertaken a number of initiatives designed to improve its financial management environment. However, a number of key initiatives have still not been completed or have been placed on hold, and some are still being revised. For example, the District has yet to complete implementation of SOAR, the personnel and payroll system, the procurement system, and the tax system. The District is also in the process of reengineering its budget process before deciding whether to implement a new budget process and fully integrate the budget data within SOAR. In addition, the District has not yet implemented the fixed asset module of SOAR. The original implementation schedule indicated that SOAR would be fully implemented by September 30, 1998, and the external feeder systems and related interfaces were scheduled for completion by April 1999. The SOAR implementation, however, has been marked by delays.
In our April 1998 report, we noted severe weaknesses in critical implementation processes, including a lack of requirements development and project management. For example, the District had no organizational policy for establishing and managing software-related requirements and no clear assignment of responsibility for requirements development and management. As a result, the District had no assurance that the new financial management system, or any other software acquisition project it undertook, would be conducted in a disciplined manner. Further, studies have shown that problems associated with requirements management are key factors in software projects that do not meet their cost, schedule, and performance goals, and that such systems can cost many times more than expected when requirements are managed improperly. The Performance Budgeting module included in SOAR was expected to facilitate the management of the entire budget formulation process, from budget submission to final review and approval. One of the primary objectives of the Performance Budgeting module is the automation of the annual budget development process to replace the present cumbersome, manual process. The module was also expected to provide financial reporting at various program levels as needed, and it contains a performance measures feature to capture information for performance measures. District officials anticipated that the performance measures feature in the module would be able to provide comparative and cost information for all levels of programs throughout the District. Comparative and accurate cost information would enable stakeholders to make more informed decisions, eventually providing better service to the citizens of the District. District officials anticipated that performance measures, once they were developed for programs in the District, could be maintained and tracked easily and accurately in the module. At the time of our review, the District had decided to suspend the implementation of the Performance Budgeting module. District officials told us that, in addition to the need for reengineering the budget process, the current design of the Performance Budgeting system does not fully support the District's information needs. Specifically, the Performance Budgeting module is not a fully integrated product of the core accounting system and will require multiple and frequent updates to both systems to remain integrated as updates or upgrades occur to one or both systems; the current, nonintegrated design of the Performance Budgeting module limits both agency budget staff and the Office of Budget and Planning (OBP) from fully using information from the core accounting system; and inadequate reporting tools within the Performance Budgeting module prevent OBP from generating essential budget reports on a timely basis for stakeholders. In addition to the above financial system issues, District officials stated that use of the Performance Budgeting module at this time presents a number of other technical support system challenges and that the current condition of the District's financial support systems needs to be stabilized before implementation. For example, the lack of a stable, unified Wide Area Network (WAN) precludes District agencies from accessing the system and developing their fiscal year 2002 budgets. However, according to the SOAR Project Plan, the necessary infrastructure analysis was completed on January 6, 1998.
Current client/server resources prevent agency budget staff from concurrently accessing the Performance Budgeting module during peak budget season, which can result in users being locked out of or dropped from the system. In addition to the above factors, further review by OBP has resulted in a decision by the District to reengineer the budget formulation process, develop a requirements definition for the new process, and then select a new software solution to deliver the results. The Fixed Assets module was intended to track the acquisition, transfer, disposition, and maintenance of the District's capitalized assets (such as personal property, equipment, and buildings) and to support accounting, management, and control over these assets, which totaled a reported $3.1 billion in fiscal year 2000. The District's implementation schedule for this module has continually been revised. For example, originally scheduled to be implemented in February 1999, the Fixed Assets module is now planned to be implemented by the end of fiscal year 2001—over 2 years after the initially planned implementation. According to the SOAR Program Management Office (PMO), the delay was the result of multiple competing priorities, such as the Year 2000 conversion and the production of the Comprehensive Annual Financial Report (CAFR), combined with limited personnel resources. Currently, fixed assets are recorded by each agency using a variety of methods, including manually updated ledger books and off-line automated tools. Such methods, however, are error-prone and could lead to incorrect recording and reporting of assets. Once the Fixed Assets module is implemented, according to District officials, all District agencies will use this same tool to account for assets. However, the SOAR PMO does not have a documented, comprehensive plan for managing the implementation of the Fixed Assets module. Although the financial management system was scheduled to become fully integrated in April 1999, significant external feeder systems, including the personnel and payroll functions, the procurement system, and the tax system, are not fully integrated with SOAR as originally planned because the feeder systems have not been completed. Moreover, current monitoring of interface development is not documented. For example, when we asked the SOAR PMO for an update on the status of interface development, we were given the most recent interface status report, which was dated September 14, 1998. The SOAR PMO could not explain why the status report was not being updated more frequently even though work was ongoing. In our July 1997 report, we raised concerns about the District's failure to focus more broadly on its financial requirements, such as those stemming from the need to integrate SOAR with feeder systems. We noted that the District had not defined how the interfaces would work or what data would be provided for each feeder system. In addition, as we reported in October 1997, the District's time frames for implementing its systems seemed ambitious in light of the complex nature of the District's financial management structure and the lack of identified and confirmed requirements for several key systems, such as the feeder systems. The District acquired and developed a new personnel and payroll system in order to improve the quality of its business processes and to replace an aging legacy system.
As we noted in a 1995 testimony, personnel information on the District's 40,000 employees has long been error-prone and inconsistent. Beginning in 1991, the District created an action plan to acquire an automated human resources management information system, called the Comprehensive Automated Personnel and Payroll System (CAPPS). CAPPS was estimated to cost about $13 million to develop and was expected to be deployed by December 1999. The District had anticipated that CAPPS would provide more robust human resources capabilities than the prior legacy system, such as on-line funding data at the agency level, budgetary and spending controls at the position level, and accurate accounting of expenses, such as overtime. However, as we reported in December 1999, the District did not effectively plan for CAPPS. We noted that the District did not develop a project management plan and a risk management plan; obtain agreement from the acquisition team, system users, and the contractor on detailed requirements for CAPPS; or establish a configuration control process to control the changes that were made to data tables connected to the software package that the District acquired for CAPPS. By not implementing these critical management processes, the District lacked the means to establish realistic time frames for CAPPS, track development against those time frames, and ensure that changes being made to CAPPS were consistent and in line with business requirements. In fact, since beginning the CAPPS initiative in 1991, the District has had to continually revise its CAPPS implementation deadline. As a result of these delays, some District agencies are using the CAPPS system, while others continue to use the old personnel and payroll system, the Unified Personnel and Payroll System (UPPS), until a plan for a new payroll processing system can be developed. DC Public Schools, the Fire Department, and a few smaller agencies are using CAPPS to process payroll, while the remaining District agencies process their payroll through UPPS. Relying on dual systems in this manner leads to a lack of standardization and creates unnecessary effort and inefficiencies. In addition, CAPPS is not electronically integrated with SOAR, and both CAPPS and UPPS data must undergo a conversion process before interfacing with SOAR, creating further inefficiencies in processing and reporting payroll-related costs. Furthermore, the District is in the process of determining whether it will continue with CAPPS or whether an entirely new system is needed. In an August 11, 2000, letter to the Mayor, the District CFO stated that "the new CAPPS system, for a number of reasons, is compromised beyond repair. While it continues to pay people accurately, it has been customized to the point that its basic architecture has been destroyed. Underlying calculations necessary to make retirement computations and W-2s are likely fatally compromised." This combination of weaknesses and uncertainties surrounding CAPPS further calls into question the District's ability to resolve implementation issues, to pay its employees accurately and on time, and to account for their retirement and benefits. According to a District official, the Integrated Tax System (ITS) is intended to be a complete reengineering of the Office of Tax and Revenue's (OTR) business process, at an estimated cost of about $63 million.
It came about because of serious concerns related to business processes, collection of delinquent accounts, tax compliance/discovery, data purification, work flow management, and the integration of revenue management with other key governmental functions (for example, unemployment compensation, business registration and regulation, and child support enforcement). The old system consisted of a stand-alone, nonintegrated system for each major tax category—business, real property, and individual. For example, if a taxpayer was due a refund from an individual tax return but owed property taxes, there was no linkage under the old structure that would allow these two tax systems to interact to offset one another. District officials told us that the Integrated Tax System will allow the District to integrate all tax types under one system. On November 13, 2000, the District's Business Tax module became operational and interfaced with SOAR. The real property and individual tax modules are expected to be completed and integrated with SOAR in January 2002. Until the real property and individual tax modules are implemented, the District will continue the cumbersome practice of manually entering data for property and individual taxes into SOAR. In July 2000, the District's Inspector General reported that the District was using two distinctly different procurement systems. The Office of Contracts and Procurement (OCP) purchased PRISM/OCP Express as its procurement package, while the CFO uses ADPICS for procurement transactions. As a result, various offices rely on different systems to process procurements. OCP has spent at least $14 million since it began the process of implementing a new procurement system 13 years ago. The July 2000 Inspector General report stated that PRISM/OCP Express and SOAR had interface problems and that procurement data were being maintained in both systems. Furthermore, reports generated with procurement data must be developed in coordination with the responsible agencies and only after reconciliation of data from the two systems. According to a District official, this situation makes it difficult for the District to track procurements and payments, and the use of both systems has produced inefficiencies, duplication, and waste within the District. An OCP official told us that OCP recently hired a contractor to review OCP Express capabilities and determine what modifications, if any, could be made to enhance its functionality. According to District officials, until the District completes its assessment of the procurement system, it plans to continue using ADPICS to process procurement transactions through SOAR, as ADPICS is the only available mechanism for entering procurement transactions into SOAR. In our January 31, 2001, report, we also noted that serious and pervasive computer security weaknesses place the DC Highway Trust Fund and other District financial, payroll, personnel, and tax information at risk of inadvertent or deliberate misuse, fraudulent use, and unauthorized alteration or destruction without detection. A primary reason for the District's information system control problems was that it did not have a comprehensive computer security planning and management program. An effective program includes guidance and procedures for assessing risks, establishing appropriate policies and related controls, raising awareness of prevailing risks and mitigating controls, and evaluating the effectiveness of established controls.
Such a program, if implemented effectively, would provide the District with a solid foundation for resolving existing computer security problems and managing its information security risks. District management stated that it recognized the seriousness of the weaknesses we identified and expressed its commitment to improving information system controls. As discussed earlier, the District has placed the implementation of the Performance Budgeting module on hold. A District official told us that the District needs to conduct business process reengineering of the budget process before making a decision on the appropriate solution for its budget and related reporting needs. In the meantime, the District continues to rely on a cumbersome, manual process each year to develop the budget. District budget officials told us that a number of improvements have been made to the budget process for fiscal year 2002 that they believe will address some of the problems they faced in developing the fiscal year 2001 budget. Table 1 compares the fiscal year 2001 budget process and results to the planned fiscal year 2002 budget process, modifications, and expected results, as provided by the District CFO. However, the District will enter its budget formulation process for fiscal year 2002 without an implemented financial system for gathering and formulating its budget data. Furthermore, the District will not have adequate program-level cost and budget results data for fiscal year 2001 for use in formulating its fiscal year 2002 budget. Because the District was in the early stages of budget formulation, we were not able to assess whether these improvements will achieve their intended results. Further, the District was unable to provide us with any documentation demonstrating that it had undertaken a structured and disciplined approach to implementing these actions. To address its budget processes and systems, the District contracted with the Government Finance Officers Association (GFOA), a leader in state and local budgeting and finance, to help evaluate how well OBP was structured to carry out its budget and financial functions. The report, which GFOA issued on November 2, 2000, focused on an organizational review of the budget office, both internally and in relation to other District agencies. GFOA found that OBP faced organizational and personnel management issues. For example, the study cited the following: organizationally, there is little communication or coordination between OBP divisions during major budget cycle periods; OBP lacks clear organizational and management policies regarding budget development and execution; staff need training on the current financial management system and the acquisition of analytical tools to perform financial analyses; and OBP staff have not had fiscal analyses, such as expenditure and revenue analyses, cost-benefit analyses, and program outcome analyses, sufficiently incorporated into their typical duties. OBP officials stated that, as a result of the GFOA recommendations, they have made a number of staffing and organizational changes, which were consistent with many of the initiatives for change they had already started.
OBP officials stated that they have taken the following actions: created the position of Chief Operating Officer to increase overall program efficiency and accountability in three programs (data management, organization management, and communications); realigned the Associate Deputy CFO (ADCFO) position to sharpen OBP's quality of forecasting and long-term fiscal planning ability, with the ADCFO assuming responsibility for the functions of economic analysis, budget execution and reporting, and legislative affairs, which are three new branches; assigned the Economic Analysis Branch the lead on improving performance-based budgeting for the District; designated the Legislative Affairs Branch as the leading provider of legislative support to OCFO and other stakeholders, providing legislative, legal, and policy analysis to ensure that the goals of District stakeholders are achieved; gave the Organizational Management Branch responsibility for supervising and coordinating general office operations and for building morale and reducing attrition through reform, orientation, recruitment, outreach efforts, training, and career development; and reorganized the Operating and Capital Budget divisions, which under the old structure functioned independently of each other with little or no collaboration or interaction, into a two-pronged division that strategically links operating and capital budget operations. According to District OBP officials, the above changes have improved budget forecasting activities as well as provided greater control over budget execution and fiscal oversight. We agree that OBP needed to address personnel and related organizational issues in order to better align its operations and to facilitate and enhance its ability to carry out its budgetary and financial responsibilities. However, because these efforts were only in the early stages of implementation, we could not assess their impact on OBP's operations or on its ability to successfully implement a budgeting system in the future. Until the District develops a disciplined and structured approach to its business process reengineering efforts for its budget process, it will continue to develop its budget using a process that is cumbersome and inefficient. Until a budget formulation and execution system is implemented and fully integrated with its financial systems, the District's budget is not likely to reflect the cost of services at the program level because the District currently does not have a way of measuring those costs. As originally envisioned, SOAR was expected to provide general ledger, grants management, fixed assets management, budget execution, cash management, and budget formulation functionality. SOAR was expected to strengthen control over appropriations and spending; provide enhanced tracking of grants and projects; automate and streamline the financial management process; record obligations as incurred; track payments and disbursements; monitor performance measures by program and organization; prepare timely, accurate, and reliable financial reports; expedite the month-end closing process; and provide the ability to input and control data on-line. Overall, the District and its residents were expected to benefit from improved financial management and reporting of public services and resources. Our discussions with SOAR pilot agencies indicate that these expectations have not been realized.
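To make concrete one of the expectations listed above, strengthened control over appropriations and spending, the following minimal sketch, again in Python and purely for illustration rather than a depiction of SOAR's actual logic, shows how a core accounting system can reject an obligation that would exceed the remaining appropriation. The account name and dollar amounts are hypothetical.

    class FundsControlError(Exception):
        """Raised when an obligation would exceed the remaining appropriation."""

    class AppropriationLedger:
        # Hypothetical funds-control logic for illustration; a production
        # system such as R*STARS implements far more elaborate budgetary
        # accounting than this.
        def __init__(self, appropriations):
            self.appropriations = dict(appropriations)            # account -> amount appropriated
            self.obligations = {a: 0.0 for a in appropriations}   # account -> amount obligated

        def record_obligation(self, account, amount):
            available = self.appropriations[account] - self.obligations[account]
            if amount > available:
                raise FundsControlError(
                    f"{account}: obligation of {amount:,.2f} exceeds "
                    f"available balance of {available:,.2f}")
            self.obligations[account] += amount  # record the obligation as incurred

    ledger = AppropriationLedger({"trash-collection": 1_000_000.00})
    ledger.record_obligation("trash-collection", 400_000.00)    # accepted
    # ledger.record_obligation("trash-collection", 700_000.00)  # would raise FundsControlError

Recording each obligation as it is incurred, and checking it against the remaining balance at that moment, is what allows a system to control spending before the fact rather than report overspending after it.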
Five District agencies were used as pilot agencies during the system's implementation: the District of Columbia Public Schools (DCPS), the Department of Public Works (DPW), the Department of Human Services (DHS), the Metropolitan Police Department (MPD), and Financial and Technical Services. The pilot program provided an opportunity for agencies with unique requirements to customize the implementation of SOAR in their agencies. The pilot program also provided a test and "exercise" of the implementation process, as well as a test of the new system's functionality. The pilot was designed to identify problems and develop solutions prior to full implementation. We contacted each pilot agency to obtain officials' views on SOAR's current operational status. In conjunction with our discussions with the pilot agencies, we reviewed the expectations communicated in a December 1997 presentation in which the OCFO provided a detailed list of the former financial management system's deficiencies along with the anticipated capabilities of the new financial management system that were expected to remedy these deficiencies. The following are examples of current operational issues identified by officials at the pilot agencies where anticipated resolutions did not materialize. All five pilot agencies reported that they need more recent data to more fully use the SOAR EIS, a high-level management tool capable of generating ad hoc reports. Currently, data in EIS are updated weekly, even though the expectation for the new system was that users would have "real time, on-line information." Four of the five pilot agencies indicated that the lack of full integration between the core accounting system and feeder systems is a problem. For example, two agencies said that they have to spend extra time reconciling payroll transactions between CAPPS and SOAR to ensure that the payroll data in both systems are accurate and complete. Four of the five pilot agencies indicated that the help desk facility needs improvement because it does not provide adequate assistance. Four of the five pilot agencies indicated that the SOAR training was not tailored to their specific needs. Three of the five pilot agencies indicated that they need enhanced project costing capabilities. According to one agency official, SOAR does not provide cost information on specific programs or activities. This results in agencies having to maintain information outside the system in an attempt to track program costs. Our work, as well as other studies, has shown that problems associated with requirements management are key factors in software projects that do not meet their cost, schedule, and performance goals. Because the District did not clearly identify and define user requirements up front, the financial management system is currently unable to fulfill the financial management and reporting needs of its users. Training is a critical component of successful implementation of a new financial management system and can be accomplished through a formal program, on-the-job training, and the use of experts assigned to each agency. We previously reported that from January 1998 through April 1999, 42 percent of SOAR users did not attend scheduled training. In December 2000, the SOAR PMO identified a core training curriculum consisting of nine courses. According to the SOAR PMO, less than 50 percent of the SOAR user community had completed the new core training curriculum.
The SOAR Program Director told us that conflicting priorities contributed to the low attendance rate, including the large amount of effort needed to complete the District's annual financial statements and prepare for its annual audit. According to the same official, another factor was the lack of complete buy-in by the users of the new system. In addition, a District official stated that many reported transaction errors resulted from a lack of understanding of transactions and their effect on general ledger accounts, coupled with the learning curve experienced with the implementation of the new system. Furthermore, several pilot agency officials stated that the overall training was generic and not specific to the District's needs. In our June 1999 report, we stated that the District planned to pilot a job certification program for employees in financial positions. Under this program, employees would be certified for financial positions based on training and testing. In November 2000, the SOAR PMO told us that it was still reviewing the details of implementing a certification program. The SOAR PMO also said that a comprehensive training plan for financial management personnel for fiscal year 2001 does not exist. The SOAR Steering Committee had determined that there was a need for more District user access to individuals with enhanced SOAR expertise. In response, the District is in the process of implementing a new program called the "Super Users" program. The goal of the program is to develop a team of "super users," individuals with advanced SOAR skills, to serve as mentors and providers of on-the-job training to users. District officials said they had recently selected eight individuals for this program, and a recruitment effort is underway to identify eight more. The current system does not have the capability to capture the costs of specific District programs or activities. Project cost accounting is important in determining whether specific programs or activities are achieving their goals within budget. To compensate for the lack of a project cost accounting capability, agencies are capturing and maintaining information outside SOAR manually or by using other software applications. According to District officials, there are no plans to implement a separate module, such as activity-based costing. Comparative and accurate cost information would enable stakeholders to make more informed decisions and, in turn, improve service delivery to the citizens of the District. Currently, the costs of various District activities, such as trash collection, motor vehicle inspection, clinical care, and street repairs, must be calculated manually, which is inefficient, time-consuming, and prone to error. District officials anticipated that the performance measures feature in the Performance Budgeting module would be able to provide comparative and cost information for all levels of programs throughout the District. However, it is unlikely that the District will have a solution for its program cost needs until after a budgeting system is implemented and integrated with SOAR. In September 1997, the District planned to spend $26 million (plus related costs for personnel and space) to implement SOAR. According to the Deputy CFO, as of March 8, 2001—almost 3-1/2 years later—the actual cost had climbed to about $41 million, an increase of about $15 million.
According to the PMO, one of the reasons for the increased costs was the need to provide knowledge transfer (agency-specific implementation assistance, enhanced training, and help desk operations) at the major agencies, as well as transition assistance; neither was originally included in the contract. According to another District official, the increase in implementation costs also resulted from the District initially not completely understanding user requirements. Consequently, the District found it necessary to implement additional requirements after the fact. For example, after the SOAR contract was awarded, the University of the District of Columbia (UDC) determined that SOAR could satisfy its requirements for higher education accounting and opted to install SOAR, rather than a separate new system, at a cost of $1 million. Also, according to a District official, the District underestimated the level of support and time required to implement SOAR. Accordingly, the District contracted for additional implementation support. Further, additional District users required access to EIS, which required the purchase of additional EIS licenses costing the District approximately $900,000. Acquiring an effective information system must begin with a clear definition of the organization's mission and strategic goals. To assure a solid foundation for the District's system, we offered a structured approach in our July 1997 report based on three building blocks especially important in the early stages of a project: concept of operations, requirements definition, and alternatives analysis. A concept of operations—the panorama of a system's purpose and the environment in which it will function—is the basis on which specific requirements—functions the system must be able to perform—are developed. With a complete set of well-defined requirements based on a clear concept of operations, District leaders could make an accurate analysis of how well available alternatives would meet the needs of the government and its citizens. In April 1998, after the District had contracted for and begun development of its new system, a more detailed study led us again to the conclusion that the District's process, while strong in some areas, was undisciplined and immature. Our conclusion was based on a well-recognized model of software process assessment, the Software Engineering Institute's Software Acquisition Capability Maturity Model (SA-CMM). In our recommendations, we stressed the need for written policy and documentation, the development of a management plan, the assignment of responsibility for areas of planning and development, and requirements development in the key process areas of software acquisition planning, requirements development and management, project management, contract tracking and oversight, evaluation, and acquisition risk management. Although the processes spelled out in the SA-CMM model and our recommendations are detailed, rigorous, and time-consuming, they are, in the long run, cost-effective and vital elements of success. Unfortunately, in its efforts to meet an overly ambitious time schedule, the District has spent considerably more money than planned to acquire a system that—6 years after we began our review and started making recommendations—now serves as yet another cautionary example of the risks entities run when they choose to short-cut a disciplined approach to the planning, acquisition, and management of a financial management system.
Another key component of successful financial management for the District is conducting a comprehensive assessment of its human capital needs for its financial management functions. In our Executive Guide: Creating Value Through World-class Financial Management, we outlined three critical elements for developing first-rate financial management staff teams: (1) determining required skills and competencies, (2) measuring the gap between what the organization needs and what it has, and (3) developing strategies and detailed plans to address current or expected future deficiencies. We reported that having staff with appropriate skills is key to achieving financial management improvements and that managing an organization's employees is essential to achieving results. Entities that focus on valuing employees and aligning their policies to support organizational performance goals start with a human capital assessment. The results of such an assessment could help to determine the resources needed to successfully implement financial management improvements. And, as we reported in September 2000, performing a self-assessment of human capital needs helps organization leaders understand the strengths and limitations of their human capital information systems. These data can help an organization develop a profile of its human capital. Further, because human capital information can spotlight areas of concern before they develop into crises, gathering these data is an indispensable part of effective risk management. Without a formal assessment of its requirements and needs, and a strategy for addressing them, the District's efforts can become piecemeal, incomplete, and ineffective. The challenges the District already faces in implementing its financial management system could be exacerbated by its lack of a human capital assessment for its financial management functions. By not identifying staff with the requisite skills to implement such a system and by not identifying and filling gaps in needed skills, the District has reduced its chances for successful implementation. In reports dating back to 1995, we have described the District's financial situation and its efforts to improve its financial management, including the implementation of SOAR. For example, in 1995, we recommended that the Mayor clean up existing data in the financial systems and place special emphasis on ensuring that basic accounting policies and procedures are followed, and that the Authority study the accounting and financial management information needs of the District. In a July 9, 1997, status report, we noted that the District had not completed three key elements of its system acquisition: concept of operations, requirements definition, and alternatives analysis. We also noted that the time frames for completing several of these important tasks were unknown and that the District had adopted an overly ambitious schedule. On April 30, 1998, we reported that the District's software acquisition processes, while having some strengths, were not mature when compared to standards established by the Software Engineering Institute. Weaknesses noted in the report included (1) requirements definition problems; (2) project management and oversight weaknesses; (3) lack of an effective evaluation process; (4) lack of a formal risk management process; and (5) failure to meet some milestones.
In that report, we made six recommendations for strengthening the processes that relate to the SOAR project and improving future software acquisition efforts. As discussed earlier, in our December 1999 report on the CAPPS system, we noted that by not implementing critical management processes, the District lacked the means to establish realistic time frames for CAPPS, track development against those time frames, and ensure that changes being made to CAPPS were consistent and in line with business requirements. We made six recommendations to the District that focused on the need to implement effective management controls and processes for maintaining, operating, and protecting CAPPS. Together, those reports established a framework for actions the District needed to take in order to (1) improve its financial management and (2) avoid costly failure when acquiring a new financial management system that would successfully meet its needs. In our reports, we indicated that major improvements in the District's financial management and other management information can be realized only if they are part of an overall assessment of processes, people, and equipment. As of April 3, 2001, the District had implemented 3 of the 16 recommendations we had made since 1995, and critical recommendations had yet to be fully addressed. The District had actions in process on the other 13 recommendations. Table 2 in appendix I provides the implementation status of recommendations made in our reports and testimony on SOAR implementation covering fiscal years 1996 through 2000. The District continues to face significant challenges in its efforts to put in place a financial management framework that ensures timely and reliable financial data on the cost of the District's operations. In its efforts to meet an overly ambitious schedule, the District has spent considerably more money than planned to acquire a system that—6 years after we began our review and started making recommendations—now serves as yet another cautionary example of the risks involved in not following a disciplined approach to the planning, acquisition, and implementation of a financial management system. Almost 4 years after the District's acquisition of its core financial management system, SOAR and related systems are in various stages of implementation, and some elements have been put on hold. The current mix of components involves duplication of effort and requires cumbersome manual processing instead of automated interfaces. Staff members who use the system are inadequately trained. In its current state, the system is unable to produce relevant, useful, timely, and reliable information adequate to the needs of government officials for assessing the costs of programs, measuring program performance, and making well-informed decisions in forming the city's budget and in providing city services. With a system that is still incomplete more than 2 years after the planned date for citywide implementation, the District has already spent over 50 percent more than originally planned for SOAR implementation. The project continues to experience implementation delays, and the final cost of the complete financial management system cannot yet be determined. Disciplined acquisition and implementation processes are designed to prevent the types of problems the District has experienced in its financial management systems implementation.
The key to a disciplined system development effort is the use of disciplined processes in multiple areas, including requirements management, project planning, project tracking and oversight, quality assurance, configuration management, and risk management. These key areas have been the focus of our recommendations in reports dating back to 1995. The District has not yet completed action on most of these recommendations and has failed to institute the disciplined approach needed to ensure the successful implementation and management of a financial management system. The District's difficulties reflect the experience of other entities that have attempted to build a financial management system without first laying a solid foundation. Essential to that foundation is the definition of requirements. A system cannot be counted on to fill needs that have not been clearly defined. When those needs are identified later, retrofitting software can cost significantly more than the same work done during original development. The District continues to develop its system without clearly defined user requirements. Although the District recently received its fourth consecutive unqualified, or "clean," audit opinion on its financial statements, the financial information needed by decisionmakers to measure and manage performance requires greater precision and more timely access than that needed to satisfy a financial audit. Furthermore, to continue achieving clean opinions without the support of an efficient financial management system, officials and staff will be forced each year to devote extraordinary effort at the expense of other city government operations. As the city moves toward greater financial independence, the weaknesses of its financial management system may become increasingly difficult to overcome. As we recently reported in our Executive Guide, to provide meaningful information to decisionmakers, entities must develop systems that support the partnership between finance and operations. Entities must ensure that the systems accurately measure program costs and that they provide decisionmakers and line managers with timely, accurate financial information on the quality and efficiency of business processes and performance. Entities must also identify their human capital needs by conducting a human capital assessment in order to develop human capital strategies to address current and future risks faced by the entity. Such an assessment is critical to helping entities establish the systems and processes needed to successfully improve financial management and accountability. District officials need to take time now to assess the current status of the city's financial management system, to identify problems, and to establish a disciplined process to address these problems through the completion of its financial systems implementation. As we discussed in our Executive Guide, financial management improvement needs to be an entitywide priority—in this case, a Districtwide priority—overseen by leadership that is in control of the process and accountable for its success. Financial and program managers need to be able to rely on the system for the adequate, timely cost and performance information needed to manage costs, measure performance, make program funding decisions, and analyze outsourcing or privatization options. With such information, District decisionmakers will have the tools they need to meet the demands of managing the city's finances efficiently and serving its citizens effectively.
Before moving forward on the implementation of the District's financial management system, we recommend that the Mayor, in concert with the Chief Financial Officer, take the following actions:

- Assess the status of current financial management operations, including financial management policies and procedures and systems acquisition and development policy and procedures, and determine whether the current systems have the capability of meeting the District's financial management needs.
- Develop an overall concept of operations which clearly articulates overall quantitative and qualitative system characteristics to the user, developer, and other organizational elements and which facilitates understanding of the user organizations, missions, and organizational objectives from an integrated systems point of view.
- Develop an action plan based on that assessment and the overall concept of operations that addresses any identified weaknesses, including the necessary systems and procedural changes, and that specifies a disciplined process with milestones and clear accountability.
- Incorporate our prior, open recommendations in the action plan to address the key areas of requirements development and management, project planning, project tracking and oversight, quality assurance, and training as they apply to components of the system that are not yet fully implemented, including the fixed asset module, performance budgeting, personnel and payroll, procurement, integrated tax, and all interfaces.
- Determine the competencies required at leadership, management, and functional levels for financial and nonfinancial managers and develop appropriate training.
- Strictly enforce the implementation of the training curriculum and mandate attendance at user training sessions.
- Conduct an assessment of the District's human capital needs for financial management in order to strategically develop its financial management team to successfully address the current weaknesses in financial management systems, as well as to support the District's overall mission, goals, and objectives.
- Complete the reengineering of the budget process in conjunction with the implementation of a budget and project costing system.

In commenting on a draft of this report, the Chief Financial Officer of the District agreed with our recommendations and provided additional details on four areas: (1) our prior recommendations, (2) our recommendation about assessing human capital needs, (3) our recommendation regarding the budget process, and (4) implementation of SOAR. With respect to the CFO's comments on SOAR, the results of our work showed that the CFO's characterization of the progress made to date was overly optimistic. The CFO's comments are reprinted in appendix II. A representative from the Mayor's Office also reviewed this draft, along with the CFO's comments, and stated that the Mayor's Office had no further comments.

In regard to implementing our prior recommendations, the CFO stated that the District is taking action as described in a March 12, 2001, letter to us. This letter was in response to our recent request that the District provide us an update on the actions it had taken to address recommendations from our December 1999 report. As part of our ongoing work in the District, we will be evaluating these actions to determine whether they satisfactorily address our prior recommendations.
Concerning our recommendation that the District assess its human capital needs, the CFO noted that the District had taken the initial step in conducting a human capital assessment by engaging a professional services firm to help review the District's organizational structure and identify performance measures and best practices. However, as the CFO noted, the professional services firm review provides only the groundwork for the first phase of the CFO's financial management performance measures program. A complete human capital assessment will be an essential part of the CFO's improved financial management leadership and support.

Although the CFO agreed with our recommendation that the District complete the reengineering of the budget process in conjunction with the implementation of a budget and project costing system, the CFO took exception to certain statements pertaining to our finding. Specifically, the CFO disagreed with our statement that the fiscal year 2002 budget formulation process did not have the benefit of an "implemented financial system gathering and formulating its budget data" and that it would not have "adequate program-level cost and budget results data for fiscal year 2001." The CFO stated that the fiscal year 2002 budget process did in fact integrate data from several financial systems: CAPPS, UPPS, and SOAR. However, this reliance on compiling data generated from multiple, nonintegrated systems contributed to our finding that the District relies on a cumbersome process to generate financial information. Instead of relying on one unified system to reliably and routinely provide information as needed, the District must compile information from various systems, which creates inefficiencies and rework.

The CFO also stated that the District has an updated timetable and comprehensive plan for fully implementing the SOAR system. However, at the time we finalized our report, the District had not provided us with a plan. In addition, as discussed in our report, the District's performance budget module has been put on hold and the fixed asset module is incomplete. Both of these modules are key components of the SOAR system. Further, the implementation of systems that feed into SOAR—personnel and payroll, procurement, and tax—is incomplete, and the systems lack electronic interfaces with SOAR. Also, it is uncertain whether the currently envisioned successor to the personnel and payroll system—CAPPS—will be retained or whether an entirely new system is needed, and we were unable to obtain updated timetables and comprehensive plans for the implementation of these key feeder systems. As we recommended, the District needs to formulate a comprehensive plan that includes details of estimated dates, actions needed, and assignment of responsibilities for the completion of these modules and related systems.

The CFO also stated that the District's annual financial statements are an output of the SOAR system and thus reliable, auditable financial data is available from SOAR and the Executive Information System. However, as we noted in our report, the financial information needed by decisionmakers to measure and manage performance requires greater precision and more timely access than that needed to satisfy a financial audit. Financial and program managers need to be able to rely on the system for adequate, timely cost and performance information needed to manage costs, measure performance, make program funding decisions, and analyze outsourcing or privatization options.
The CFO also stated that the core SOAR implementation was delivered on schedule and performs as intended. He further noted that the increased costs of the SOAR implementation were directly associated with changes in scope, not cost overruns. As we noted in our report, however, the District's performance budget module has been put on hold and the fixed asset module is incomplete. Both of these modules are key components of the SOAR system. Further, as discussed above, the implementation of systems that feed into SOAR—personnel and payroll, procurement, and tax—is incomplete and the systems lack electronic interfaces with SOAR. As we also discussed, many of the cost increases were the result of additional requirements and the District not completely identifying user requirements up front. Finally, in regard to the CFO's comment that not all SOAR users are required to complete all core modules, according to the SOAR PMO, less than 50 percent of the SOAR user community had completed the new core training curriculum.

We are sending copies of this report to Senator Mike DeWine, Senator George Voinovich, Senator Mary Landrieu, Senator Richard J. Durbin, Representative Chaka Fattah, Representative Constance A. Morella, and Representative Eleanor Holmes Norton in their capacities as Chairmen or Ranking Minority Members of Senate and House Subcommittees. We are also sending copies of this report to Anthony A. Williams, Mayor of the District of Columbia; Natwar Gandhi, Chief Financial Officer of the District of Columbia; Charles Maddox, Inspector General of the District of Columbia; Deborah K. Nichols, District of Columbia Auditor; Suzanne Peck, Chief Technology Officer; and Alice Rivlin, Chairman of the District of Columbia Financial Responsibility and Management Assistance Authority. Copies will be made available to others upon request.

Please contact me at (202) 512-2600 or Jeanette Franzel at (202) 512-9406 or by e-mail at [email protected] if you have any questions about this report. Other major contributors to this report were Richard Cambosos, Linda Elmore, Maxine Hattery, Jeffrey Isaacs, John C. Martin, Meg Mills, and Norma Samuel.

Table 2: Implementation status of recommendations (status as reported by District officials)

1. Study the accounting and financial management information needs of the District of Columbia government.
Completed. The Authority has (1) performed site visits and benchmarking analysis of accounting and financial management information systems similar to that used in the District; (2) hired a consultant with extensive business process reengineering and systems implementation experience to analyze the District's financial management information systems implementation effort; and (3) created a System of Accounting and Reporting (SOAR) Steering Committee, headed by the Chair of the Authority, which includes the Mayor, CFO, Chief Technology Officer, Inspector General, and a DC Council member.
Action in process. According to the Authority, it is (1) assessing the functionality of the SOAR Performance Budgeting module; and (2) including performance budgeting as an agenda item for the SOAR Steering Committee.
See items 3 through 8 below.
...addresses life-cycle support of the software; and (7) develop a written policy for software acquisition planning.
4. Requirements Development and Management: (1) develop an organizational policy for establishing and managing software-related requirements; (2) clearly assign responsibility for requirements development and management; (3) document other resource requirements or resources expended for requirements development activities; (4) develop the capability to trace between contractual requirements and the contractor's work products; and (5) develop measurements to determine the status of the requirements development and management activities.
...alternatives and cost-benefit analysis of outsourcing the data center and upgrading versus replacing the current system.
Action in process. The Authority is (1) emphasizing the importance of implementing and enforcing clear policies and lines of accountability through the SOAR Steering Committee; (2) requiring that the District provide documentation and justification for resources requested or expended; and (3) emphasizing the importance of explicitly linking contractor payments to specific deliverables through the use of work breakdown structures.
Action in process. The Authority has emphasized developing quantitative measures of project performance. Examples include emphasis on the internal transactions files and timeliness and accuracy of payroll data.
Action in process. The Authority has emphasized to the SOAR PMO the importance of developing clear project deliverables and matching these to costs through the development of work breakdown structures.
5. Project Management: (1) develop a written policy for the execution of the software project; (2) authorize the project manager to independently alter either the performance, cost, or schedule; and (3) require that measurements be taken to determine the status of project management activities.
6. Contract Tracking and Oversight: (1) develop written policy for contract tracking and oversight activities for the financial management system project; (2) support the project team with contracting specialists; (3) require that the project team review the contractor's planning documents (for example, the project management plan, software risk management plan, software engineering plan, configuration management plan); (4) assign someone responsibility for maintaining the integrity of the contract; and (5) take measurements to determine the status of contract tracking and oversight activities.
7. Evaluation: (1) develop written policy for managing the evaluation of acquired software products and services; (2) develop a documented evaluation plan; (3) develop evaluation requirements in conjunction with system requirements; (4) assess the contractor's performance for compliance with evaluation requirements; (5) develop measurements to determine the status of evaluation activities; and (6) ensure that the Authority and project manager review the status of evaluation activities.
Action in process. Utilizing benchmarking performance management and best practices, the District is establishing a program to ensure adherence to best technology practices for all future critical systems software acquisitions. Policies and plans are being developed within the framework of a long-term technology blueprint.
8. Acquisition Risk Management: (1) develop written policy for software acquisition risk management; (2) designate a group to be responsible for coordinating software acquisition risk management activities; (3) define resource requirements for acquisition risk management; (4) ensure that individuals designated to perform software acquisition risk management have adequate experience and training; (5) integrate software acquisition risk management activities; (6) develop a software acquisition risk management plan in accordance with a defined software acquisition process; (7) develop a documented acquisition risk management plan and conduct risk management as an integral part of the solicitation, project performance management, and contract performance management processes; (8) track and control software acquisition risk handling actions until the risks are mitigated; and (9) ensure that risk management activities are reviewed by the Authority and the project manager.
Action in process. The Authority has (1) created the SOAR Steering Committee, responsible for coordinating a variety of activities including software acquisition risk management; (2) emphasized the importance of ensuring that information technology employees within the District are properly screened, certified, and qualified; and (3) hired a consultant to review and assess overall SOAR acquisition and implementation performance, including risk management.
9. Clean up existing data in the financial systems and place special emphasis on ensuring that basic accounting principles and procedures are followed.
Completed. The District has cleaned up its financial data and has continued to place an emphasis on accounting principles and policies. The OCFO obtained contractual assistance to work with the District agencies, identify required adjustments to the SOAR system balances, and ensure that these adjustments were properly recorded and reflected in SOAR. In addition, the District reestablished the Committee for Financial Excellence, charged to build an infrastructure that supports a strong financial base.
10. Establish a process of accountability for implementation of management initiatives.
Action completed. According to the then-Interim Chief Financial Officer, all management reform and the reporting of initiatives is done by the Chief Management Officer. This monthly reporting process captures information by agency, including funding, expense, cost saving, and project activity, including phase, duration, start date, and completion date.
See items 11 through 15 below.
11. Develop and maintain a risk management plan.
According to the Director, Enterprise Office, action is in process to address this recommendation.
12. Develop a requirements baseline and obtain agreement between the program office and the system users.
According to the Director, Enterprise Office, action is in process to address this recommendation.
13. Implement a configuration control process to control and document further modifications being made to CAPPS. The process should (1) clearly define and assess the effects of modifications on future product upgrades before the modification is approved, (2) clearly document the software products that are placed under configuration management, and (3) maintain the integrity and traceability of the configuration throughout the system life cycle.
According to the Director, Enterprise Office, action is in process to address this recommendation.
14. Develop and implement a life-cycle support plan, assign responsibility for life-cycle maintenance, and develop an estimate of maintenance and operation costs for CAPPS.
According to the Director, Enterprise Office, action is in process to address this recommendation.
15. Develop and implement a security plan based on a realistic risk assessment of CAPPS security vulnerabilities.
According to the Director, Enterprise Office, action is in process to address this recommendation.
16. Develop a centralized file for contract task orders and other contract documentation related to CAPPS.
According to the Director, Enterprise Office, action is in process to address this recommendation.

As discussed in the report body, CAPPS, while not a component of SOAR, is a critical interface to the system.

The following are GAO's comments on the District of Columbia's April 3, 2001, letter.
1. See the "Agency Comments and Our Evaluation" section of this report.
2. The report has been changed to show 9 instead of 10 training modules. As stated in our report, according to the SOAR PMO, less than 50 percent of the SOAR user community had completed the new core training curriculum.
In 2005, we reported on key practices to enhance and sustain interagency collaboration. In our report, we broadly defined collaboration as any joint activity that is intended to produce more public value than could be produced when the agencies act alone. We also described how agencies can enhance and sustain their collaborative efforts by engaging in the eight practices identified below:

- define and articulate a common outcome;
- establish mutually reinforcing or joint strategies;
- identify and address needs by leveraging resources;
- agree on roles and responsibilities;
- establish compatible policies, procedures, and other means to operate across agency boundaries;
- develop mechanisms to monitor, evaluate, and report on results;
- reinforce agency accountability for collaborative efforts through agency plans and reports; and
- reinforce individual accountability through performance management systems.

We noted that running throughout these practices are a number of factors, such as leadership, trust, and organizational culture, that are necessary elements for a collaborative working relationship. The highlights page from that report is included in appendix II.

As required by GPRAMA, OMB included a set of 14 interim crosscutting priority goals in the 2013 federal budget. These goals covered a variety of issues, such as veteran career readiness, energy efficiency, export promotion, and real property management. OMB also designated relevant agencies and programs that will be responsible for each interim goal. In order to address these goals, OMB is relying on a range of collaborative mechanisms. For example, in order to address the crosscutting goal of improving career readiness of veterans, OMB noted that it will rely, in part, on a Department of Defense-Veterans Affairs Task Force that includes representation from the Departments of Defense, Labor, Education, and Veterans Affairs, OMB, and the Office of Personnel Management (OPM).

Federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, such as the President appointing a coordinator, agencies co-locating within one facility, or establishing interagency task forces. Figure 1 catalogues selected mechanisms that the federal government uses to facilitate interagency collaboration, which were identified through interviews with experts and a sample of our prior reports. Experts have defined an interagency mechanism for collaboration as any arrangement or application that can facilitate collaboration between agencies. This list may not be comprehensive; it reflects the mechanisms that were included in our sample.

Based on our analysis of expert interviews and literature, as well as a sample of our prior reports, the mechanisms for interagency collaboration can serve the following general purposes. According to our analysis, and as demonstrated by the examples below, most collaborative mechanisms serve multiple purposes.

Policy Development: For example, Congress established the Office of Science and Technology Policy in 1976 to serve as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the federal government, among other things. The Office of Science and Technology Policy's mission includes leading interagency efforts to develop and coordinate sound science and technology policies across the federal government.
Program Implementation: As we reported in 2010, in the case of the Federal Emergency Management Agency's Joint Field Offices, co-locating personnel meets the purpose of program implementation during an emergency. Specifically, personnel from a range of agencies temporarily co-locate to provide services to disaster victims in one location.
(Pub. L. No. 108-447, Division J, title VIII, 118 Stat. 2809, 3377-3393 (Dec. 8, 2004).) ...handbook with common definitions and implementation policy guidance.

Oversight and Monitoring: For example, as we reported in 2008, the Maritime Security Working Group, working on behalf of the Maritime Security Policy Coordination Committee, was responsible for monitoring and assessing implementation of actions related to the National Strategy for Maritime Security.

Information Sharing and Communication: As we reported in 2008 and 2010, in the case of the National Intellectual Property Rights Coordination Center, co-locating personnel was intended to promote information sharing. Specifically, personnel from agencies responsible for combating counterfeiting, piracy, and related intellectual property rights crimes are co-located for the purpose of sharing information across organizational boundaries. (GAO, Intellectual Property: Federal Enforcement Has Generally Increased, but Assessing Performance Could Strengthen Law Enforcement Efforts, GAO-08-157 (Washington, D.C.: Mar. 11, 2008); and Intellectual Property: Agencies Progress in Implementing Legislation, but Enhancements Could Improve Future Plans, GAO-11-39 (Washington, D.C.: Oct. 13, 2010).)

...rotations to 30-minute online courses. The developmental activities we identified included training courses and programs, training exercises, interagency rotational programs, joint professional military education, and leadership development programs. The U.S. Army Command and General Staff College's Interagency Fellowship Program is an example of one of these professional development activities. The College places Army officers at other federal agencies to learn the culture of the host agency, hone collaborative skills such as communication and teamwork, and establish networks with civilian counterparts. At the same time, participants increase workforce capacity at their host civilian agencies, such as the Department of State and U.S. Agency for International Development. In turn, the civilian agencies can free up resources to send personnel to teach or attend courses at the College.

Additionally, in many cases, agencies use more than one mechanism to address an issue. For example, climate change is a complex, crosscutting issue, which involves many collaborative mechanisms. As we reported in 2011, these mechanisms include entities within the Executive Office of the President and interagency groups throughout government, including task forces and working groups. As shown in figure 2 below, the collaborative mechanisms in place to address climate change vary with regard to membership and purpose. The collaboration structures within the Executive Office of the President provide high-level policy direction for federal climate change programs and activities. Other mechanisms are in place—including specially created interagency offices and interagency groups—to provide coordination of science and technology policy across the federal government. For example, the U.S. Global Change Research Program, which began as a presidential initiative in 1989, was codified by the Global Change Research Act of 1990.
This program coordinates and integrates federal research on changes in the global environment and their implications for society, and is led by an interagency governing body, the Committee on Environment, Natural Resources, and Sustainability's Subcommittee on Global Change Research. The subcommittee, facilitated by a national coordination office, provides overall strategic direction and is responsible for developing and implementing an integrated interagency program.

Although the mechanisms we list in figure 2 differ in complexity and scope, they all benefit from certain key features, which raise issues to consider when implementing these mechanisms. According to expert views and our prior work, these key features fall into the categories of outcomes and accountability; bridging organizational cultures; leadership; clarity of roles and responsibilities; participants; resources; and written guidance and agreements. Many of these key features are related to our previously identified collaboration practices.

- Have short-term and long-term outcomes been clearly defined?
- Is there a way to track and monitor progress toward the short-term and long-term outcomes?
- Do participating agencies have collaboration-related competencies or performance standards against which individual performance can be evaluated?
- Do participating agencies have the means to recognize and reward accomplishments related to collaboration?

Organizational Outcomes and Accountability: As we reported in 2008, we interviewed experts in collaborative resource management. Based on these interviews, we found that most of the experts emphasized the importance of groups having clear goals. They explained that in a collaborative process, the participants may not have the same overall interests—in fact, they may have conflicting interests. However, by establishing a goal based on what the group shares in common, rather than on where there is disagreement among missions or philosophies, a collaborative group can shape its own vision and define its own purpose. When articulated and understood by the members of a group, this shared purpose provides people with a reason to participate in the process. For example, in 2012, we reported that the Department of Veterans Affairs (VA) and the Department of Housing and Urban Development (HUD), in collaboration with other federal agencies, shared a joint commitment to preventing and ending veteran homelessness by 2015. Representatives at two veteran and homeless advocacy organizations told us that sharing a common strategic goal between VA and HUD had been beneficial.

Federal agencies can use their strategic and annual performance plans as tools to drive collaboration with other agencies and other partners and establish complementary goals and strategies for achieving results. We have found that agencies that create a means to monitor, evaluate, and report the results of collaborative efforts can better identify areas for improvement. Agencies' priority goals—and agency involvement in federal government priority goals—provide additional opportunities to articulate the goals of collaborative efforts. Agencies are required under GPRAMA to monitor the federal government and agency priority goals on at least a quarterly basis, which provides additional opportunities for collaboration with contributing partners.

Individual Accountability: Agencies link personal accountability to collaboration by adding a collaboration-related competency or performance standard against which individual performance can be evaluated.
As we previously reported, the Department of State revised the competencies used to evaluate Foreign Service Officers to focus on collaboration. Specifically, the competencies now identify knowledge of other agencies and interagency cooperation among the skill sets to be assessed. Agency officials said that this change, in part, resulted in increased interest in foreign policy advisor assignments, demonstrated by the increase in the number of applicants to the program in recent years.

We reported in October 2000 that the Veterans Health Administration's Veterans Integrated Service Network (VISN), headquartered in Cincinnati, implemented performance agreements that focused on patient services for the entire VISN and were designed to encourage the VISN's medical centers to work collaboratively. In 2000, the VISN Director had a performance agreement with "care line" directors for patient services, such as primary care, medical and surgical care, and mental health care. In particular, the mental health care line director's performance agreement included improvement goals related to mental health for the entire VISN. To make progress towards these goals, this care line director had to work across each of the VISN's four medical centers with the corresponding care line managers at each medical center. As part of this collaboration, the care line director needed to establish consensus among VISN officials and external stakeholders on the strategic direction for the services provided by the mental health care line across the VISN; develop, implement, and revise integrated clinical programs to reflect that strategic direction for the VISN; and allocate resources among the centers for mental health programs to implement these programs.

- What are the missions and organizational cultures of the participating agencies?
- What are the commonalities between the participating agencies' missions and cultures, and what are some potential challenges?
- Have participating agencies developed ways for operating across agency boundaries?
- Have participating agencies agreed on common terminology and definitions?

Different agencies participating in any collaborative mechanism bring diverse organizational cultures to it. Accordingly, it is important to address these differences to enable a cohesive working relationship and to create the mutual trust required to enhance and sustain the collaborative effort. To address these differences, we have found that it is important to establish ways to operate across agency boundaries. This can involve measures such as developing common terminology and compatible policies and procedures, and fostering open lines of communication. We reported in 2012 that the Interagency Council on Homelessness had taken initial steps to develop a common vocabulary for discussing homelessness and related terms, as recommended in our June 2010 report. The Council held a meeting with participants from stakeholder organizations in January 2011 and issued a report to Congress in June 2011 that summarized feedback received during the meeting. The report notes that a common vocabulary would allow federal agencies to better measure the scope and dimensions of homelessness and may ease program implementation and coordination. Additionally, the Council held three meetings in 2011 to discuss implementation of a common vocabulary with key federal agencies.

Positive working relationships between participants from different agencies bridge organizational cultures.
These relationships build trust and foster communication, which facilitates collaboration. Experts have stated that relationship-building is vital in responding to an emergency. For example, we reported in 2011 that, through interagency planning efforts, federal officials built relationships that helped facilitate the federal response to the H1N1 influenza pandemic. Officials from the Department of Health and Human Services' (HHS) Office of the Assistant Secretary for Preparedness and Response and Centers for Disease Control and Prevention, the Department of Homeland Security (DHS), and the Department of Education said that these interagency meetings, working together on existing pandemic and non-pandemic programs, and exercises conducted prior to the H1N1 pandemic built relationships that were valuable for the H1N1 pandemic response. Specifically, HHS officials said that federal coordination during the H1N1 pandemic was much easier because of these formal networks and informal relationships built during pandemic planning activities and exercises. (GAO, Influenza Pandemic: Lessons from the H1N1 Pandemic Should Be Incorporated into Future Planning, GAO-11-632 (Washington, D.C.: June 27, 2011).)

Frequent communication among collaborating agencies is another way to facilitate working across agency boundaries to prevent misunderstanding. We reported in 2005 that open communication was an important factor in the successful transfer of the Plum Island Animal Disease Research Center (Plum Island) from USDA to DHS. Specifically, several scientists at Plum Island stated that the Plum Island Director's successful efforts in facilitating open communication among staff had fostered a collaborative environment. Moreover, several scientists noted that the director—who was based on the island at that time—valued the comments and ideas expressed by the scientists. One lead scientist concluded that the director's ability to establish positive relationships with staff had brought greater focus to the research and diagnostic programs. USDA officials also noted to us that the leadership of the director and the entire Senior Leadership Group, working as a team, contributed to effective cooperation at Plum Island.

- Has a lead agency or individual been identified?
- If leadership will be shared among two or more agencies, have roles and responsibilities been clearly identified and agreed upon?
- How will leadership be sustained over the long term?

Leadership Models: As previously discussed, leadership models range from identifying one agency or person to lead, to assigning shared leadership over a collaborative mechanism. Experts explained that designating one leader is often beneficial because it centralizes accountability and can speed decision making. For example, as we reported in 2007, under the National Pandemic Strategy and Implementation Plan, HHS and DHS share leadership responsibilities for pandemic response. In a pandemic, HHS is responsible for areas such as the public health response, while DHS is responsible for areas such as border security and critical infrastructure protection. In 2007, we reported that it was unclear from the strategy and plan how this shared leadership model would be implemented. In that regard, we recommended that HHS and DHS clarify these roles through tests and exercises. As we reported in 2011, these tests and exercises had not occurred at the start of the H1N1 pandemic, and we found that HHS and DHS were not able to effectively coordinate their release of information to state and local governments.
Once it became clear that the H1N1 pandemic required primarily a public health response, HHS had responsibility for most of the key activities. However, one expert said that centralized leadership is not always the best model, particularly when the collaboration needs to have buy-in from more than one agency. By sharing leadership, agencies can convey their support for the collaborative effort.

Top-level Commitment: The influence of leadership can be strengthened by a direct relationship with the President, Congress, and/or other high-level officials. According to a number of former practitioners we interviewed, their association with the President, members of Congress, or other high-level officials enabled them to influence individuals and organizations within the federal government to collaborate with one another. As we reported in 2008, Department of Energy officials said to us that the fact that the Hydrogen Fuel Initiative was a presidential initiative with congressional backing helped Hydrogen Fuel Initiative managers garner support from industry and within the federal government. Our subsequent work found that the Hydrogen Fuel Initiative worked well as an interagency effort for a number of years and research and development progressed rapidly. However, as one agency official noted, when congressional funding and presidential support waned, so did the program. In developing the interim federal government priority goals required under GPRAMA, a majority of the goal leaders designated by OMB are in the Executive Office of the President, which provides a direct connection to the President.

Continuity in Leadership: Given the importance of leadership to any collaborative effort, transitions and inconsistent leadership can weaken the effectiveness of any collaborative mechanism. As we illustrate below, lack of continuity is a frequent issue with presidential advisors or mechanisms that are tied to the Executive Office of the President, particularly when administrations change. As we reported in 2011, the future of the presidentially appointed Food Safety Working Group was uncertain. We explained that this uncertainty was based on the experience of the former President's Council on Food Safety, the predecessor to the Food Safety Working Group, which was disbanded less than 3 years after it was created. (GAO, Federal Food Safety Oversight: Food Safety Working Group Is a Positive First Step but Governmentwide Planning Is Needed to Address Fragmentation, GAO-11-289 (Washington, D.C.: Mar. 18, 2011).) According to the Congressional Research Service, presidential advisors—who are frequently responsible for collaboration around a singular issue—are rarely replaced after they vacate a position, which can leave a void in leadership around an issue. Our prior reports have identified other cases where leadership changed—or was briefly absent—and accordingly, the mechanism either disappeared or became less useful.

- Have participating agencies clarified the roles and responsibilities of the participants?
- Have participating agencies articulated and agreed to a process for making and enforcing decisions?

Clarity can come from agencies working together to define and agree on their respective roles and responsibilities, as well as steps for decision making. We reported in 2009 that, as part of the Partnership for Sustainable Communities, HUD and the Department of Transportation started to define and agree on their respective roles and responsibilities.
As part of this effort, the agencies began to clarify who will do what, identified how to organize their joint and individual efforts, and articulated steps for decision making. For example, the Department of Transportation and HUD planned to give responsibility to HUD to administer the Regional Integrated Planning Grants program. They also agreed that HUD would assume this responsibility in consultation with the Department of Transportation, the Environmental Protection Agency, and other federal agencies.

(For purposes of this report, references to the intelligence community elements include the Office of the Under Secretary of Defense for Intelligence, the Defense Security Service, and other intelligence community components, which are subject to the Joint Duty Program requirement. Although the Defense Security Service is technically not part of the intelligence community, it is also included in our scope because Defense Security Service civilian personnel fall under the Under Secretary of Defense for Intelligence and are subject to the Joint Duty Program requirement.)
...personnel within the intelligence community. It is helpful to use existing authorities whenever possible.

- Have all relevant participants been included?
- Do the participants have: full knowledge of the relevant resources in their agency? The ability to commit these resources? The ability to regularly attend activities of the collaborative mechanism? The appropriate knowledge, skills, and abilities to contribute?

It is important to ensure that the relevant participants have been included in the collaborative effort. This can include other federal agencies, state and local entities, and organizations from the private and nonprofit sectors. Experts said that it is helpful when the participants in a collaborative mechanism have full knowledge of the relevant resources in their agency; the ability to commit these resources and make decisions on behalf of the agency; the ability to regularly attend all activities of the collaborative mechanism; and the knowledge, skills, and abilities to contribute to the outcomes of the collaborative effort. For example, the Hydrogen and Fuel Cell Technical Advisory Committee recommended in October 2006 that the participants of the Interagency Working Group be elevated to require participation of an assistant secretary or higher. In response, the Department of Energy created the Interagency Task Force—a new entity composed of deputy assistant secretaries, program directors, and other senior officials.

- How will the collaborative mechanism be funded?
- If interagency funding is needed, is it permitted?
- If interagency funding is needed and permitted, is there a means to track funds in a standardized manner?
- How will the collaborative mechanism be staffed?
- Are there incentives available to encourage staff or agencies to participate?
- If relevant, do agencies have compatible technological systems?
- Have participating agencies developed online tools or other resources that facilitate joint interactions?

Collaborating agencies should identify the human, information technology, physical, and financial resources needed to initiate or sustain their collaborative effort. Many experts have emphasized that collaboration can take time and resources in order to accomplish such activities as building trust among the participants, setting up the ground rules for the process, attending meetings, conducting project work, and monitoring and evaluating the results of work performed.
Consequently, it is important for groups to ensure that they identify and leverage sufficient funding to accomplish the objectives. As noted below, in some instances specific congressional authority may be necessary in order to provide for the interagency funding of collaborative mechanisms. While not all collaborative mechanisms raise funding considerations, our work does point to a range of authorities that have been used for funding them.

The National Defense Authorization Act required VA and the Department of Defense (DOD) to establish the Joint Incentive Fund program to identify and provide incentives for creative coordination and sharing initiatives at the facility, regional, and national levels. To facilitate the incentive program, Congress established a U.S. Treasury account to fund the Joint Incentive Fund activities and required DOD and VA each to contribute a minimum of $15 million each year to the account. This program is authorized through September 2015.

Additionally, as we reported in 2011, in the case of the 2009 H1N1 influenza pandemic, Congress appropriated more than $6 billion in direct and contingent funding into an HHS emergency fund in order to prepare for and respond to an influenza pandemic. This appropriation contained authority for the Secretary of HHS to transfer funds to other HHS accounts and to other federal agencies, which the Secretary used to transfer funds to the Departments of Defense, Veterans Affairs, State, and Agriculture to assist with the response.

In another example, as we reported in 2007, Federal Executive Boards (FEBs) are supported by a host agency, usually the agency with the greatest number of employees in the region. These host agencies provide varying levels of staffing, usually one or two full-time positions—an executive director and an executive assistant. Some agencies also temporarily detail employees to the FEB staff to assist their local boards and to provide developmental opportunities for their employees. Additionally, the FEBs are supported by member agencies through contributions of funds as well as in-kind support, such as office space, personal computers, telephone lines, and Internet access. We noted in our report that FEBs had previously been limited in the methods available to fund operations because of the governmentwide restriction against interagency financing of boards, commissions, councils, committees, and similar groups without statutory approval. Under this restriction, it was permissible for one participant agency with a primary interest in the success of the interagency venture to pay the entire cost of supporting the functions and administration of the group, but it was not permissible to support the group through cash and in-kind support from participating agencies. FEBs were exempted from this restriction in 1996, which then permitted interagency financing through member agency contributions of funds and in-kind support.

In addition, working capital funds have been used to finance the sharing and leveraging of business-like services between agencies. As we reported in 2010, the National Institute of Standards and Technology (NIST) serves as the focal point for conducting scientific research and developing measurements, standards, and related technologies in the federal government. In 1950, Congress established NIST's working capital fund, giving the agency broad statutory authority to use the fund to support any activities NIST is authorized to undertake as an agency.
NIST’s working capital fund is a type of intragovernmental revolving fund. These funds—which include franchise, supply, and working capital funds—finance business-like operations. An intragovernmental revolving fund charges for the sale of products or services it provides and uses the proceeds to finance its operations. In another example, as we reported in 2011, federal customer agencies use the Department of the Census’ nationwide polling structure, expertise, and address lists, which would otherwise be uneconomical for them to replicate on their own. For example, Census supports HUD’s American Housing Survey by gathering information on the size and composition of the housing inventory in the United States. Regardless of the funding model used, participating agencies need to find compatible methods for tracking funds for accountability. For example, the Mérida Initiative is a partnership between the United States and Mexico to combat narcotics. As we noted in a December 2009 report, tracking funds for the Mérida Initiative was difficult because each of the three bureaus in the Department of State managing Mérida funds had a different method for tracking the money. Each bureau used different budgeting terms as well as separate spreadsheets for the Mérida funds it administered, and the State Department had no consolidated database for these funds. Relying on agencies to participate can present challenges for collaborative mechanisms. In cases where staff participation was insufficient, collaboration often failed to meet key objectives and achieve intended outcomes. According to experts, establishing “win-win” arrangements, and aligning incentives to reward participation, makes individuals and organizations more likely to participate in collaborative arrangements, particularly in cases where participation is voluntary. In a March 2012 report, we identified a number of individual incentives that can be used to bolster participation in collaborative efforts, such as: Factoring participation into promotion decisions: Personnel may be encouraged to participate in collaborative programs if agencies factor interagency experience into their promotion decisions. Providing public recognition: In addition to providing incentives through performance management systems, agencies can publicly acknowledge or reward participants in other ways. For example, agencies could confer awards to individuals who exhibit exemplary teamwork skills or accomplishments during an interagency rotation. GAO-12-386. GAO, Biosurveillance: Developing a Collaboration Strategy Is Essential to Fostering Interagency Data and Resource Sharing, GAO-10-171 (Washington, D.C.: Dec. 18, 2009). data systems compatible with HUD’s as part of their work with the Interagency Council on Homelessness. If appropriate, have the participating agencies documented their agreement regarding how they will be collaborating? A written document can incorporate agreements reached in any or all of the following areas: Leadership; Accountability; Roles and responsibilities; and Resources. Have participating agencies developed ways to continually update or monitor written agreements? Our prior work found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively. As we have previously reported, having a clear and compelling rationale to work together—such as that described above—is a key factor in successful collaborations. 
Agencies can overcome significant differences when such a rationale and commitment exist. Not all collaborative arrangements need to be documented through written guidance and agreements, particularly those that are informal. However, we have found that at times it can be helpful to document key agreements related to the collaboration. One expert we interviewed stated that the action of two agencies articulating a common outcome and roles and responsibilities in a written document was a powerful tool in collaboration. Accordingly, we have recommended many times that collaborations would benefit from a formal written agreement, such as a memorandum of understanding (MOU). For example, in 2008, we recommended that the Chairman of the Council on Environmental Quality, working with the Secretaries of Agriculture and the Interior, direct an interagency task force to identify goals, actions, responsible work groups and agencies, and time frames for carrying out the actions needed to implement the Cooperative Conservation Initiative, including collaborative resource management, and document these through a written plan, memorandum of understanding, or other appropriate means. This recommendation was implemented in January 2009, when the Council on Environmental Quality and other departments involved in cooperative conservation signed an MOU to create a framework for collaborative resource management.

We have also reported that written agreements are most effective when they are regularly updated and monitored. For example, we reported in 2008 that the Small Business Administration (SBA) and the Rural Development offices of the U.S. Department of Agriculture (Rural Development) entered into an MOU in 2000 that provided an approach to collaborate on rural lending activities. The MOU expired in 2003, and SBA and Rural Development did not appear to have implemented the MOU when it was active. We found that the ineffective implementation of the MOU had likely contributed to the sporadic and limited amount of collaboration that was taking place between the two agencies. (GAO, Rural Economic Development: Collaboration between SBA and USDA Could Be Improved, GAO-08-1123 (Washington, D.C.: Sept. 18, 2008).)

We are sending copies of this report to the appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in enclosure IV.

To identify mechanisms that the federal government uses to lead and implement interagency collaboration, as well as issues to consider when implementing these mechanisms, we conducted a literature review of academic work, interviewed a number of experts in governmental collaboration, and analyzed a sample of our prior work. Specifically, we conducted a literature review of scholarly and peer-reviewed articles, as well as magazine and journal articles. The review relied on Internet search databases to identify literature published or issued between January 2006 and August 2011. The search of the published research databases produced 75 articles.
We reviewed these articles to further determine the extent to which they were relevant to our engagement, that is, whether they discussed approaches used by the federal government to lead and implement interagency collaboration or provided definitions of collaborative governance or interagency collaboration. We found that 24 (32 percent) of these documents were relevant to our objectives. Specifically, 11 articles discussed mechanisms used by the federal government to lead and implement interagency collaboration, 5 articles provided definitions of collaborative governance or interagency collaboration, and 8 articles discussed the benefits and challenges of a specific interagency collaborative approach. The remainder of the documents did not meet our criteria because they discussed public-private partnerships, collaboration between state and local government agencies, or collaboration between foreign government agencies.

We interviewed the following experts in governmental collaboration:

- Robert Agranoff – Professor Emeritus, Indiana University
- Eugene Bardach – Professor Emeritus, University of California, Berkeley
- G. Edward DeSeve – Former Special Advisor to the President for Recovery Implementation
- Heather Getha-Taylor – Assistant Professor, University of Kansas
- Dwight Ink – President Emeritus, Institute of Public Administration and Fellow of the National Academy of Public Administration
- Frederick Kaiser – Congressional Research Service (retired)
- John Koskinen – Former Deputy Director for Management of the Office of Management and Budget and Chair of the President's Council on Year 2000 Conversion
- Janine O'Flynn – Associate Professor, Australian National University
- Rosemary O'Leary – Professor, Syracuse University
- Stephen Page – Associate Professor, University of Washington
- Barbara Romzek – Professor, University of Kansas
- Ronald Sanders – Former Chief Human Capital Officer, Office of the Director of National Intelligence
- Thomas Stanton – Member of the Board of Directors, National Academy of Public Administration, and Fellow of the Center for the Study of American Government at Johns Hopkins University

We conducted in-depth interviews with each expert using a standard set of questions. We asked them to comment on a draft list of mechanisms and discussed key issues to consider in implementing collaborative mechanisms. We supplemented the information we received during the interviews with information that had been published by the experts. We also met with staff from the Congressional Research Service, who have studied presidential advisors.

Additionally, we conducted an analysis of our prior reports that addressed collaborative mechanisms and key implementation issues. To do this, we first selected a judgmental sample of reports that were published between January 2005 and August 2011 and that contained detailed information regarding collaborative mechanisms. During this search, we identified over 200 reports. In order to reduce the size of the sample, we selected reports that met two or more of the following criteria: discussed collaboration between more than one federal department, included a mechanism for collaboration, and provided an in-depth discussion of the collaborative mechanism. To make our final selection, we identified reports that we generally agreed met the criteria and resolved any disagreements over the selection of reports. To refine the sample and ensure that we covered collaboration across the federal government, we divided the reports by topic area and selected reports to ensure that each area was covered.
The reports fell into the topic areas listed in table 1. We assessed the depth of each report's discussion of collaborative mechanisms and constructed a sample to ensure representation of the range of topic areas and mechanism types. In total, we selected 36 reports that met our criteria.

To identify our final list of collaborative mechanisms, we reviewed the 36 reports in our sample to identify all of the mechanisms, and variations of the mechanisms, that were included. We then organized and grouped the mechanisms according to the main types that we found in our review. For example, we identified three distinct mechanisms that involved positions and personnel details, including interagency collaborator positions, liaisons, and personnel details between agencies. Our goal was to identify and understand the major mechanisms that have been reported in academic literature and in our prior work examining interagency collaboration. As a result, we did not attempt to identify all possible collaborative mechanisms. After developing a draft list of mechanisms, we shared it with our collaboration experts and practitioners to gather their feedback and identify any additional mechanisms, as discussed above. Five experts agreed that our list of mechanisms was complete, and we made a number of technical changes to the list based on the feedback we received.

This engagement had two phases, which required some updating of the sample to include more recent reports. As a result, we used the GAO database to find an additional 100 reports, published between August 2011 and June 2012, bringing the total number of reports in our sample to 300. Through this process, we selected an additional 9 of the 100 reports, which brought the total number of reports we reviewed to 45. We did not add any mechanisms or key features to the list as a result of this judgmental sample. We relied on this sample to supplement the analysis of key issues to consider in implementing the interagency collaborative mechanisms.

To identify the purposes for which collaborative mechanisms can be used, we reviewed our sample of academic literature, discussed the purposes of interagency collaboration in our interviews with experts, and analyzed our judgmental sample of prior work. We found that academic experts and practitioners have used a variety of methods to categorize the purposes of collaborative mechanisms. The purposes we identified in our analysis are supported by a number of experts and by our prior work.

To identify the categories of issues for consideration, we identified issues that had been raised in expert interviews and in the reports that we reviewed. We selected and organized the issues into the key features that we present in this report based on factors such as the number of times issues were raised, the importance experts attached to issues, and the evidence of their importance that we found in prior GAO work. Additionally, where possible, we looked for areas of overlap between the issues that we identified and the practices that we identified in GAO-06-15. While we have generally found that when agencies address as many of these issues as possible it leads to more effective implementation of the collaborative mechanisms, we also recognize that there is a wide range of situations and circumstances in which agencies work together. Consequently, in some cases, addressing a few selected issues may be sufficient for effective collaboration.
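The sample sizes reported above lend themselves to a quick consistency check. The following minimal sketch simply re-tallies the counts taken directly from the text; it is illustrative only and not part of the original methodology.

```python
# Re-tally of the literature and report sample counts cited in this appendix.
relevant_articles = {"mechanisms": 11, "definitions": 5, "benefits_challenges": 8}
total_articles = 75

relevant = sum(relevant_articles.values())
print(f"Relevant articles: {relevant} of {total_articles} "
      f"({relevant / total_articles:.0%})")          # 24 of 75 (32%)

reports_identified = 200 + 100   # initial search plus phase-two update
reports_reviewed = 36 + 9        # initial selection plus phase-two additions
print(f"Reports identified: {reports_identified}")   # 300
print(f"Reports reviewed:   {reports_reviewed}")     # 45
```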
We conducted our work from July 2011 to September 2012 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this report.

Outcomes and Accountability: Have short-term and long-term outcomes been clearly defined? Is there a way to track and monitor progress toward the short-term and long-term outcomes? Do participating agencies have collaboration-related competencies or performance standards against which individual performance can be evaluated? Do participating agencies have the means to recognize and reward accomplishments related to collaboration?

Bridging Organizational Cultures: What are the missions and organizational cultures of the participating agencies? What are the commonalities between the participating agencies' missions and cultures, and what are some potential challenges? Have participating agencies developed ways for operating across agency boundaries? Have participating agencies agreed on common terminology and definitions?

Leadership: Has a lead agency or individual been identified? If leadership will be shared between one or more agencies, have roles and responsibilities been clearly identified and agreed upon? How will leadership be sustained over the long term?

Clarity of Roles and Responsibilities: Have participating agencies clarified the roles and responsibilities of the participants? Have participating agencies articulated and agreed to a process for making and enforcing decisions?

Participants: Have all relevant participants been included? Do the participants have full knowledge of the relevant resources in their agency? The ability to commit these resources? The ability to regularly attend activities of the collaborative mechanism? The appropriate knowledge, skills, and abilities to contribute?

Resources: How will the collaborative mechanism be funded? If interagency funding is needed, is it permitted? If interagency funding is needed and permitted, is there a means to track funds in a standardized manner? How will the collaborative mechanism be staffed? Are there incentives available to encourage staff or agencies to participate? If relevant, do agencies have compatible technological systems? Have participating agencies developed online tools or other resources that facilitate joint interactions?

Written Guidance and Agreements: If appropriate, have the participating agencies documented their agreement regarding how they will be collaborating? A written document can incorporate agreements reached in any or all of these areas, including roles and responsibilities and resources. Have participating agencies developed ways to continually update or monitor written agreements?

In addition to the contact named above, Sarah Veale, Assistant Director, and Mallory Barg Bulman, Analyst-in-Charge, supervised the development of this report. Peter Beck, Martin De Alteriis, Don Kiggins, and Jasmin Paikattu made significant contributions to all aspects of this report. Karin Fangman provided legal counsel.

Many of the meaningful results that the federal government seeks to achieve—such as those related to protecting food and agriculture, providing homeland security, and ensuring a well-trained and educated workforce—require the coordinated efforts of more than one federal agency and often more than one sector and level of government.
Both Congress and the executive branch have recognized the need for improved collaboration across the federal government. The GPRA Modernization Act of 2010, which updates the Government Performance and Results Act of 1993, establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. Effective implementation of the act could play an important role in facilitating future actions to reduce duplication, overlap, and fragmentation.

GAO was asked to identify the mechanisms that the federal government uses to lead and implement interagency collaboration, as well as issues to consider when implementing these mechanisms. To examine these topics, GAO conducted a literature review on interagency collaborative mechanisms, interviewed 13 academic and practitioner experts in the field of collaboration, and reviewed their work. GAO also conducted a detailed analysis of 45 GAO reports, published between 2005 and 2012. GAO selected reports that contained in-depth discussions of collaborative mechanisms and covered a broad range of issues.

Federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, such as the President appointing a coordinator, agencies co-locating within one facility, or agencies establishing interagency task forces. These mechanisms can be used to address a range of purposes, including policy development; program implementation; oversight and monitoring; information sharing and communication; and building organizational capacity, such as staffing and training. Frequently, agencies use more than one mechanism to address an issue. For example, climate change is a complex, crosscutting issue that involves many collaborative mechanisms in the Executive Office of the President and interagency groups throughout government.

Although collaborative mechanisms differ in complexity and scope, they all benefit from certain key features, which raise issues to consider when implementing these mechanisms. For example:

Outcomes and Accountability: Have short-term and long-term outcomes been clearly defined? Is there a way to track and monitor their progress?

Bridging Organizational Cultures: What are the missions and organizational cultures of the participating agencies? Have agencies agreed on common terminology and definitions?

Leadership: How will leadership be sustained over the long term? If leadership is shared, have roles and responsibilities been clearly identified and agreed upon?

Clarity of Roles and Responsibilities: Have participating agencies clarified roles and responsibilities?

Participants: Have all relevant participants been included? Do they have the ability to commit resources for their agency?

Resources: How will the collaborative mechanism be funded and staffed? Have online collaboration tools been developed?

Written Guidance and Agreements: If appropriate, have participating agencies documented their agreement regarding how they will be collaborating? Have they developed ways to continually update and monitor these agreements?
Congress enacted the Coastal Zone Management Act in 1972 to balance the often competing demands for economic growth and development with the need to protect coastal resources. To accomplish the goals of the act, Congress established a framework for a voluntary federal and state coastal management partnership, the CZMP. The CZMP represents a unique federal-state partnership for protecting, restoring, and responsibly developing the nation's coastal communities and resources, according to program documents. The act identifies specific goals for state programs that fall into six broad focus areas, ranging from protecting and restoring coastal habitat to assisting with coastal community development efforts and improving government coordination and decision making (see table 1). States must submit comprehensive descriptions of their coastal management programs—which must be approved by the states' governors—to NOAA for review and approval. As specified in the act, to receive NOAA's approval for their state programs, states must, among other requirements:

designate coastal zone boundaries that will be subject to state management;

define what constitutes permissible land and water uses in coastal zones;

propose an organizational structure for implementing the state program, including the responsibilities of and relationships among local, state, regional, and interstate agencies; and

demonstrate sufficient legal authorities to carry out the objectives and policies of the state program, including the means by which a state will regulate land and water uses, control development, and resolve conflicts among competing activities in coastal zones to ensure their wise use.

The act provides states the flexibility to design programs that best address states' unique coastal challenges, laws, and regulations, and participating states have taken various approaches to developing and carrying out their programs. For instance, states generally use one of two organizational structures to implement their programs: (1) networked programs, which rely on multiple state and local agencies to implement the program, and (2) non-networked, or comprehensive, state programs, which administer all aspects of the program through a single centralized agency. The coastal management activities carried out also vary across states, with some states focusing on permitting, mitigation, and enforcement activities, while other states focus on providing technical and financial assistance to local governments and nonprofits for local coastal protection and management projects. If states make changes to their programs, such as changes in their coastal zone boundaries or organizational structures, the states must submit those changes to NOAA for review and approval.

The act includes two primary incentives to encourage states to develop coastal management programs and participate in the CZMP. First, participating states are eligible to receive federal funding from NOAA to support the implementation and management of their programs, which the agency receives annually through congressional appropriations. In fiscal year 2013, NOAA awarded participating states a total of approximately $61.3 million, a 9 percent decline from fiscal year 2008, when it awarded just over $67.5 million across participating states. NOAA awards CZMP funding to individual states across three fund types—administrative, enhancement, and coastal nonpoint program—according to requirements in the act (see table 2).
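The reported 9 percent decline can be verified directly from the rounded dollar amounts cited above. A minimal sketch, using only the figures from the text (in millions of dollars), so the result is approximate:

```python
# Approximate check of the reported 9 percent decline in CZMP awards,
# using the rounded totals cited in the text (millions of dollars).
fy2008_awards = 67.5
fy2013_awards = 61.3

decline = (fy2008_awards - fy2013_awards) / fy2008_awards
print(f"Decline, FY2008 to FY2013: {decline:.1%}")  # about 9.2%, consistent
                                                    # with the roughly 9% cited
```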
The majority of funding NOAA awards through the CZMP is administrative funding. Administrative funding, which requires state matching funds, supports general implementation of the state’s coastal management program. Under the act, NOAA may also award a maximum of $10 million annually in enhancement program funding to participating states. Enhancement funding is to be used by states to develop program changes, or enhancements, to their NOAA-approved programs in one or more of nine enhancement objectives specified in the act, as listed in table 2. In addition, Congress has generally provided direction on the total amount of funds to be awarded through the coastal nonpoint program to assist with states’ coastal nonpoint pollution control programs, which are programs to ensure states have necessary tools and enforceable authorities to prevent and control polluted runoff in coastal areas. According to NOAA officials, funding has not been provided for this program since fiscal year 2009, when nearly $3.4 million was awarded to states. States may also use other sources of funding for their coastal nonpoint pollution control programs, including administrative and enhancement funding. Second, federal agency activities in or affecting the uses or resources of a participating state’s defined coastal zone are required to be consistent to the maximum extent practicable with enforceable policies of the state’s program. Under this provision, known as federal consistency, states with approved programs must have the opportunity to review proposed federal actions for consistency with enforceable policies of their state programs. Types of federal actions that may be reviewed by states include federal agency activities, such as improvements made to a military base; licenses or permits to nonfederal applicants; financial assistance to state and local governments; and outer continental shelf activities, such as oil and gas development. If a state finds that a federal activity is not consistent with the state’s enforceable policies, the state can object to the activity and work with the federal agency to resolve any differences between the proposed activity and state policies. All participating state programs have developed federal consistency review processes. Thirty-four out of 35 eligible states have federally approved coastal management programs (see fig. 1). Most state programs have been in existence for more than 30 years, with the earliest program approved in 1976, and 29 states having received federal approval for their programs by 1986. The most recent state to begin participating in the program is Illinois, which received federal approval in January 2012. NOAA’s Office of Ocean and Coastal Resource Management (OCRM) is responsible for general administration and oversight of the CZMP. NOAA plans to merge the OCRM with its Coastal Services Center—an office that provides coastal-related mapping tools and data; training on various coastal management issues such as climate adaptation and coastal restoration design and evaluation; and technical and other assistance to local, state, and regional coastal organizations—into a single office by the end of 2014. 
Under the current and planned office structure, NOAA officials are responsible for approving state programs and any program changes; administering federal funding to the states; providing technical assistance to states, such as on the development of the 5-year assessment and strategy reports that identify states' priority needs and projects addressing one or more of the nine enhancement objectives required for enhancement funding; and managing the CZMP performance measurement system. NOAA assigns coastal management specialists to work with individual state programs. As part of its administration of the program, NOAA evaluates program performance using its CZMP performance measurement system. NOAA began developing a framework for this performance measurement system in 2001, started piloting it in 2004, and fully implemented the system by 2008. The system consists of 15 performance measures that generally correspond with the goals of the act, and two additional measures to track state financial expenditures. The 17 total performance measures incorporate individual data elements, plus additional subcategories of information that state programs collect and report into the system annually (see app. II).

In addition, NOAA evaluators, who are in a different NOAA division than the specialists, are responsible for conducting individual state program evaluations, which are required under the act. State program evaluations are designed to examine the extent to which states have (1) implemented their approved programs, (2) addressed coastal management needs identified in the act, and (3) adhered to the terms of CZMP funds awarded through cooperative agreements. NOAA's state program evaluation reports identify state accomplishments and make recommendations for improving states' programs. NOAA's recommendations are classified as either necessary actions—actions a state must take by a specific date, such as the next regularly scheduled evaluation—or program suggestions—actions NOAA believes a state should take to improve its program. NOAA may withdraw approval for a state's program and financial assistance in cases where states do not address necessary actions. NOAA had not withdrawn approval for any state program as of the end of fiscal year 2013 and, according to NOAA officials, few necessary actions have been identified in past state evaluations.

In 2008, we examined NOAA's process for awarding financial assistance to states and how the agency evaluated the effectiveness of the CZMP. Of the seven recommendations we made in 2008, NOAA disagreed with one recommendation, that the agency develop performance measures to evaluate the effectiveness of state programs in improving processes; NOAA agreed with the other six recommendations and has taken some actions to address them, as described in table 3.

During fiscal years 2008 through 2013, the 34 participating states allocated a total of nearly $400 million in CZMP funds for a variety of activities, generally related to the broad goals for state programs outlined in the Coastal Zone Management Act. Each year, NOAA analyzes its cooperative agreements with states for CZMP funding and categorizes the states' CZMP funding allocations as they correspond with the six focus areas based on the broad goals in the act, along with a seventh category to capture state program administrative costs, such as general program operations, supplies, and rent.
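NOAA's categorization of cooperative-agreement funding can be thought of as a simple grouping-and-aggregation exercise. The sketch below is a hypothetical illustration of that idea, not NOAA's actual method or data: the line items and dollar amounts are invented for the example, and only the focus-area labels mirror the categories described above.

```python
from collections import defaultdict

# Hypothetical cooperative-agreement line items: (focus area, dollars).
# Amounts are invented for illustration; labels mirror the text above.
line_items = [
    ("government coordination", 400_000),
    ("coastal habitat", 250_000),
    ("coastal hazards", 150_000),
    ("government coordination", 100_000),
    ("program administration", 100_000),
]

# Sum allocations within each focus-area category.
totals = defaultdict(int)
for focus_area, dollars in line_items:
    totals[focus_area] += dollars

# Report each category's share of the total, largest first.
grand_total = sum(totals.values())
for focus_area, dollars in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{focus_area:25s} ${dollars:>9,}  ({dollars / grand_total:.0%})")
```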
According to NOAA's analysis, during fiscal years 2008 through 2013, states' allocations of CZMP funds varied across the seven categories, with about half concentrated in support of activities related to two focus areas, government coordination and coastal habitat (see fig. 2). NOAA officials told us that, while states have the flexibility to design and implement programs that best meet their unique needs, the agency does influence how states allocate CZMP funds through (1) NOAA's review and approval of states' 5-year assessment and strategy reports required for enhancement funding, in which participating states prioritize projects that support program improvements, and (2) NOAA's periodic state program evaluations, in which NOAA outlines necessary actions or makes program suggestions that can influence state program activities. NOAA officials said that they also informally shape or influence state program activities through ongoing discussions with state program officials about funding proposals or specific projects, such as how projects might be adjusted to address NOAA priorities. Examples of activities for which participating states allocated CZMP funds during fiscal years 2008 through 2013 in each of the six focus areas include the following:

Government coordination. States allocated CZMP funds for activities including state and regional planning efforts that involve coordination among multiple levels of government and stakeholders to address complex and controversial coastal issues, such as comprehensive planning of ocean and nearshore areas, energy facility siting, or special area management planning; federal consistency activities; technical assistance to local governments; and public outreach and education on coastal issues, including website development and publications about a state program's activities. According to NOAA's analysis of cooperative agreements with states for CZMP funding, states allocated the largest amount of CZMP funding during the 6-year period—about 27 percent of total funding—to government coordination activities. We found that a number of state programs use CZMP funds to support participation in regional organizations involving ocean planning activities that entail coordination across federal, state, and local governments. For example, state program officials in some Northeast and Mid-Atlantic states participate in regional organizations, such as the Northeast Regional Ocean Council and the Mid-Atlantic Regional Council on the Ocean, that have ocean resource data collection and planning efforts under way. We also found that most states we reviewed provide some type of technical or financial assistance to local governments to support local-level coastal management activities and projects.

Protecting Coastal Habitat in Texas
The Texas state program used coastal zone funds to support a multiyear marsh restoration project on the Texas Gulf Coast near Corpus Christi. Over the past 60 years, about 340 acres of coastal marsh habitat were lost due to the construction of an adjacent highway and subsequent erosion. A local nonprofit organization began restoring the marsh in 2005. The project involved scooping sand, clay, and shells from the bay bottom and piling the material into terraces and mounds; planting native grasses on the terraces to stabilize the structures and provide habitat; and constructing an outer rock berm to protect the new marsh area from strong waves in the bay.
Project officials told us Texas's state program provided about $1 million in coastal zone funding, about 20 percent of the project's total cost, to the nonprofit organization responsible for the project. Other funding to carry out the project was provided by the EPA, the U.S. Fish and Wildlife Service, state government sources, and grants from private foundations. According to project officials, the project was completed in spring 2014 and has resulted in 160 acres of restored marsh that provide habitat for fish, crabs, shrimp, nesting birds, sea grass, and other plants and animals. The project also created new opportunities for public recreation, such as fishing and kayaking, and the marsh protects the adjacent highway from coastal hazards, such as storms, according to project officials.

Coastal habitat. States allocated CZMP funds for coastal habitat protection and restoration activities including the acquisition or placement of easements on coastal lands; restoration of coastal habitats; data collection and mapping of coastal habitats; development of plans for habitat acquisition, restoration, and other habitat management needs; implementation of permitting and enforcement programs that protect coastal habitat through planning and regulation of development; and support of land management programs, such as those for coastal preserves and parks. States also allocated CZMP funds for public outreach and education activities that focused on coastal habitat protection and restoration. According to NOAA's analysis, approximately 24 percent of CZMP funds awarded during fiscal years 2008 through 2013 were allocated to coastal habitat protection and restoration activities. According to NOAA's CZMP performance measurement system data from 2008 through 2013, states reported that they used CZMP funds to protect nearly 23,300 acres of coastal habitat through acquisition or easement, restore nearly 37,400 acres of coastal habitat, and, through regulatory programs, protect more than 123,000 net acres of coastal habitat.

Coastal hazards. States allocated CZMP funds for activities that help coastal communities minimize risks from coastal hazards, such as storms, tsunamis, and sea-level rise, and improve hazard awareness and understanding. Such activities include assessment and planning efforts, such as developing mitigation plans, risk and vulnerability assessments, and data collection and mapping to identify and manage development in areas vulnerable to coastal hazards; implementation of hazard mitigation projects; implementation and enforcement of hazard policies, regulations, and requirements; and education and training on coastal hazard topics. According to NOAA's analysis of cooperative agreements with states for CZMP funding, about 13 percent of CZMP funds awarded in fiscal years 2008 through 2013 were allocated for coastal hazards projects. Coastal hazards was the only focus area where the share of CZMP funds allocated steadily increased over the 6-year period, from roughly 7 percent in fiscal year 2008 to about 16 percent in fiscal year 2013. Most state program officials we spoke with identified their work helping communities reduce future damage from hazardous events and from impacts of climate-related sea-level rise as among their more significant projects.
NOAA also identified coastal hazards work as a priority area and, in 2011, through the agency's funding guidance, began encouraging states to use CZMP funding for projects that improve the resiliency of coastal communities in adapting to the impacts of coastal hazards and climate change. In addition, many of the projects awarded funding under the competitive Projects of Special Merit Program in fiscal years 2012 and 2013 were identified by states as addressing, at least in part, coastal hazards, according to NOAA officials. For example, South Carolina's project to study tidal inlet dynamics and erosion and Maine's adaptation planning project for its coastal parks both addressed coastal hazard issues. NOAA's CZMP performance measurement system data for 2008 through 2013 show that states reported working with more than 410 communities to reduce risks from coastal hazards and with nearly 230 communities to improve public awareness of coastal hazards issues.

A Coastal Water Quality Monitoring and Modeling Project in Florida
Estuaries—such as Sarasota Bay, which spans about 56 miles along the southwest Florida coast—are important, productive ecosystems that provide habitat for a diversity of species. Nonpoint source pollution carried through runoff influences the health of Sarasota Bay, which has limited tidal flushing, no major tributary, and receives most of its freshwater from rainfall and associated runoff. Florida's coastal management program provided nearly $150,000 in coastal zone funds to support a multiyear water quality monitoring and modeling study in Sarasota Bay led by the Florida Fish and Wildlife Research Institute. The study was designed to help determine major factors affecting the ecological health of the bay. Specifically, coastal zone funding was used for statistical modeling to differentiate the effects of polluted runoff into the bay during storm events from the effects of natural algal or other natural sources of nutrients in the bay. Florida state program officials told us that understanding ecological responses in estuaries can facilitate planning to minimize potential impacts and help maintain overall ecosystem health. Continued water quality monitoring and modeling is being completed in the bay with other funding sources, according to Florida officials.

Coastal water quality. States allocated CZMP funds for water quality permitting and enforcement activities, such as permitting of storm water discharges; activities and projects related to water quality management, including vegetative plantings or other nonstructural shoreline erosion control projects; water quality monitoring; activities and projects for local governments to improve water quality management; technical assistance, data collection, mapping, planning, and policy development to address water quality issues; marine debris and other coastal cleanup or pollution prevention programs; projects and activities that provide technical assistance to marinas to reduce nonpoint source pollution; and public outreach and education on water quality issues. These activities include those that support states in implementing their coastal nonpoint source pollution control programs. According to NOAA's CZMP performance measurement system data, from 2008 through 2013, states reported that they worked with more than 680 communities to develop nonpoint source pollution management policies and plans or complete related projects, and removed 27 million pounds of marine debris through coastal cleanup activities.
Coastal community development. States allocated CZMP funds for activities including planning and construction to support the redevelopment of urban waterfronts, ports, and harbors; technical assistance to local governments related to waterfront redevelopment; community planning, land-use planning, green infrastructure planning, and other sustainable development efforts; and public outreach and education activities specific to coastal community development issues. According to CZMP performance measurement system data from 2008 through 2013, states reported that they worked with more than 580 coastal communities to promote development and growth in ways that protect coastal resources and with more than 250 communities to redevelop ports and waterfronts.

Public access. States allocated CZMP funds for activities including creating new public access sites through easements or rights-of-way; enhancing existing public access through trails, handicap-accessible features, or educational signage; developing plans, collecting data, and providing technical assistance to local governments on public access planning; and conducting public outreach and education activities on public access issues. According to NOAA's analysis, states allocated the least amount of CZMP funding (about 6 percent of total CZMP funding) to activities that improve public access to the coast. Unlike in other focus areas, a number of states did not allocate funds for public access. According to NOAA officials, some states may not need to use CZMP funding to support public access projects, for example, because they already have sufficient public access to coastal areas. In total, according to CZMP performance measurement system data from 2008 through 2013, states reported that, with CZMP funds and through regulatory programs, they helped create nearly 700 new public coastal access sites and enhance nearly 1,500 existing sites.

State program officials told us that CZMP funding is important because it can help leverage other financial resources and provides sustained, multiyear funding for projects. We found that CZMP-funded projects and activities often involved partnerships with various entities and used multiple sources of funding. According to state program officials, CZMP funds were often the catalyst for obtaining additional financial assistance or other resources. For example, we visited a $5.2 million, multiyear marsh restoration project along the Texas Gulf coast that received nearly 20 percent of overall project funding through the CZMP and additional financial support from eight other federal, state, and private sources. Representatives from the nonprofit organization responsible for managing the project told us that CZMP funds received during the initial stages helped attract other funding partners needed for such a large-scale restoration project. Similarly, Virginia's program used $6,000 of its CZMP funding to leverage staff from six partner organizations to plan and conduct a Marine Debris Summit that laid the groundwork for developing a marine debris plan and establishing priorities for future work, which state program officials expect will serve as a model for other Mid-Atlantic states. Most of the state programs we reviewed also provide competitive grants or offer other assistance to leverage local resources to address coastal issues.
For example, Florida's program competitively awards a portion of its administrative funds annually through grants to coastal counties and municipalities for projects that help communities address a wide range of coastal issues, and these grants require local entities to match the state grants. Similarly, Maine's program uses CZMP funds annually to provide competitive grants to coastal communities for planning activities that support harbor management and development or improve shoreline access, but actual implementation of the projects must be funded through other sources.

NOAA's two primary performance assessment tools, the CZMP performance measurement system and state program evaluations, have limitations, even with changes NOAA has made since 2008, and NOAA uses the performance information it collects to a limited extent in managing the CZMP. We found that NOAA's CZMP performance measurement system does not align with some key attributes of successful performance measures. In addition, NOAA's method for selecting stakeholders to survey during state program evaluations may be susceptible to collecting incomplete and biased information because, in part, it relies on a single selection criterion. Furthermore, NOAA makes limited use of the performance information it collects—for instance, NOAA does not use data from its performance measurement system or its evaluations of state programs to improve implementation of the CZMP at the national level—and, as a result, may not be realizing the full benefit of collecting such information.

NOAA's CZMP performance measurement system, which the agency developed in response to congressional direction to assess the national impact of the CZMP, has limitations, even with changes the agency has made to the system since our 2008 report. Specifically, NOAA has made changes to several aspects of the data collection and review components of its system, including the following:

establishing a requirement, in 2010, that state programs submit documentation of source information to support their data submissions, such as documentation of the public access sites being reported for public access performance measures;

refining, in 2009, 2010, and 2011, the names and definitions of some performance measures with the intention of clarifying the activities that a given measure is intended to capture; and

issuing internal guidance, in 2010, for NOAA staff to review state-submitted data and accompanying documentation to ensure that only eligible activities are reported by the states, among other things.

With these changes, the system aligns with some key attributes of successful performance measures. In our past work, we found that successful performance measures typically align with key attributes including reliability, clarity, balance, numerical targets, and limited overlap, among others (see app. III for a complete list of key attributes we identified). In our current review, we found that some of the changes NOAA made to its CZMP performance measurement system since 2008 are consistent with such key attributes. For example, NOAA's requirement that state programs submit documentation of source information, and its internal guidance for how staff are to review this documentation, correspond with the key attribute of ensuring the reliability of performance measures.
In addition, NOAA's steps to refine the names and definitions of certain performance measures demonstrate the key attribute of clarity, meaning that measures are clearly stated and have names and definitions consistent with the methodology used to calculate them. On the other hand, we found limitations in the CZMP performance measurement system where it did not align with the key attributes. For instance, in 2011, NOAA eliminated its coastal water quality focus area, one of the six focus areas based on the goals of the CZMP outlined in the act. In eliminating this focus area, NOAA removed five related performance measures; states continue to report on one measure related to coastal water quality, but do so under another focus area, coastal community development. Balance, or having a set of measures that cover a program's various goals, is a key attribute of successful performance measures. We found that having measures that correspond to various program goals provided agencies with a complete picture of performance. NOAA officials indicated that they eliminated the coastal water quality focus area based on a 2011 performance measurement system workgroup's recommendation to streamline the measurement system. They further explained that they took this action because state programs were no longer receiving coastal nonpoint program funding, which often funded activities in support of coastal water quality, and because activities under this focus area were often tied to the coastal community development focus area. In speaking with some state program officials, however, we found that improving coastal water quality remains a priority for their programs even without coastal nonpoint program funding. Similarly, representatives from the Coastal States Organization's coastal water quality workgroup indicated that many state programs have made progress in developing and implementing coastal nonpoint pollution control programs, but that these results are not quantified by NOAA.

In addition, NOAA has not established numerical targets for the measures in its CZMP performance measurement system for the purpose of tracking progress or assessing the performance of the CZMP. Our past work found that numerical targets are a key attribute of successful performance measures because they allow managers to compare planned performance with actual results. In 2008, we recommended that NOAA establish numerical targets for performance measures to help track progress toward meeting program goals and help assess overall CZMP effectiveness. NOAA's 2011 performance measurement system workgroup also recommended that NOAA set targets to help it more effectively measure and communicate CZMP performance. NOAA agreed with these recommendations, but it has not established numerical targets for the measures in its CZMP performance measurement system to assess CZMP performance. NOAA officials explained that state programs vary widely, making it difficult to set targets at the national level. Officials also said that they first need to review the performance measures before they assess the feasibility of developing numerical targets. NOAA officials added that NOAA has set numerical targets for four CZMP performance measures, which are included in Commerce's department-wide goals related to environmental stewardship.
NOAA officials told us that they considered historical performance measure data and state programs' planned strategies when establishing these targets, but they do not use them to assess CZMP performance. We continue to believe that, without setting numerical targets for the CZMP performance measurement system, NOAA will not have a benchmark to help it determine the extent to which the CZMP may be meeting expectations.

Finally, the CZMP performance measurement system includes performance measures that involve the collection of data by state programs that are already available to NOAA from other sources. Under limited overlap, another key attribute of successful performance measures, measures should produce new information beyond what is provided by other data sources; redundant or unnecessary performance information costs resources and clouds the bottom line by making managers sort through excess information. We found that the CZMP performance measurement system includes at least two financial measures whereby states collect and submit financial expenditure data similar to data states already provide NOAA through their cooperative agreements. NOAA officials told us that, in developing the CZMP performance measurement system, they anticipated that including such measures would be useful for tracking the amount of CZMP funding used in different focus areas each year. However, NOAA used the financial information from its CZMP performance measurement system only to prepare a one-time summary of performance measure data published in 2013. In contrast, it uses financial information drawn from cooperative agreements on an annual basis to analyze states' planned uses of CZMP funding. NOAA officials acknowledged that they may need to review the utility of requiring state programs to collect financial expenditure data for the performance measurement system. By requiring states to collect and submit financial data similar to data that they already provide in their cooperative agreements, and making limited use of these data, NOAA may be unnecessarily burdening state programs with data collection requirements.

Several state program officials we interviewed told us that collecting data for the numerous data elements under the 17 performance measures is a time- and resource-intensive activity, with a few stating that this is particularly true relative to the amount of CZMP funds they receive. Some indicated, for instance, that they spend 30 staff days or more per year collecting these data. State officials said that data for the financial measures, in particular, are among the most time-consuming to collect and report to NOAA. Other state officials told us that collecting data on the number of educational and training events and participants for each focus area is especially time-consuming, with one official noting that collecting data on the number of participants is particularly burdensome when events are hosted by parties other than the program itself. NOAA officials told us they recognized the need to continue to review and potentially streamline or revise the CZMP performance measurement system, and that they intend to do so once the merger of OCRM and the Coastal Services Center is complete, which they expect to occur by the end of 2014.
In the interim, NOAA officials said that, at the beginning of fiscal year 2014, they initiated an effort to assess all performance measures collected by the various programs within the two offices, including the CZMP, to determine which measures may be most effective in tracking and communicating progress toward goals identified in the merged office's strategic plan. NOAA officials said they are committed to developing a strong framework for evaluating the performance of all programs under the merged coastal management office. However, the agency has not documented the approach it plans to take for these efforts. Federal internal control standards state the need for federal agencies to establish plans that encompass the actions the agency will take to help ensure goals and objectives can be met. Without a documented approach for how it plans to assess its CZMP performance measurement system—including the scope and criteria it will use, such as how it will ensure its measures align with key attributes of successful performance measures—NOAA cannot demonstrate that its intended effort will improve its CZMP performance measurement system.

In 2013, NOAA revised its process for conducting state program evaluations, which are required under the Coastal Zone Management Act to assess state programs' adherence to the act's requirements, but we identified a limitation in NOAA's method for sampling stakeholders under the revised process. According to NOAA documents, the purpose of the revisions was to conduct evaluations more efficiently, at a reduced cost, while continuing to meet the evaluation requirements outlined in the act. In revising its state program evaluations, NOAA made changes in the timing and methods for collecting information from participating states (see table 4). A NOAA official estimated that the agency's revised evaluation process will save the agency approximately $236,000 annually.

NOAA began evaluating state programs using its revised process at the beginning of fiscal year 2014 with evaluations of seven state programs. We did not evaluate NOAA's implementation of its revised state program evaluations because NOAA had not completed its first cycle at the time of our review and, therefore, it was too early to assess the effectiveness of the revisions. However, we did assess NOAA's revised evaluation design against our and others' work on program evaluations to identify standards for strong evaluation design. We were unable to evaluate the qualitative components of the revised evaluation design—including the change in the scope of the evaluations from NOAA's review of all aspects of each state program to a review of a few areas determined by NOAA—because the results of using these methods cannot be fully assessed until the evaluations have been conducted. We did, however, evaluate the steps NOAA laid out in its guidance on its methods for collecting information and identified a limitation in its method for sampling stakeholders to survey. Under its revised evaluation process, NOAA relies in part on information obtained through stakeholder surveys, but we found that, through its method of sampling stakeholders, the agency may be susceptible to collecting incomplete and biased information.
According to NOAA guidance on its revised evaluations, stakeholder surveys are intended to provide information about stakeholders' perspectives and opinions across a range of topics, from a state program's top three strengths and weaknesses to opportunities for improving a program's federal consistency and permitting processes. The guidance states that NOAA will use stakeholder survey responses to identify evaluation target areas, as well as to obtain information about the extent to which a state program is performing effectively in areas outside of the target areas. NOAA officials indicated that they plan to analyze survey results by collating respondents' answers to identify common themes. NOAA evaluators will identify a sample of stakeholders to survey from 12 categories of organizations that stakeholders represent, including federal agencies, state agencies, nonprofit organizations, academic institutions, and local businesses and industries. According to NOAA officials, they adopted the criterion of stakeholder categories to ensure that stakeholders whose views were not consistently represented in the former evaluations—such as those from local businesses and industries—are included in evaluations conducted under the revised process. NOAA evaluators will select stakeholders from these 12 categories using a list of potential survey respondents compiled by state program officials and the NOAA specialists working with the state.

According to the Office of Management and Budget's Standards and Guidelines for Statistical Surveys, a survey sampling method should yield the data required to meet the objectives of the survey. Our previous work has found that strong program evaluations rely on data that sufficiently reflect the activities and conditions a program is expected to address. Because NOAA's stakeholder sampling method is guided by one criterion—categories of stakeholder organizations—NOAA may not collect information that reflects the various activities and aspects of the state programs. Specifically, under the act, NOAA is required to evaluate the extent to which state programs have addressed coastal management needs reflecting the six focus areas based on the goals identified in the act. In the absence of additional criteria for selecting stakeholders to survey, NOAA may select a sample of stakeholders whose work with a state program does not span all of the act's goals, potentially leaving NOAA without information to inform its evaluation of a state's performance on one or more goals. Such an information gap could be significant because stakeholder surveys are intended to be a main source of information on how well a program is performing in areas beyond those identified as target areas.

Furthermore, when using a nonprobabilistic sampling method, such as that being employed by NOAA for its stakeholder surveys, the Office of Management and Budget's survey guidelines state that agencies should demonstrate that they used an impartial, objective method to include or exclude people or organizations from a sample. Our previous work on program evaluation also found that evaluation data should be sufficiently free of bias or other errors that could lead to inaccurate conclusions. Because state program officials responsible for identifying potential stakeholders to survey have a vested interest in their programs, NOAA's process is susceptible to collecting biased information. NOAA specialists who work with state programs also contribute to the selection process.
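One way to mitigate the single-criterion limitation described above would be to stratify the stakeholder sample across two dimensions, organization category and focus area, so that every focus area is represented and inclusion does not depend on who compiled the list. The sketch below is a hypothetical illustration of that idea, not NOAA's method: the stakeholder records and the selection rule are invented for the example.

```python
import random

# Hypothetical stakeholder records: (name, organization category, focus area).
# Categories and focus areas mirror those discussed above; the records and
# the selection rule are invented for illustration.
stakeholders = [
    ("A", "federal agency", "coastal habitat"),
    ("B", "state agency",   "coastal hazards"),
    ("C", "nonprofit",      "public access"),
    ("D", "local business", "coastal habitat"),
    ("E", "academic",       "coastal water quality"),
    ("F", "local business", "government coordination"),
    ("G", "nonprofit",      "coastal community development"),
]

def stratified_sample(records, seed=0):
    """Pick one stakeholder per focus area, chosen at random within each
    stratum so that inclusion does not depend on who compiled the list."""
    rng = random.Random(seed)
    by_focus_area = {}
    for record in records:
        by_focus_area.setdefault(record[2], []).append(record)
    return [rng.choice(group) for group in by_focus_area.values()]

for name, category, focus_area in stratified_sample(stakeholders):
    print(f"{name}: {category} ({focus_area})")
```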
Some NOAA specialists who contribute to the selection process, however, are not regionally located or have worked with a state program for only a short time; their knowledge or experience to inform the selection may therefore be limited. NOAA's evaluation guidance recognizes the need to assess its revised process in the future and states that the agency plans to evaluate the effectiveness and efficiency of its revised state program evaluation process after conducting 8 to 10 evaluations.

We found that in managing the CZMP, NOAA makes limited use of the performance information it collects. Our past work has found that performance information can be used across a range of management functions to improve programs and results, including to (1) identify problems or weaknesses in programs and take corrective actions, (2) set program priorities and develop strategies, (3) recognize and reward organizations that meet or exceed expectations, and (4) identify and share effective approaches to program implementation. For example, our previous work found that the Department of Labor effectively used performance measure data to identify the technical assistance needs of state programs and then provide assistance to try to improve performance. The department also used performance measure data as a basis for providing financial incentives to state programs that receive federal grants. We found that agencies realize the full benefit of collecting performance information only when they use such information to make decisions designed to improve results.

NOAA collects performance information through its CZMP performance measurement system, state program evaluations, and other sources, but we found that the agency generally does not use the information it collects to help manage the CZMP at a national level. Specifically, we found the following:

NOAA uses its CZMP performance measurement system data to report on national program accomplishments on a limited basis. In particular, in 2013, NOAA produced one report summarizing performance measurement system data from 2008 through 2011. However, NOAA has not published additional similar reports and has not used performance measurement system data for other purposes. For example, the agency has not used the performance measurement system data to identify potential problems or weaknesses in the CZMP, set program priorities or strategies, or recognize and reward high-performing state programs—which may limit the usefulness of collecting such data.

NOAA does not use its state program evaluations to assess the performance or improve the implementation of the CZMP at the national level. NOAA uses its state program evaluations to identify state-specific accomplishments and to encourage or require the state under evaluation to make improvements or take corrective actions. But, according to NOAA officials, the agency does not regularly analyze findings from individual state evaluations to identify and share effective approaches across states or to identify common performance weaknesses that may warrant national focus or assistance. Our analysis of recent NOAA evaluations of the seven state programs we reviewed found that NOAA recommended the states undertake similar actions. In five of the seven state program evaluations, for example, NOAA recommended that programs undertake strategic planning, and for four of the seven programs, NOAA recommended that programs improve their coordination with local governments or other partners who help carry out coastal management activities.
Yet NOAA has not analyzed these evaluations to identify common findings. One NOAA specialist we spoke with suggested that NOAA could also use the results of its state program evaluations to recognize and reward high-performing state programs. For instance, the specialist suggested that NOAA could modify the eligibility requirements for its Projects of Special Merit funding such that only high-performing programs, with any necessary actions from past state program evaluations fully implemented, would be eligible to receive funding.

NOAA does not use performance-related information from other sources to support its management of the CZMP. NOAA uses state programs' semiannual progress reports—which contain, among other things, "success stories," or examples of a state program successfully addressing coastal management issues—to track states' progress in implementing their cooperative agreements. However, NOAA does not use information from these reports to identify and promote effective approaches to coastal management by regularly sharing states' success stories across states or with other stakeholders. The 2011 performance measurement system workgroup composed of NOAA and state program officials recommended that NOAA develop a website to share success stories on an annual basis. NOAA did not implement this recommendation because, according to NOAA officials, at that time it was incorporating success stories into a quarterly newsletter. According to a NOAA document, the agency produced the newsletter in response to requests from states for more information about how other state programs address coastal management issues. NOAA stopped issuing this newsletter in 2012, when its office merger began, and NOAA officials said they are now evaluating how the merged office might best share information about the CZMP across state programs and with other stakeholders.

NOAA's strategic plan for its merged coastal management office recognizes the importance of using and reporting performance information. According to this plan, NOAA is committed to maintaining a culture of monitoring and evaluation to improve the implementation of its programs. We found, however, that the strategic plan does not include a documented strategy for using the performance data NOAA collects through its CZMP performance measurement system, state program evaluations, or other sources of information, such as states' semiannual progress reports, to manage the CZMP. NOAA officials told us that because the office merger is under way, they have not formulated a strategy for how the merged office will use performance data to inform and manage the CZMP, but they recognized the need to do so once the merger is complete. Federal internal control standards state the need for federal agencies to document management approaches to ensure goals and objectives can be met. Without a documented strategy for using the full range of performance information it collects, NOAA may not be taking full advantage of the performance information that its specialists, evaluators, and state program officials spend time and resources collecting, and it cannot ensure that it is realizing the full benefit of collecting such information, such as identifying common problems in state programs and taking corrective actions, setting national program priorities and developing strategies, recognizing state programs that exceed expectations, or identifying and sharing effective approaches to program implementation.
Finally, NOAA has not taken steps to integrate data from its CZMP performance measurement system with information from its state program evaluations to develop a complete picture of the CZMP's performance, as we recommended in our 2008 report. In 2008, we found that NOAA was not integrating quantitative national performance measure data with qualitative information from state program evaluations to develop a more comprehensive assessment of the CZMP's performance. NOAA agreed with our recommendation to develop an approach for integrating the two types of information and, in response, tasked the 2011 performance measurement system workgroup with developing a method for better communicating performance measure data. The workgroup recommended a template for communicating program results that includes quantitative national performance measure data and qualitative success stories from states' semiannual progress reports. However, NOAA has not drawn on this quantitative and qualitative information for purposes other than producing a report in 2013 summarizing performance measurement system data. Specifically, NOAA has not integrated quantitative and qualitative information to better understand program performance, improve its assessment of difficult-to-measure activities, or validate its assessments of program progress. We have previously found that agencies that used multiple sources of data to assess performance had information that covered more aspects of program performance than those that relied on a single source. We also found that agencies can improve their performance assessments by using program evaluation information to validate performance measurement system data. We continue to believe that developing an approach to combine performance information from its CZMP performance measurement system and state program evaluations could help NOAA obtain a more complete picture of CZMP performance. The CZMP plays an integral role in helping states protect, restore, and manage the development of the nation's coastal resources and habitats. In managing the CZMP, NOAA is challenged with the task of assessing the performance of a program composed of partnerships with 34 individual states, each with unique coastal habitats and differing laws, organizational structures, and funding priorities. NOAA is to be commended for its progress in improving its two primary performance assessment tools—its CZMP performance measurement system and state program evaluations—since we last reviewed the agency's performance assessment processes in 2008. We are encouraged by NOAA's recognition of the importance of using performance information to improve the implementation of the CZMP. However, NOAA makes limited use of, and does not have a documented strategy for how it will use, the performance information it collects from its CZMP performance measurement system, state program evaluations, or other sources of performance-related information, as appropriate, to aid its management of the CZMP. Without a documented strategy for using the range of its performance information, NOAA cannot ensure that it is collecting the most meaningful information and realizing the full benefit of the significant amount of information it and the states collect, such as identifying common problems in state programs and taking corrective actions, setting national program priorities and developing strategies, recognizing state programs that exceed expectations, or identifying and sharing effective approaches to program implementation.
We also are encouraged by NOAA's intentions to review and possibly revise the CZMP performance measurement system once its new coastal office is in place, but the agency has yet to document the approach it plans to take—including the scope and criteria it will use for this effort. In the absence of a documented approach indicating how it will review its performance measurement system, NOAA cannot ensure that its upcoming effort will take into consideration key attributes of successful performance measures, including balance and limited overlap, or result in a system that provides meaningful information that can be used by NOAA to determine how effectively the CZMP is performing relative to its goals. We are further encouraged by NOAA's commitment to evaluate the effectiveness and efficiency of its revised state program evaluation process and to modify it, as needed, as it moves forward with its implementation. In the interim, however, NOAA's method for selecting stakeholders to survey during state program evaluations—which relies on a single criterion and on state program officials who have a vested interest in the program—may result in the collection of incomplete or biased information that does not ensure perspectives are gathered from stakeholders representing a variety of program goals and are collected in an objective manner, potentially undermining the sufficiency and credibility of the data the process produces. In the absence of additional criteria for selecting stakeholders to survey, NOAA may select a sample of stakeholders whose work with a state program does not span the act's six focus areas or who present less-than-objective assessments of a state program. To ensure that NOAA collects and uses meaningful performance information to help manage the CZMP, including continuing to improve its CZMP performance measurement system and its state program evaluations, we are recommending that the Secretary of Commerce direct the Administrator of NOAA to take the following three actions:

Develop a documented strategy to use the range of performance information the agency collects, as appropriate, to aid its management of the CZMP, such as to identify potential problems or weaknesses in the CZMP; set program priorities or strategies; or recognize and reward high-performing state programs.

As part of its intended review of the CZMP performance measurement system, and in consideration of how it intends to use the performance information, document the approach it plans to take to analyze and revise, as appropriate, the performance measures, and in so doing ensure the analysis considers key attributes of successful performance measures, such as balance and limited overlap.

Revise the sampling methodology for selecting stakeholders to survey—included as part of its state program evaluation process—to ensure perspectives are gathered from stakeholders representing a variety of program goals and are collected in an objective manner.

We provided a draft of this report to the Department of Commerce for review and comment. In written comments provided by NOAA through Commerce (reproduced in appendix IV), NOAA generally agreed with our findings and concurred with our recommendations. NOAA also provided technical comments that we incorporated, as appropriate.
In its comment letter, NOAA stated that while it found GAO's evaluation of the CZMP performance measurement system accurate, the agency did not agree with GAO's assessment that eliminating a stand-alone category for coastal water quality could negatively affect the system's ability to reflect the goals of the CZMA in a balanced way. NOAA stated that removal of the coastal water quality focus area did not impair its ability to track progress in meeting the water quality goal of the CZMA, explaining that it retained one measure composed of two data elements related to coastal water quality, but housed under a different focus area. We agree that the two-part measure NOAA maintained related to coastal water quality may provide important information on performance in this area. However, we continue to believe that the information it is collecting related to coastal water quality may not be balanced in comparison to the information it is collecting for the other five focus areas, which could in turn result in inconsistent performance information when looking across the six focus areas of the program. NOAA concurred with the three recommendations in the report and described actions it plans to take to address them. With regard to the first recommendation, NOAA stated that it plans to develop a strategy for using performance information it collects, including information from its performance measurement system, evaluations of state programs, performance reports, and other sources, and noted that it will build upon existing efforts to share lessons learned regarding successful approaches or shared challenges across the national program. In addressing our second recommendation, on documenting its approach for analyzing and revising, as appropriate, the performance measures, NOAA stated that it plans to conduct a review of CZMP performance measures in fiscal year 2015 as part of its ongoing analysis of performance measures for programs under its new coastal office. In response to our third recommendation, NOAA stated that it will revise its sampling methodology to ensure stakeholders representing a variety of program goals are selected. We are sending copies of this report to the Secretary of Commerce, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Focusing on National Coastal Zone Management Program (CZMP) activities since our 2008 report, our objectives were to examine (1) how participating states allocated CZMP funds awarded in fiscal years 2008 through 2013 and (2) how the National Oceanic and Atmospheric Administration's (NOAA) primary performance assessment tools have changed and the extent to which NOAA uses performance information to help manage the CZMP. To examine how participating states allocated CZMP funds awarded in fiscal years 2008 through 2013, we reviewed the Coastal Zone Management Act and related regulations and guidance, including NOAA funding guidance and allocation memorandums.
We analyzed NOAA data on federal funds awarded by state and by funding type from fiscal years 2008 to 2013, and we compared these data against annual NOAA funding guidance and allocation memorandums to states. Based on our analysis and interviews with NOAA officials, we found the data to be sufficiently reliable. We reviewed NOAA's analysis of states' allocations of CZMP funding for fiscal years 2008 through 2013, which was based on NOAA's review of its cooperative agreements for federal funding with states. NOAA's analysis involved the categorization of states' funding allocations for projects into six focus areas based on the goals of the act and an additional state program management category as defined by NOAA to cover administrative costs, such as general program operations, supplies, and rent. NOAA officials noted that total funding allocation amounts are approximate and that many CZMP-funded activities could address more than one focus area. For example, Maine state program officials told us their activities to conserve and enhance properties that provide commercial fishing access address both the coastal community development and public access focus areas. To address this challenge, NOAA developed written guidance for the NOAA specialists who conduct the analysis that specifies the types of activities to include in each focus area and the state program management category, as well as direction on how to categorize funds in cases where a project or activity may fall in more than one category. For instance, NOAA defined funds in the government coordination focus area to include, among others, activities that involved coordination with other government agencies and stakeholders, technical assistance to local governments, or public outreach and education activities only if they did not correspond to other focus areas. To determine the reliability of NOAA's analysis, we interviewed knowledgeable NOAA officials and reviewed NOAA's process for categorizing proposed activities and projects, including its written guidance on categorizing CZMP-funded activities and its steps to compare funding amounts to ensure that double-counting of funds did not take place. We did not independently verify the results of NOAA's analysis, but we verified major categories used in NOAA's analysis for consistency across years, checked the total allocated funds in NOAA's analysis against total federal funding award data, and reviewed NOAA's categorization of a small sample of projects. We concluded that the data were sufficiently reliable for our purpose of reporting states' allocated uses of CZMP funds. We also reviewed data from NOAA's CZMP performance measurement system from 2008 through 2013 (the most recent years for which data were available) to further illustrate how CZMP funds were used. To assess the reliability of NOAA's CZMP performance measurement system data, we interviewed NOAA officials about the reliability of the data and reviewed corresponding documentation, including performance measures guidance to states and internal guidance to NOAA specialists about their required reviews of data submitted. We did not independently verify performance measure data submitted by state programs, but based on our review of steps taken by NOAA to review state-submitted data, we found the data sufficiently reliable for the purposes of our report.
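To illustrate the totals reconciliation described above, the following is a minimal sketch in Python; the focus area names and all dollar figures are hypothetical and for illustration only, not NOAA data.

# Minimal sketch of the totals reconciliation described above: categorized
# allocations should sum to the total federal award, with no project amount
# counted in more than one category. All figures are hypothetical.

award_total = 2_500_000  # total federal funding award for one state (hypothetical)

allocations = {
    "coastal habitat": 600_000,
    "coastal water quality": 250_000,
    "coastal hazards": 300_000,
    "public access": 200_000,
    "coastal community development": 350_000,
    "government coordination": 400_000,
    "state program management": 400_000,
}

categorized_total = sum(allocations.values())
difference = award_total - categorized_total

print(f"Categorized total: ${categorized_total:,}")
print(f"Award total:       ${award_total:,}")

# A nonzero difference flags either an uncategorized project or the
# double-counting of funds across categories.
if difference == 0:
    print("Totals reconcile; no double-counting indicated.")
else:
    print(f"Discrepancy of ${difference:,}; review project categorizations.")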
To examine how NOAA’s primary performance assessment tools have changed since 2008, and the extent to which NOAA uses performance information to help manage the CZMP, we analyzed applicable laws and guidance including the act, and NOAA’s guidance on its CZMP performance measurement system and state program evaluations. We reviewed documentation on changes NOAA has made to these two performance tools, including steps taken to address our 2008 report recommendations, and we interviewed NOAA officials about the changes they made and their use of performance information. We reviewed GAO’s work on performance measurement to identify key attributes associated with successful performance measures and assessed NOAA’s CZMP performance measurement system against these attributes by reviewing the agency’s performance measures and guidance on the system and interviewing NOAA and state program officials. We also analyzed NOAA’s CZMP performance measurement system data from 2011, 2012, and 2013. We reviewed our and others’ work on program evaluations to identify standards for strong evaluation design and assessed NOAA’s process for evaluating state coastal programs against these standards by examining NOAA’s evaluation guidance and interviewing NOAA officials. We examined information NOAA maintains on CZMP performance including fact sheets, states’ cooperative agreements, semiannual progress reports, performance measurement system data submitted by states, and state program evaluation reports. In conducting our work on both objectives, we interviewed representatives of the Coastal States Organization, a nonprofit organization that represents coastal states on legislative and policy issues, as well as state program officials from the seven states that received the most fiscal year 2012 CZMP funding in each of NOAA’s seven regions (California, Florida, Hawaii, Maine, Michigan, Texas, and Virginia) about how states used CZMP funds and for their perspectives on NOAA’s management and assessment of the overall national program. We also reviewed the seven states’ cooperative agreements and semiannual progress reports for fiscal years 2011 and 2012 (the most recent years for which reports were available) to learn about projects undertaken by these seven states. We selected one CZMP-funded project in each of the seven states to further determine and illustrate how states used funds on a project-level basis and to learn about how the results of a select project are captured by NOAA’s performance assessment tools. In selecting projects to review, we considered the amount of CZMP funds allocated to specific projects, funding type, project type (e.g., projects that provide financial and technical assistance to local governments, planning projects, construction-related projects, permitting activities), and focus area (e.g., coastal habitat, government coordination). Our review of the states’ information cannot be generalized across all states or projects. We also interviewed coastal program officials from American Samoa and the Northern Mariana Islands to obtain perspectives from territories on NOAA’s performance assessment tools and territories’ use of this performance information. We conducted two site visits to observe and learn more about CZMP projects—one to a coastal habitat restoration project in Texas and one to an ocean planning project in Virginia. We selected these projects for site visits considering project type, focus area addressed, and geographic location. 
During our site visits, we met with state program officials and also interviewed stakeholders involved in the selected projects, as well as stakeholders involved in other CZMP-funded projects. In Texas, we met with the nonprofit organization managing the coastal habitat restoration project and toured the restoration site; in Virginia, we visited a public access enhancement project that received CZMP funding. We conducted this performance audit from June 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The National Oceanic and Atmospheric Administration's (NOAA) CZMP performance measurement system is organized by broad focus areas that are related to five of the six primary focus areas based on the goals of the CZMP as outlined in the Coastal Zone Management Act. The system consists of 17 performance measures—15 of the 17 measures are organized under the five broad focus areas (NOAA removed the sixth focus area, coastal water quality, from its performance measurement system in 2011 in response to a performance measurement system workgroup's recommendation to streamline the system), and the remaining 2 measures are to track state financial expenditures. Each of the 17 measures is composed of several individual data elements. For example, the performance measure on federal consistency is composed of two data elements that track the number of projects reviewed and the number of projects modified under states' federal consistency review processes. In addition, some data elements are further broken down into specific categories, such as types of federal consistency projects modified. See table 5 for a list of the performance measures and supporting data elements and categories, as reported by participating state programs for 2011 through 2013.

The key attributes of successful performance measures, their definitions, and the potentially adverse consequences of not meeting each attribute are as follows:

Linkage. Definition: Measure is aligned with division and agency-wide goals and mission and clearly communicated throughout the organization. Potentially adverse consequence: Behaviors and incentives created by measures may not support achieving division or agency-wide goals or mission.

Clarity. Definition: Measure is clearly stated and the name and definition are consistent with the methodology used to calculate it. Potentially adverse consequence: Data may confuse or mislead users.

Measurable target. Definition: Measure has a numerical target. Potentially adverse consequence: Managers may not be able to determine whether performance is meeting expectations.

Objectivity. Definition: Measure is reasonably free from significant bias or manipulation. Potentially adverse consequence: Performance assessments may be systematically over- or understated.

Reliability. Definition: Measure produces the same result under similar conditions. Potentially adverse consequence: Reported performance data may be inconsistent and add uncertainty.

Core program activities. Definition: Measures cover the activities that an entity is expected to perform to support the intent of the program. Potentially adverse consequence: Information available to managers and stakeholders in core program areas may be insufficient.

Limited overlap. Definition: Measure provides new information beyond that provided by other data sources. Potentially adverse consequence: Managers may have to sort through redundant, costly information that does not add value.

Balance. Definition: Taken together, measures ensure that an organization's various priorities are covered. Potentially adverse consequence: Measures may overemphasize some goals and skew incentives.
Governmentwide priorities. Definition: Each measure should cover a priority such as quality, timeliness, and cost of service. Potentially adverse consequence: A program's overall success is at risk if all priorities are not addressed.

In addition to the individual named above, Alyssa M. Hundrup (Assistant Director), Elizabeth Beardsley, Mark A. Braza, Elizabeth Curda, John Delicath, Tom James, Katherine Killebrew, Patricia Moye, Dan Royer, Kiki Theodoropoulos, and Swati Sheladia Thomas made key contributions to this report.

The U.S. coast is home to more than half the U.S. population and integral to the nation's economy. Under the Coastal Zone Management Act, NOAA administers the CZMP, a federal-state partnership that encourages states to balance development with protection of coastal zones in exchange for federal financial assistance and other incentives. In 2008, GAO reviewed the CZMP and recommended improvements for CZMP performance assessment tools. A fiscal year 2013 appropriations committee report mandated GAO to review NOAA's implementation of the act. This report examines (1) how states allocated CZMP funds awarded in fiscal years 2008 through 2013 and (2) how NOAA's primary performance assessment tools have changed since GAO's 2008 report and the extent to which NOAA uses performance information in managing the CZMP. GAO reviewed laws, guidance, and performance-related reports; analyzed CZMP funding data for fiscal years 2008-2013; and interviewed NOAA officials and a nongeneralizable sample of officials from seven states selected for receiving the most fiscal year 2012 funding in each of NOAA's regions. During fiscal years 2008 through 2013, the 34 states participating in the National Oceanic and Atmospheric Administration's (NOAA) National Coastal Zone Management Program (CZMP) allocated nearly $400 million in CZMP funds for a variety of activities. States allocated this funding for activities spanning six broad focus areas based on goals outlined in the Coastal Zone Management Act. For example, states allocated about a quarter of their CZMP funding to the coastal habitat focus area, according to NOAA's analysis. Coastal habitat activities encompassed a variety of actions to protect, restore, or enhance coastal habitat areas, such as habitat mapping or restoration planning efforts of marsh habitats for fish and wildlife and enhanced recreational opportunities. NOAA's two primary performance assessment tools—its CZMP performance measurement system and state program evaluations—have limitations, even with changes NOAA made since 2008, and NOAA makes limited use of the performance information it collects. Regarding the performance measurement system, NOAA has made changes such as taking steps intended to improve the reliability of data it collects. However, its current measurement system does not align with some key attributes of successful performance measures, including the following:

Balance: a balanced set of measures ensures that a program's various goals are covered. NOAA removed the coastal water quality focus area, one of six focus areas based on goals in the act, to streamline the performance measurement system. As a result, the system may not provide a complete picture of states' overall performance across all focus areas based on goals in the act.

Limited overlap: measures should produce new information beyond what is provided by other data sources. NOAA's system includes measures that overlap with financial data provided in cooperative agreements.
By requiring states to submit financial data available through other sources, NOAA may be unnecessarily burdening states with data collection requirements. NOAA plans to review and potentially revise its measurement system, but it has not documented the approach it plans to take, including how the measures will align with key attributes of successful performance measures. Regarding state program evaluations, in 2013, NOAA revised its process to conduct evaluations more efficiently, at a reduced cost. However, GAO identified a limitation in NOAA's method for sampling stakeholders to survey under its revised process that may result in the selection of stakeholders that do not span all six focus areas based on goals of the act. Finally, NOAA makes limited use of the performance information it collects from these tools. For example, since it began collecting performance measurement data in 2008, NOAA used the data once to report on accomplishments. NOAA recognizes the importance of using performance information to improve program implementation, but it has not documented a strategy for how it will use its performance information to manage the program. As a result, NOAA may not be realizing the full benefit of collecting performance information. GAO recommends that NOAA document an approach to analyze and revise, as appropriate, its performance measures against key attributes, revise its process for selecting stakeholders to survey in its state program evaluations, and document a strategy for using the performance information it collects. NOAA concurred with the recommendations. |
The National Guard, with its dual federal and state roles, has been in demand to meet both overseas operations and homeland security requirements. Over the last decade, the National Guard has experienced the largest activation of its forces since World War II. At the same time, the Guard's domestic activities have expanded from routine duties, such as responding to hurricanes, to include activities such as helping to secure U.S. borders. Generally, the National Guard can operate in three different statuses: (1) state status—state funded under the command and control of the governor; (2) Title 32 status—federally funded under command and control of the governor (Title 32 forces may participate in law enforcement activities); and (3) Title 10 status—federally funded under command and control of the Secretary of Defense. Forces serving in Title 10 status are generally prohibited from direct participation in law enforcement activities without proper statutory authorization, but they may work to support civilian law enforcement. Although National Guard forces working in support of law enforcement at the southwest land border have been activated under Title 32, the Secretary of Defense has limited their activities with regard to law enforcement. Specifically, these National Guard forces are not to make arrests. Since 2006, the National Guard has supported DHS's border security mission in the four southwest border states (California, Arizona, New Mexico, and Texas) through two missions: Operation Jump Start (June 2006-July 2008) involved volunteers from the border states and from outside the border states; its mission included aviation, engineering, and entry identification, among other missions, according to National Guard officials. Operation Phalanx (July 2010-September 30, 2011) involved volunteer units and in-state units. The Secretary of Defense limited the National Guard mission to entry identification, criminal analysis, and command and control, according to National Guard officials. In addition to the National Guard, DOD provided support at the southwest land border with active duty military forces operating in Title 10 status. While active duty forces are normally prohibited from direct participation in law enforcement, Congress has at times authorized it. For example, §1004 of the National Defense Authorization Act for Fiscal Year 1991, as amended, allows the Secretary of Defense to provide support for the counterdrug activities of any other department or agency of the federal government or of any state, local, or foreign law enforcement agency if certain criteria, set out in the statute, are met. Various factors influence the cost of a DOD role at the southwest land border, such as the scope and duration of the mission. Federal agency officials have cited a variety of benefits from having a DOD role at the southwest land border. The National Defense Authorization Act for Fiscal Year 2011 mandated that we examine the costs and benefits of an increased DOD role to help secure the southwest land border. This mandate directed that we report on a number of steps that could be taken that might improve security on the border, including the potential deployment of additional units, increased use of ground-based mobile surveillance systems, use of mobile patrols by military personnel, and an increased deployment of unmanned aerial systems and manned aircraft to provide surveillance of the southern land border of the United States.
In September 2011, we reported that DOD estimated a total cost of about $1.35 billion for two separate border operations—Operation Jump Start and Operation Phalanx—conducted by National Guard forces in Title 32 status from June 2006 to July 2008 and from June 2010 through September 30, 2011, respectively. Further, DOD estimated that it has cost about $10 million each year since 1989 to use active duty Title 10 forces nationwide, through its Joint Task Force-North, in support of drug law enforcement agencies with some additional operational costs borne by the military services. As we considered the various steps we were directed to address in our report, we found that the factors that may affect the cost of a DOD effort are largely determined by the legal status and the mission of military personnel being used, specifically whether personnel are responding under Title 32 or Title 10 (federal status) of the United States Code. For example, in considering the deployment of additional units, if National Guard forces were to be used in Title 32 status, then the factors that may impact the cost include whether in-state or out-of-state personnel are used, the number of personnel, duration of the mission, ratio of officers to enlisted personnel, and equipment and transportation needs. The costs of National Guard forces working at the border in Title 32 status can also be impacted by specific missions. For example, DOD officials told us that if National Guardsmen were assigned a mission to conduct mobile patrols, then they would be required to work in pairs and would only be able to perform part of the mission (i.e., to identify persons of interest). They would then have to contact the Border Patrol to make possible arrests or seizures because the Secretary of Defense has precluded National Guardsmen from making arrests or seizures during border security missions. Border Patrol agents, however, may individually conduct the full range of these activities, thus making the use of Border Patrol agents for these activities more efficient. At the time of our review, Title 10 active duty military forces were being used for missions on the border, and cost factors were limited primarily to situations whereby DOD may provide military support to law enforcement agencies for counternarcotic operations. Support can include direct funding, military personnel, and equipment. With the estimated $10 million that DOD spends each year for Title 10 active duty forces in support of drug law enforcement agencies nationwide, DOD is able—through its Joint Task Force-North—to support approximately 80 of about 400 requests per year for law enforcement assistance. These funds have been used for activities in support of law enforcement such as operations, engineering support, and mobile training teams. For example, DOD was able to provide some funding for DOD engineering units that constructed roads at the border. While DOD provided the manpower and equipment, CBP provided the materials. In addition, DOD was able to provide some funding for DOD units that provided operational support (e.g., ground-based mobile surveillance units) to law enforcement missions. We also reported on the cost factors related to deploying manned aircraft and unmanned aerial systems. DOD officials did not report any use of unmanned aerial systems for border security missions because these systems were deployed abroad. DOD officials, however, did provide us with cost factors for the Predator and Reaper unmanned aerial systems.
Specifically, in fiscal year 2011, the DOD Comptroller reported that a Predator and a Reaper cost $859 and $1,456 per flight hour, respectively. DOD uses maintenance costs, asset utilization costs, and military personnel costs to calculate these figures. In addition, DOD officials identified other factors that may impact operating costs of unmanned aerial systems, including transportation for personnel and equipment, rental or lease for hangar space, and mission requirements. With regard to manned aircraft, DOD provided cost factors for a Blackhawk helicopter and a C-12 aircraft, which were comparable to the type of rotary and fixed-wing aircraft used by DHS. For example, in fiscal year 2011, DOD reported that a Blackhawk helicopter and a C-12 aircraft cost $5,897 and $1,370 per flight hour, respectively. DOD uses maintenance costs, asset utilization costs, and military personnel costs to develop its flight hour estimates. Furthermore, according to DOD officials, in fiscal year 2011, DOD contracted for a Cessna aircraft with a forward-looking infrared sensor (known as the Big Miguel Program), which cost $1.2 million per year and assisted at the southwest land border. Federal officials cited a variety of benefits from a DOD role to help secure the southwest land border. For example, DOD assistance has (1) provided a bridge or augmentation until newly hired Border Patrol agents are trained and deployed to the border; (2) provided training opportunities for military personnel in a geographic environment similar to combat theaters abroad; (3) contributed to apprehensions and seizures made by Border Patrol along the border; (4) deterred illegal activity at the border; (5) built relationships with law enforcement agencies; and (6) maintained and strengthened military-to-military relationships with forces from Mexico. Specifically with regard to Operation Jump Start (June 2006-July 2008), CBP officials reported that the National Guard assisted in the apprehension of 186,814 undocumented aliens, and in the seizure of 316,364 pounds of marijuana, among other categories of assistance, including rescues of persons in distress and the seizure of illicit currency. Based on these reported figures, the National Guard assisted in 11.7 percent of all undocumented alien apprehensions and 9.4 percent of all marijuana seized on the southwest land border. During the National Guard's Operation Phalanx (July 2010-June 30, 2011), CBP reported that as of May 31, 2011, the National Guard assisted in the apprehension of 17,887 undocumented aliens and the seizure of 56,342 pounds of marijuana. Based on these reported figures, the National Guard assisted in 5.9 percent of all undocumented alien apprehensions and 2.6 percent of all marijuana seized on the southwest land border. In fiscal year 2010, active duty military forces (Title 10), through Joint Task Force-North, conducted 79 missions with 842 DOD personnel in support of law enforcement and assisted in the seizure of about 17,935 pounds of marijuana, assisted in the apprehension of 3,865 undocumented aliens, and constructed 17.26 miles of road, according to DOD officials. With regard to unmanned aerial systems at the time of our report, DOD had fewer systems available, since they were deployed to missions abroad, including operations in Afghanistan, Iraq, and elsewhere.
Moreover, DOD's access to the national airspace is constrained given the safety concerns about unmanned aerial systems raised by the Federal Aviation Administration, specifically the ability of unmanned aerial systems to detect, sense, and avoid an aircraft in flight. We also reported that, conversely, pilots of manned aircraft have the ability to see and avoid other aircraft, and thus may have more routine access to the national airspace. Further, DOD reports that manned aircraft are effective in the apprehension of undocumented aliens. For example, during fiscal year 2011, DOD leased a manned Cessna aircraft (the Big Miguel Program) that was used to assist in the apprehension of at least 6,500 undocumented aliens and the seizure of $54 million in marijuana, as reported to DOD by DHS. A number of challenges exist for both the National Guard and for active-duty military forces in providing support to law enforcement missions on the southwest land border. National Guard personnel involved in activities on the border have been under the command and control of the governors of the southwest border states and have received federal funding in Title 32 status. In this status, National Guard personnel are permitted to participate in law enforcement activities; however, the Secretary of Defense has limited their activities, which has resulted in the inability of the National Guard units to make arrests while performing border security missions. The National Guard mission limitations are based in part on concerns raised by both DOD and National Guard officials that civilians may not distinguish between Guardsmen and active duty military personnel in uniform, which may lead to the perception that the border is militarized. Therefore, all arrests and seizures at the southwest land border are performed by the Border Patrol. Additionally, we found that the temporary use of the National Guard to help secure the border may give rise to additional challenges. For example, we reported that the use of out-of-state Guardsmen for long-term missions in an involuntary status may have an adverse effect on future National Guard recruitment and retention, according to National Guard officials. Finally, CBP officials noted that the temporary nature of National Guard duty at the border could impact long-term border security planning. These impacts are due to difficulties in incorporating the National Guard into a strategic border security plan, given the variety and number of missions that the National Guard is responsible for, including disaster assistance. In meeting with DOD officials, we heard of multiple challenges to providing support to law enforcement missions. Specifically, there are legal restraints and other challenges that active duty forces must be mindful of when providing assistance to civilian law enforcement. For example, the 1878 Posse Comitatus Act, 18 U.S.C. §1385, prohibits the direct use of Title 10 (federal) forces in domestic civilian law enforcement, except where authorized by the Constitution or an act of Congress. However, Congress has authorized military support to law enforcement agencies in specific situations such as support for the counterdrug activities of other agencies. DOD further clarifies restrictions on direct assistance to law enforcement with its guidance setting out the approval process for Title 10 forces providing operational support for counternarcotic law enforcement missions.
According to this guidance, a Deputy Secretary of Defense memorandum, Department Support to Domestic Law Enforcement Agencies Performing Counternarcotic Activities (October 2, 2003), the request of law enforcement agencies for support must meet a number of criteria, including that the mission must:

Have a valid counterdrug nexus.

Have a proper request from law enforcement (the request must come from an appropriate official, be limited to unique military capabilities, and provide a benefit to DOD or be essential to national security goals).

Provide a training opportunity to increase combat readiness.

Improve unit readiness or mission capability.

Avoid the use of Title 10 forces (military services) for continuing, ongoing, long-term operation support commitments at the same location.

Given the complexity of legal authorities and policy issues related to DOD providing support to law enforcement and the number of DOD entities that must approve a support mission by Title 10 forces, it can take up to 180 days to obtain final approval from the Office of the Secretary of Defense to execute a mission in support of law enforcement. While supporting law enforcement, DOD may be subject to certain limitations. For example, one limitation is that DOD units working on border missions cannot carry loaded weapons. Instead, DOD units working on the border rely on armed Border Patrol agents, who are assigned to each military unit to provide protection. In addition, we reported in September 2011 that DOD's operational tempo may impact the availability of DOD units to fill law enforcement support missions. While some DOD units are regularly available to meet specific mission needs at the border (e.g., mechanized units to construct roads), other DOD units (e.g., ground-based surveillance teams) are deployed or may be deployed abroad, making it more difficult to fulfill law enforcement requests at any given time. Further, DOD officials we spoke with also raised information-sharing challenges when providing support to law enforcement missions. For example, DOD officials commented that because there are different types of law enforcement personnel that use information differently (e.g., make an immediate arrest or watch, wait, and grow an investigation leading to a later arrest), it was sometimes difficult for DOD to understand whether information sharing was a priority among law enforcement personnel. DOD officials also noted that a lack of security clearances for law enforcement officials affects DOD's ability to provide classified information to CBP. During our examination of an increased role for DOD at the southwest land border, agency officials we spoke with raised a number of broader issues and concerns surrounding any future expansion of such assistance. Agency officials identified four areas of concern:

DOD officials expressed concerns about the absence of a comprehensive strategy for southwest border security and the resulting challenges to identify and plan a DOD role.

DHS officials expressed concerns that DOD's border assistance is ad hoc in that DOD has other operational requirements. DOD assists when legal authorities allow and resources are available, whereas DHS has a continuous mission to ensure border security.

Department of State and DOD officials expressed concerns that greater or extended use of military forces on the border could create a perception of a militarized U.S.
border with Mexico, especially when Department of State and Justice officials are helping support civilian law enforcement institutions in Mexico to address crime and border issues.

Federal Aviation Administration officials, who are part of the Department of Transportation, stated that they are concerned about safety in the national airspace, due to concerns about the ability of unmanned aerial systems to detect, sense, and avoid an aircraft in flight. The Federal Aviation Administration has granted DHS authority to fly unmanned aerial systems to support its national security mission along the U.S. southwest land border, and is working with DOD, DHS, and the National Aeronautics and Space Administration to identify and evaluate options to increase unmanned aerial systems access in the national airspace.

We did not make any recommendations in our September 2011 report. Chairman Miller, Ranking Member Cuellar, and Members of the Subcommittee, this concludes my prepared statement. I am pleased to answer any questions that you may have at this time. For future questions about this statement, please contact me on (202) 512-4523 or [email protected]. Individuals making key contributions to this statement include Mark Pross, Assistant Director; Yecenia Camarillo; Carolynn Cavanaugh; Nicole Willems; Lori Kmetz; Charles Perdue; Richard Powelson; Terry Richardson; and Jason Wildhagen.

Border Security: Additional Steps Needed to Ensure That Officers Are Fully Trained. GAO-12-269. Washington, D.C.: December 22, 2011.
U.S. Customs and Border Protection's Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011.
Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.
Observations on the Costs and Benefits of an Increased Department of Defense Role in Helping to Secure the Southwest Land Border. GAO-11-856R. Washington, D.C.: September 12, 2011.
Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011.
Secure Border Initiative: Controls over Contractor Payments for the Technology Component Need Improvement. GAO-11-68. Washington, D.C.: May 25, 2011.
Southwest Border: Border Patrol Operations on Federal Lands. GAO-11-573T. Washington, D.C.: April 15, 2011.
Border Security: DHS Progress and Challenges in Securing the U.S. Southwest and Northern Borders. GAO-11-508T. Washington, D.C.: March 30, 2011.
Border Security: Preliminary Observations on the Status of Key Southwest Border Technology Programs. GAO-11-448T. Washington, D.C.: March 15, 2011.
Moving Illegal Proceeds: Opportunities Exist for Strengthening the Federal Government's Efforts to Stem Cross-Border Currency Smuggling. GAO-11-407T. Washington, D.C.: March 9, 2011.
Border Security: Preliminary Observations on Border Control Measures for the Southwest Border. GAO-11-374T. Washington, D.C.: February 15, 2011.
Border Security: Enhanced DHS Oversight and Assessment of Interagency Coordination Is Needed for the Northern Border. GAO-11-97. Washington, D.C.: December 17, 2010.
Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010.
Moving Illegal Proceeds: Challenges Exist in the Federal Government's Effort to Stem Cross-Border Currency Smuggling. GAO-11-73.
Washington, D.C.: October 25, 2010.
Secure Border Initiative: DHS Needs to Strengthen Management and Oversight of Its Prime Contractor. GAO-11-6. Washington, D.C.: October 18, 2010.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

DHS reports that the southwest border continues to be vulnerable to cross-border illegal activity, including the smuggling of humans and illegal narcotics. Several federal agencies are involved in border security efforts, including DHS, DOD, Justice, and State. In recent years, the National Guard has played a role in helping to secure the southwest land border by providing the Border Patrol with information on the identification of individuals attempting to cross the southwest land border into the United States. Generally, the National Guard can operate in three different statuses: (1) state status, state funded under the command and control of the governor; (2) Title 32 status, federally funded under command and control of the governor; and (3) Title 10 status, federally funded under command and control of the Secretary of Defense. This testimony discusses (1) the costs and benefits of a DOD role to help secure the southwest land border, including the deployment of the National Guard, other DOD personnel, or additional units; (2) the challenges of a DOD role at the southwest land border; and (3) considerations of an increased DOD role to help secure the southwest land border. The information in this testimony is based on work completed in September 2011, which focused on the costs and benefits of an increased role of DOD at the southwest land border. See "Observations on the Costs and Benefits of an Increased Department of Defense Role in Helping to Secure the Southwest Land Border," GAO-11-856R (Washington, D.C.: Sept. 12, 2011). The National Defense Authorization Act for Fiscal Year 2011 mandated that GAO examine the costs and benefits of an increased Department of Defense (DOD) role to help secure the southwest land border. This mandate directed that GAO report on, among other things, the potential deployment of additional units, increased use of ground-based mobile surveillance systems, use of mobile patrols by military personnel, and an increased deployment of unmanned aerial systems and manned aircraft in national airspace. In September 2011, GAO reported that DOD estimated a total cost of about $1.35 billion for two separate border operations, Operation Jump Start and Operation Phalanx, conducted by National Guard forces in Title 32 status from June 2006 to July 2008 and from June 2010 through September 30, 2011, respectively. Further, DOD estimated that it has cost about $10 million each year since 1989 to use active duty Title 10 forces nationwide, through its Joint Task Force-North, in support of drug law enforcement agencies with some additional operational costs borne by the military services.
Agency officials stated multiple benefits from DOD's increased border role, such as assistance to the Department of Homeland Security (DHS) Border Patrol until newly hired Border Patrol agents are trained and deployed to the border; providing DOD personnel with training opportunities in a geographic environment similar to current combat theaters; contributing to apprehensions and seizures and deterring other illegal activity along the border; building relationships with law enforcement agencies; and strengthening military-to-military relationships with forces from Mexico. GAO found challenges for the National Guard and for active-duty military forces in providing support to law enforcement missions. For example, under Title 32 of the United States Code, National Guard personnel are permitted to participate in law enforcement activities; however, the Secretary of Defense has precluded National Guard forces from making arrests while performing border missions because of concerns raised about militarizing the U.S. border. As a result, all arrests and seizures at the southwest border are performed by the Border Patrol. Further, DOD officials cited restraints on the direct use of active duty forces operating under Title 10 of the United States Code in domestic civilian law enforcement, as set out in the Posse Comitatus Act of 1878. In addition, GAO has reported on the varied availability of DOD units to support law enforcement missions, such as some units being regularly available while other units (e.g., ground-based surveillance teams) may be deployed abroad, making it more difficult to fulfill law enforcement requests. Federal officials raised a number of broad issues and concerns regarding any additional DOD assistance in securing the southwest border. DOD officials expressed concerns about the absence of a comprehensive strategy for southwest border security and the resulting challenges to identify and plan a DOD role. DHS officials expressed concerns that DOD's border assistance is ad hoc in that DOD has other operational requirements. DOD assists when legal authorities allow and resources are available, whereas DHS has a continuous mission to ensure border security. Further, Department of State and DOD officials expressed concerns about the perception of a militarized U.S. border with Mexico, especially when Department of State and Justice officials are helping civilian law enforcement institutions in Mexico on border issues.
Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Therefore, it is important for agencies to safeguard their systems against risks such as loss or theft of resources (such as federal payments and collections), modification or destruction of data, and use of computer resources for unauthorized purposes or to launch attacks on other computer systems. Sensitive information, such as taxpayer data, Social Security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for improper or criminal purposes. Critical operations, such as those supporting national defense and emergency services, could be disrupted, or agencies' missions could be undermined by embarrassing incidents, resulting in diminished confidence in their ability to conduct operations and fulfill their responsibilities. Cyber threats to federal systems and critical infrastructures can be unintentional or intentional, targeted or nontargeted, and can come from a variety of sources. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and nontargeted attacks. A targeted attack occurs when a group or individual specifically attacks a critical infrastructure system. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or malware is released on the Internet with no specific target. The Federal Bureau of Investigation has identified multiple sources of threats to our nation's critical information systems, including foreign nation states engaged in information warfare, domestic criminals, hackers, virus writers, and disgruntled employees working within an organization. Table 1 summarizes those groups or individuals that are considered to be key sources of cyber threats to our nation's information systems and infrastructures. As federal information systems increase their connectivity with other networks and the Internet and as system capabilities continue to increase, federal systems will become increasingly vulnerable. Data from the National Vulnerability Database, the U.S. government repository of standards-based vulnerability management data, showed that, as of February 6, 2008, there were about 29,000 security vulnerabilities or software defects that can be directly used by a hacker to gain access to a system or network. On average, close to 17 new vulnerabilities are added each day. Furthermore, the database revealed that more than 13,000 products contained security vulnerabilities. These vulnerabilities become particularly significant when considering the ease of obtaining and using hacking tools, the steady advances in the sophistication and effectiveness of attack technology, and the emergence of new and more destructive attacks. Thus, protecting federal computer systems and the systems that support critical infrastructures has never been more important. Over five years have passed since Congress enacted FISMA, which sets forth a comprehensive framework for ensuring the effectiveness of security controls over information resources that support federal operations and assets.
FISMA's framework creates a cycle of risk management activities necessary for an effective security program, and these activities are similar to the principles noted in our study of the risk management activities of leading private sector organizations—assessing risk, establishing a central management focal point, implementing appropriate policies and procedures, promoting awareness, and monitoring and evaluating policy and control effectiveness. More specifically, FISMA requires the head of each agency to provide information security protections commensurate with the risk and magnitude of harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems used or operated by the agency or on behalf of the agency. In this regard, FISMA requires that agencies implement information security programs that, among other things, include

● periodic assessments of risk;
● risk-based policies and procedures;
● subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate;
● security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency;
● periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually;
● a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies;
● procedures for detecting, reporting, and responding to security incidents; and
● plans and procedures to ensure continuity of operations.

In addition, agencies must develop and maintain an inventory of major information systems that is updated at least annually and report annually to the Director of OMB and several congressional committees on the adequacy and effectiveness of their information security policies, procedures, and practices and compliance with the requirements of the act. OMB and agency IGs also play key roles under FISMA. Among other responsibilities, OMB is to develop policies, principles, standards, and guidelines on information security and is required to report annually to Congress on agency compliance with the requirements of the act. OMB has provided instructions to federal agencies and their IGs for annual FISMA reporting. OMB's reporting instructions focus on performance metrics related to the performance of key control activities such as certifying and accrediting systems, testing and evaluating security controls, and providing security training to personnel. Its yearly guidance also requires agencies to identify any physical or electronic incidents involving the loss of, or unauthorized access to, personally identifiable information. FISMA also requires agency IGs to perform an independent evaluation of the information security programs and practices of the agency to determine the effectiveness of such programs and practices. Each evaluation is to include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency's information systems and (2) assessing compliance (based on the results of the testing) with FISMA requirements and related information security policies, procedures, standards, and guidelines.
These required evaluations are then submitted by each agency to OMB using an OMB-developed template that summarizes the results. In addition to the template submission, OMB encourages agency IGs to provide additional narrative in an appendix to the report to the extent that it provides meaningful insight into the status of the agency's security or privacy program. Federal agencies continue to report progress in implementing key information security activities. The President's proposed fiscal year 2009 budget for IT states that the federal government continues to improve information security performance relative to the certification and accreditation of systems and the testing of security controls and contingency plans. According to the budget, in 2007 the percentage of certified and accredited systems rose from 88 percent to 92 percent. Even greater gains were reported in the testing of security controls (from 88 percent of systems to 95 percent) and in contingency plan testing (from 77 percent to 86 percent). The proposed budget also noted improvements related to agency IG qualitative assessments of certain IT security processes. It reported that the overall quality of the certification and accreditation processes, as determined by agency IGs, increased compared to 2006, with 76 percent of agencies reporting "satisfactory" or better processes, up from 60 percent the prior year. In addition, the budget noted that 76 percent of agencies demonstrated that they had an effective process in place for identifying and correcting weaknesses using plan of action and milestones management processes. Although we have not yet verified the information security performance information for fiscal year 2007 contained in the President's proposed budget, the information is consistent with historical trends. As we reported last year, agencies reported increased percentages in most OMB performance metrics for fiscal year 2006 when compared to fiscal year 2005 (see fig. 1), including those related to the following:
● percentage of employees and contractors receiving IT security awareness training;
● percentage of employees with significant security responsibilities who received specialized security training;
● percentage of systems whose controls were tested and evaluated;
● percentage of systems with tested contingency plans;
● percentage of 24 major agencies with 96-100 percent complete inventories of major information systems; and
● percentage of systems certified and accredited.
However, for the fiscal year 2006 reporting period, IGs identified weaknesses in their agencies' implementations of those key control activities. For example, IGs at five major agencies reported challenges in ensuring that contractors had received security awareness training. In addition, they reported that not all systems had been tested and evaluated at least annually, including some high-impact systems, and that weaknesses existed in agencies' monitoring of contractor systems or facilities. They highlighted other weaknesses, such as contingency plans that had not been completed for critical systems and system inventories that were incomplete. Furthermore, IGs reported weaknesses in agencies' certification and accreditation processes, a key activity OMB uses to monitor agencies' implementation of information security requirements.
Our work and that of the IGs show that significant weaknesses continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. In their fiscal year 2007 performance and accountability reports, 20 of 24 major agencies indicated that inadequate information security controls were either a significant deficiency or a material weakness (see fig. 2). Our audits continue to identify similar conditions in both financial and nonfinancial systems, including agencywide weaknesses as well as weaknesses in critical federal systems. Persistent weaknesses appear in five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Figure 3 shows the number of major agencies with weaknesses in these five areas. A basic management control objective for any organization is to protect the data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion of the data. Access controls, which are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities, can be both electronic and physical. Electronic access controls include the use of passwords, access privileges, encryption, and audit logs. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. Our analysis of IG reports, agency reports, and our own work showed that agencies did not have adequate controls in place to ensure that only authorized individuals could access or manipulate data on their systems and networks. To illustrate, 19 of 24 major agencies reported weaknesses in such controls. For example, agencies did not consistently (1) identify and authenticate users to prevent unauthorized access, (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate, (3) establish sufficient boundary protection mechanisms, (4) apply encryption to protect sensitive data on networks and portable devices, and (5) log, audit, and monitor security-relevant events. Agencies also lacked effective controls to restrict physical access to information assets. We previously reported that many of the data losses occurring at federal agencies over the past few years resulted from physical thefts or improper safeguarding of systems, including laptops and other portable devices. A brief sketch following this paragraph illustrates two of the electronic access controls described here.
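The following minimal, hypothetical sketch illustrates two of the electronic access controls just discussed: enforcing least privilege and logging security-relevant events. The role names, permissions, and function names are illustrative assumptions, not any agency's actual implementation.

```python
# Hypothetical sketch of two electronic access controls discussed above:
# least privilege (roles carry only the permissions their duties require)
# and audit logging of security-relevant events. All names here are
# illustrative assumptions, not an actual agency implementation.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("security.audit")

# Least privilege: each role is granted only what its duties require.
ROLE_PERMISSIONS = {
    "clerk": {"read"},
    "analyst": {"read", "update"},
    "administrator": {"read", "update", "delete"},
}

def check_access(user: str, role: str, action: str, resource: str) -> bool:
    """Allow an action only if the user's role grants it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    # Recording both grants and denials supports the "log, audit, and
    # monitor security-relevant events" control noted in the text.
    audit_log.info("time=%s user=%s role=%s action=%s resource=%s result=%s",
                   datetime.now(timezone.utc).isoformat(), user, role,
                   action, resource, "ALLOW" if allowed else "DENY")
    return allowed

# A clerk may read a record but is denied deletion.
check_access("jdoe", "clerk", "read", "payroll/records")
check_access("jdoe", "clerk", "delete", "payroll/records")
```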
In addition to access controls, other important controls should be in place to protect the confidentiality, integrity, and availability of information. These controls include the policies, procedures, and techniques for ensuring that computer hardware and software are configured in accordance with agency policies and that software patches are installed in a timely manner; for appropriately segregating incompatible duties; and for establishing plans and procedures to ensure continuity of operations for systems that support the operations and assets of the agency. However, agencies did not always configure network devices and services to prevent unauthorized access and ensure system integrity, patch key servers and workstations in a timely manner, or assign incompatible duties to different individuals or groups so that one individual did not control all aspects of a process or transaction. Furthermore, agencies did not always ensure that continuity of operations plans contained all essential information. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of information. An underlying cause of the information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented all of the elements of an agencywide information security program required by FISMA. Such a program provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity's computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Our analysis determined that 19 of 24 major federal agencies had not fully implemented agencywide information security programs. Our recent reports illustrate that agencies often did not adequately design or effectively implement policies for key elements of an information security program. We identified weaknesses in information security program activities, such as agencies' risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. For example,
● One agency's risk assessment was completed without the benefit of an inventory of all interconnections between its systems and other systems. In another case, an agency had assessed and categorized system risk levels and conducted risk assessments but did not identify many of the vulnerabilities we found and had not subsequently assessed the risks associated with them.
● Agencies had developed and documented policies, standards, and guidelines for information security but did not always provide specific guidance for securing critical systems or implement guidance concerning systems that processed Privacy Act-protected data.
● Security plans were not always up to date or complete.
● Agencies did not ensure that all information security employees and contractors, including those with significant information security responsibilities, received sufficient training.
● Agencies had tested and evaluated information security controls, but their testing was not always comprehensive and did not identify many of the vulnerabilities we identified.
● Agencies did not consistently document weaknesses, or the resources needed to correct them, in remedial action plans (a minimal sketch of how such a plan might be structured follows this list).
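The sketch below is a minimal, assumed illustration of how a remedial action plan, known in FISMA practice as a plan of action and milestones, might document weaknesses and the resources needed to correct them. The class and field names are illustrative, not OMB's prescribed format.

```python
# Assumed, minimal sketch of a remedial action plan (a FISMA "plan of
# action and milestones") documenting weaknesses and the resources
# needed to correct them. Field names are illustrative, not OMB's format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RemedialAction:
    weakness: str                # the control weakness being tracked
    system: str                  # the affected system
    resources_required: str      # staff, funding, or tools needed
    milestone_date: date         # scheduled completion date
    completed: bool = False

@dataclass
class RemedialActionPlan:
    agency: str
    actions: list = field(default_factory=list)

    def overdue(self, as_of: date) -> list:
        """Return open weaknesses that are past their milestone date."""
        return [a for a in self.actions
                if not a.completed and a.milestone_date < as_of]

plan = RemedialActionPlan(agency="Example Agency")
plan.actions.append(RemedialAction(
    weakness="Sensitive data on laptops not encrypted",
    system="Field laptop fleet",
    resources_required="Full-disk encryption licenses; two staff",
    milestone_date=date(2008, 9, 30)))
print([a.weakness for a in plan.overdue(date(2008, 10, 1))])
```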
As a result, agencies lack reasonable assurance that controls are implemented correctly, are operating as intended, and are producing the desired outcome with respect to meeting their security requirements, and responsibilities may be unclear, misunderstood, or improperly implemented. Furthermore, agencies may not be fully aware of the security control weaknesses in their systems, leaving their information and systems vulnerable to attack or compromise. Consequently, federal systems and information are at increased risk of unauthorized access to and disclosure, modification, or destruction of sensitive information, as well as inadvertent or deliberate disruption of system operations and services. In prior reports, we and the IGs have made hundreds of recommendations to agencies to address specific information security control weaknesses and program shortfalls. Until agencies fully and effectively implement agencywide information security programs, including acting on those recommendations, federal information and information systems will not be adequately safeguarded against disruption, unauthorized use, disclosure, or modification. The need for effective information security policies and practices is further illustrated by the number of security incidents experienced by federal agencies that put sensitive information at risk. Personally identifiable information about millions of Americans has been lost, stolen, or improperly disclosed, potentially exposing those individuals to loss of privacy, identity theft, and financial crime. Reported attacks and unintentional incidents involving critical infrastructure systems demonstrate that a serious attack could be devastating. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. These incidents illustrate that a broad array of federal information and critical infrastructures are at risk.
● The Department of Veterans Affairs (VA) announced that computer equipment containing personally identifiable information on approximately 26.5 million veterans and active duty members of the military was stolen from the home of a VA employee. Until the equipment was recovered, veterans did not know whether their information was likely to be misused. VA sent notices to the affected individuals explaining the breach and offering advice on steps to reduce the risk of identity theft. The equipment was eventually recovered, and forensic analysts concluded that it was unlikely that the personal information it contained had been compromised.
● The Transportation Security Administration (TSA) announced a data security incident involving approximately 100,000 archived employment records of individuals employed by the agency from January 2002 through August 2005. An external hard drive containing personnel data, such as Social Security numbers, dates of birth, payroll information, and bank account and routing information, was discovered missing from a controlled area at the TSA Headquarters Office of Human Capital.
● A contractor for the Centers for Medicare and Medicaid Services reported the theft of an employee's laptop computer from his office. The computer contained personal information, including the names, telephone numbers, medical record numbers, and dates of birth of 49,572 Medicare beneficiaries.
● The Census Bureau reported 672 missing laptops, of which 246 contained some degree of personal data. Of the missing laptops containing personal information, almost half (104) were stolen, often from employees' vehicles, and another 113 were not returned by former employees. The Commerce Department reported that employees had not been held accountable for failing to return their laptops.
● The Department of State experienced a breach of its unclassified network, which processes about 750,000 e-mails and instant messages daily from more than 40,000 employees and contractors at 100 domestic and 260 overseas locations. The breach involved an e-mail containing what was thought to be an innocuous attachment; in fact, the e-mail contained code that exploited vulnerabilities in a well-known application for which no security patch existed. Because the vendor was unable to expedite testing and deploy a new patch, the department developed its own temporary fix to protect systems from further exploitation. In addition, the department sanitized and rebuilt the infected computers and servers, changed all passwords, installed critical patches, and updated its antivirus software.
● In August 2006, two circulation pumps at Unit 3 of the Tennessee Valley Authority's Browns Ferry nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device.
● Officials at the Department of Commerce's Bureau of Industry and Security discovered a security breach in July 2006. In investigating the incident, officials were able to review firewall logs for an 8-month period prior to its initial detection but were unable to determine how long the perpetrators had been inside the bureau's computers or to find any evidence that data had been lost.
● The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as "Slammer" infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant's process computer failed, and it took about 6 hours for it to become available again.
When incidents such as these occur, agencies are to notify the federal information security incident center, US-CERT. As shown in figure 4, the number of incidents reported by federal agencies to US-CERT has increased dramatically over the past 3 years, rising from 3,634 incidents in fiscal year 2005 to 13,029 in fiscal year 2007 (an increase of about 259 percent). US-CERT categorizes incidents in the following manner:
● Unauthorized access: an individual gains logical or physical access without permission to a federal agency's network, system, application, data, or other resource.
● Denial of service: an attack successfully prevents or impairs the normal authorized functionality of networks, systems, or applications by exhausting resources. This category includes being the victim of, or participating in, a denial of service attack.
● Malicious code: malicious software (e.g., a virus, worm, Trojan horse, or other code-based malicious entity) is successfully installed and infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software.
● Improper usage: a person violates acceptable computing use policies.
● Scans/probes/attempted access: any activity that seeks to access or identify a federal agency computer, open ports, protocols, or services, or any combination of these, for later exploitation. This activity does not directly result in a compromise or denial of service.
● Investigation: unconfirmed incidents involving potentially malicious or anomalous activity deemed by the reporting entity to warrant further review.
As noted in figure 5, the three most prevalent types of incidents reported to US-CERT in fiscal year 2007 were unauthorized access, improper usage, and investigation. In prior reports, GAO and the IGs have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that agencies correct specific information security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, and physical security. We have also recommended that agencies fully implement comprehensive, agencywide information security programs by correcting weaknesses in risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. Effective implementation of these recommendations will strengthen the security posture of these agencies. In addition, recognizing the need for common solutions to improving security, OMB and certain federal agencies have continued or launched several governmentwide initiatives that are intended to enhance information security at federal agencies. These key initiatives are discussed below.
● The Information Systems Security Line of Business: the goal of this initiative is to improve the level of information systems security across government agencies and reduce costs by sharing common processes and functions for managing information systems security. Several agencies have been designated as service providers for IT security awareness training and FISMA reporting.
● Federal Desktop Core Configuration: this initiative directs agencies that have deployed Windows XP, or that plan to upgrade to the Windows Vista operating system, to adopt the security configurations developed by NIST, DOD, and DHS. The goal of this initiative is to improve information security and reduce overall IT operating costs.
● SmartBUY: this program, led by GSA, supports enterprise-level software management through the aggregate buying of commercial software governmentwide, in an effort to achieve cost savings through volume discounts. The initiative was expanded to include commercial off-the-shelf encryption software and to permit all federal agencies to participate in the program; it is also to include licenses for information assurance.
● Trusted Internet Connections initiative: this effort is designed to optimize individual agency network services into a common solution for the federal government by reducing the number of external connections, including Internet points of presence, to a target of 50.
In addition to these initiatives, OMB has issued several policy memorandums over the past two years to help agencies protect sensitive data.
For example, it has sent memorandums to agencies reemphasizing their responsibilities under law and policy to (1) appropriately safeguard sensitive and personally identifiable information, (2) train employees on their responsibilities to protect sensitive information, and (3) report security incidents. In May 2007, OMB issued additional detailed guidance to agencies on safeguarding against and responding to breaches of personally identifiable information. This guidance includes developing and implementing a risk-based breach notification policy, reviewing and reducing current holdings of personal information, protecting federal information accessed remotely, and developing and implementing a policy outlining rules of behavior, as well as identifying consequences and potential corrective actions for failure to follow those rules. Opportunities also exist to enhance policies and practices related to security control testing and evaluation, FISMA reporting, and the independent annual evaluations of agency information security programs required by FISMA.
● Clarify requirements for testing and evaluating security controls. Periodic testing and evaluation of information security controls is a critical element for ensuring that controls are properly designed, operating effectively, and achieving control objectives. FISMA requires that agency information security programs include testing and evaluation of the effectiveness of information security policies, procedures, and practices, and that such tests be performed with a frequency depending on risk, but no less than annually. We previously reported that federal agencies had not adequately designed and effectively implemented policies for periodically testing and evaluating information security controls. Agency policies often did not include important elements for performing effective testing, such as how to determine the frequency, depth, and breadth of testing according to risk. In addition, the testing methods and practices at six test case agencies were not adequate to ensure that assessments were consistent, of similar quality, or repeatable. For example, these agencies did not define the assessment methods to be used when evaluating security controls, did not test controls as prescribed, and did not include previously reported remedial actions or weaknesses in their test plans to ensure that they had been addressed. In addition, our audits of information security controls often identify weaknesses that agency or contractor personnel who tested the controls of the same systems did not identify. Clarifying or strengthening federal policies and requirements for determining the frequency, depth, and breadth of security control testing according to risk could help agencies better assess the effectiveness of the controls protecting the information and systems supporting their programs, operations, and assets.
● Enhance FISMA reporting requirements. Periodic reporting of performance measures for FISMA requirements, and related analyses, provides valuable information on the status and progress of agency efforts to implement effective security management programs. In previous reports, we recommended that OMB improve FISMA reporting by clarifying its reporting instructions and by requesting that IGs report on the quality of additional performance metrics. OMB has taken steps to enhance its reporting instructions. For example, OMB added questions regarding incident detection and assessments of system inventory.
However, the current metrics do not measure how effectively agencies are performing various activities, and they thus offer limited assurance of the quality of the agency processes that implement key security policies, controls, and practices. For example, agencies are required to test and evaluate the effectiveness of the controls over their systems at least once a year and to report on the number of systems undergoing such tests, but there is no measure of the quality of agencies' test and evaluation processes. Similarly, OMB's reporting instructions do not address the quality of other activities, such as risk categorization, security awareness training, intrusion detection and prevention, or incident reporting. OMB has recognized the need for assurance of quality in agency processes. For example, it specifically requested that the IGs evaluate the certification and accreditation process; these qualitative assessments allow each IG to rate its agency's certification and accreditation process as "excellent," "good," "satisfactory," "poor," or "failing." Providing information on the quality of the processes used to implement key control activities would further enhance the usefulness of the annually reported data for management and oversight purposes. We also previously reported that OMB's reporting guidance and performance measures did not provide for complete reporting on certain key FISMA-related activities. For example, FISMA requires each agency to include in its security program policies and procedures that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. In our report on patch management, we stated that maintaining up-to-date patches is key to complying with this requirement, and we recommended that OMB address patch management in its FISMA reporting instructions. Although OMB addressed patch management in its 2004 FISMA reporting instructions, it no longer requests this information. As a result, OMB and the Congress lack information that could identify governmentwide issues regarding patch management, information that could prove useful in demonstrating whether agencies are taking appropriate steps to protect their systems.
● Consider conducting FISMA-mandated annual independent evaluations in accordance with audit standards or a common approach and framework. We previously reported that the annual IG FISMA evaluations lacked a common approach and that their scope and methodology varied across agencies. For example:
● Some IGs stated that they were unable to evaluate their agency's inventory because the information the agency provided at the time was insufficient (i.e., incomplete or unavailable).
● Some IGs reported interviewing officials and reviewing agency documentation, while others indicated that they conducted tests of implementation plans (e.g., security plans).
● Some IGs indicated in the scope and methodology sections of their reports that their reviews focused on selected components, whereas others made no reference to the breadth of their review.
● Some reports consisted solely of a summary of relevant information security audits conducted during the fiscal year, while others included additional evaluation addressing specific FISMA-required elements, such as risk assessments and remedial actions.
● The percentage of systems reviewed varied:
Twenty-two of 24 IGs tested the effectiveness of the information security program on a subset of systems; two IGs did not review any systems.
● One IG noted that the agency's inventory was missing certain web applications and concluded that the inventory was only 0-50 percent complete, although the report also noted that, because of time constraints, the IG had been unable to determine whether other items were missing.
● Two IGs indicated that they based a portion of their template submissions solely on information provided to them by the agency, without conducting further investigation.
As we previously reported, the lack of a common methodology or framework has resulted in disparities in the scope, methodology, and content of the IGs' annual independent evaluations. As a result, the IG community may not be performing its evaluations with optimal effectiveness and efficiency. Conducting the evaluations in accordance with generally accepted government auditing standards, a commonly used framework or methodology, or both could improve effectiveness, efficiency, quality control, and consistency in assessing whether an agency has an effective information security program. IGs could use such a framework to be more efficient by focusing evaluative procedures on areas of higher risk and by following an integrated approach designed to gather evidence efficiently. A documented methodology could also provide quality control by standardizing the approach, helping the IG community apply it consistently. In summary, agencies have reported progress in implementing control activities, but persistent weaknesses in agency information security controls threaten the confidentiality, integrity, and availability of federal information and information systems, as illustrated by the increasing number of reported security incidents. Opportunities exist to improve information security at federal agencies. OMB and certain federal agencies have initiated efforts that are intended to strengthen the protection of federal information and information systems, and opportunities also exist to enhance policies and practices related to security control testing and evaluation and FISMA reporting. Similarly, the statutory requirement for the independent annual evaluations of agency information security programs required by FISMA could be strengthened by requiring IGs to conduct the evaluations in accordance with generally accepted government auditing standards. Until such opportunities are seized and fully exploited and the hundreds of GAO and IG recommendations to mitigate information security control deficiencies and implement agencywide information security programs are fully and effectively implemented, federal information and systems will remain at undue and unnecessary risk. Mr. Chairmen and Members of the Subcommittees, this concludes my statement. I would be happy to answer questions at this time. If you have any questions regarding this report, please contact Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512-6244 or [email protected]. Other key contributors to this report include Nancy DeFranceso (Assistant Director), Larry Crosland, Neil Doherty, Nancy Glover, Rebecca LaPaze, Stephanie Lee, and Jayne Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Information security is especially important for federal agencies, where the public's trust is essential and poor information security can have devastating consequences. Since 1997, GAO has identified information security as a governmentwide high-risk issue in each of its biennial reports to the Congress. Concerned by reports of significant weaknesses in federal computer systems, Congress passed the Federal Information Security Management Act (FISMA) of 2002, which permanently authorized and strengthened information security program, evaluation, and annual reporting requirements for federal agencies. GAO was asked to testify on the current state of federal information security and compliance with FISMA. This testimony summarizes (1) agency progress in performing key control activities, (2) the effectiveness of information security at federal agencies, and (3) opportunities to strengthen security. In preparing for this testimony, GAO reviewed prior audit reports; examined federal policies, guidance, and budgetary documentation; and analyzed agency and inspector general (IG) reports on information security. Over the past several years, federal agencies consistently reported progress in performing certain information security control activities. According to the President's proposed fiscal year 2009 budget for information technology, the federal government continued to improve information security performance in fiscal year 2007 relative to key performance metrics established by the Office of Management and Budget (OMB). The percentage of certified and accredited systems governmentwide reportedly increased from 88 percent to 92 percent. Gains were also reported in testing of security controls - from 88 percent of systems to 95 percent of systems - and for contingency plan testing - from 77 percent to 86 percent. These gains continue a historical trend that GAO reported on last year. Despite reported progress, major federal agencies continue to experience significant information security control deficiencies. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. In addition, agencies did not always manage the configuration of network devices to prevent unauthorized access and ensure system integrity, patch key servers and workstations in a timely manner, assign duties to different individuals or groups so that one individual did not control all aspects of a process or transaction, and maintain complete continuity of operations plans for key information systems. An underlying cause for these weaknesses is that agencies have not fully or effectively implemented agencywide information security programs. As a result, federal systems and information are at increased risk of unauthorized access to and disclosure, modification, or destruction of sensitive information, as well as inadvertent or deliberate disruption of system operations and services. Such risks are illustrated, in part, by an increasing number of security incidents experienced by federal agencies. Nevertheless, opportunities exist to bolster federal information security. 
Federal agencies could implement the hundreds of recommendations made by GAO and IGs to resolve prior significant control deficiencies and information security program shortfalls. In addition, OMB and other federal agencies have launched several governmentwide initiatives that are intended to improve security over federal systems and information. For example, OMB has established an information systems security line of business to share common processes and functions for managing information systems security, and it has directed agencies to adopt the security configurations developed by the National Institute of Standards and Technology and the Departments of Defense and Homeland Security for certain Windows operating systems. Opportunities also exist to enhance policies and practices related to security control testing and evaluation, FISMA reporting, and the independent annual evaluations of agency information security programs required by FISMA.
The Service has a long history of contracting for mail transportation dating back to the beginning of the Post Office in 1775. Since then, the Service has contracted for mail to be carried by steamship, stagecoach, horse, rail, airplane, motor vehicle, boat, snowmobile, and even mule train into the Grand Canyon. In 1845, Congress passed legislation to reduce mail transportation costs by moving from contracts with stagecoach companies to contracts with individuals who transported mail by horseback. The routes these individuals took became known as star routes. Most star route carriers had 4-year contracts and traveled by horse or horse-drawn vehicle until the early 20th century. In 1948, Congress allowed the Postmaster General to renew the 4-year contracts of star route carriers with satisfactory service rather than requiring the contracts to be competitively bid. Between 1960 and 1970, star route miles more than doubled. In the 1970s, star routes officially became known as highway contract routes. There are three types of highway contract routes:
● Transportation routes: contractors transport mail between postal facilities.
● Contract delivery service (CDS) routes (commonly referred to as box routes): contractors deliver mail and provide services similar to those provided by postal rural carriers.
● Combination routes: contractors provide a combination of mail transportation between postal facilities and mail delivery services to individual addresses along their routes.
In 2007, the Service had nearly 17,000 highway contract routes, including 8,968 transportation routes, 6,708 CDS routes, and 1,218 combination routes. The Service also contracts for air mail transportation services with private contractors, including FedEx and seven commercial airlines. In addition, it contracts for mail to be transported by rail and boat. The Service's outsourcing of mail transportation and some delivery services predates the ability of postal employee unions to collectively bargain. The Service established city delivery service, provided by postal letter carriers on designated city routes, in 1863 but did not initially extend delivery service to rural areas. The Service eventually set up permanent rural routes served by postal rural letter carriers in 1902. The 1970 Postal Reorganization Act authorized postal unions to collectively bargain with the Service over employee wages, hours, and other terms and conditions of employment. Subsequently, the unions negotiated for protection from layoffs. The act provides for binding arbitration if an impasse persists 180 days after the start of bargaining, unless the parties agree to an alternate process. Four postal unions represent most non-management postal employees and negotiate for them during collective bargaining:
● The American Postal Workers Union (APWU) represents various employees, including clerks, building and equipment maintenance employees, motor vehicle operators, motor vehicle maintenance employees, and nurses;
● the National Association of Letter Carriers represents carriers who deliver mail on city routes;
● the National Rural Letter Carriers' Association represents carriers who deliver mail on rural routes; and
● the National Postal Mail Handlers Union represents mail handlers who work in postal processing facilities.
The Service has separate bargaining agreements with each union and in 2006 or 2007 signed agreements with all four unions that expire in either 2010 or 2011.
The total number of career postal employees, as well as the number of bargaining unit employees, declined 14 percent from 1998 through 2007, entirely through attrition, as shown in table 1. The Service achieved these reductions without layoffs by improving its operational efficiency and productivity through increased automation and other initiatives. These efficiency gains allowed the Service to operate with fewer employees. According to the Service, there are significant cost advantages to contracting for transportation services. For example, although the service provided by the three types of delivery carriers (city, rural, and contract) is generally similar, the Service states that there are significant cost differences among them, primarily because of differences in the carriers' compensation systems. The systems for city and rural carriers are collectively bargained between the Service and the associated unions. Generally speaking, city carriers are compensated on an hourly basis, which can include overtime; rural carriers are compensated on a salary basis; and contract carriers are compensated according to the terms of their contracts. Similarly, there are significant cost differences between transportation provided by Service employees and by highway contract routes. The Service's fiscal year 2007 costs for delivery and transportation services provided by postal employees and contractors are shown in table 2. Furthermore, in the retail area, the Service establishes contract postal units because they can provide the same service as a post office at less cost, since the Service does not incur the building and operating expenses associated with maintaining post offices. In 2007, 4,026 (11 percent) of the Service's 36,721 post offices, stations, and branches were contract postal units. The Service has no statutory restrictions on the type of work it may outsource, but union collective bargaining agreements impose some limitations. Additionally, statutes and regulations authorize and guide the Service's outsourcing process. The Service follows its normal purchasing procedures for all outsourced services, but additional procedural steps are required in some outsourcing cases to comply with collective bargaining agreements. Through the collective bargaining process, employee unions have reached agreements with the Service that resulted in changes to its outsourcing decision-making process. However, the unions have also grieved a number of the Service's decisions to outsource. Overall, we could not determine the full extent of the Service's outsourcing that has affected bargaining unit work because the Service does not separately track the subset of transportation contracts that affect such work. The Service did provide data related to some of its outsourcing, including in its retail, processing, and delivery functions. Since 1996, the Service has evaluated 46 national-level outsourcing proposals under the requirements of Article 32 and determined that 5 had a significant impact on bargaining unit work. The Service also provided data showing that outsourced delivery service accounts for approximately 2 percent of all deliveries. Outsourcing is accomplished through the Service's purchasing function, and the statutes and regulations that apply to that function also apply to outsourcing. Applicable statutes contain no specific restriction on outsourcing; in particular, 39 U.S.C. § 5005 authorizes the Service to enter into contracts for transportation services.
Additionally, the Service may negotiate or enter into certain contracts without competition. For example, the Service negotiated and awarded a contract without competition to FedEx for air transportation services, and it can renew highway transportation contracts without competition. In addition, Congress has applied to the Service certain purchasing-related requirements that apply to other federal government agencies but not to private entities. For example, the Service Contract Act of 1965 requires some Service contractors to pay minimum prevailing wages and benefits to employees. Collective bargaining agreements with the postal employee unions may impose limitations on outsourcing in certain areas. For example, the most recent City Carriers' collective bargaining agreement restricts the Service from outsourcing delivery services in areas where only city carriers provide mail delivery. Similarly, APWU's collective bargaining agreement restricts some contracts for custodial services based on the size of the area to be maintained. The Service's collective bargaining agreements also each contain a provision, Article 32, that establishes certain procedural requirements the Service must follow when making an outsourcing decision but that does not, according to the Service and the unions, restrict the type of work that can be outsourced. For example, Article 32 requires the Service to evaluate how outsourcing proposals would affect bargaining unit employees and, under certain circumstances, to notify the unions of its intent to consider outsourcing and allow the unions to have input into the decision-making process. According to the Service, neither its purchasing regulations nor the collective bargaining agreements restrict or limit a contractor's ability to subcontract work it is contractually required to perform or provide to the Service. However, the Service may include provisions in a contract that govern subcontracting. For example, a contract may require a contractor to notify the Service of its intent to subcontract, thereby allowing the Service to assess the qualifications of the proposed subcontractor using the same criteria used to assess the contractor's qualifications. The Service uses its purchasing process for implementing outsourcing decisions and performs additional steps when required to comply with the procedures in Article 32 of the collective bargaining agreements. Outsourcing is formalized through a contractual relationship between the Service and the service provider, whether the provider is a large corporation or an individual. The Service recently changed its purchasing regulations and procedures to streamline its purchasing process and create a more flexible, efficient, businesslike approach to purchasing. The revised process covers all purchasing, including outsourcing, and is divided into six general steps:
● Identify needs.
● Evaluate sources, which includes developing a request for proposals and soliciting bids.
● Select suppliers, which includes awarding a contract.
● Deliver and receive requirements.
● Measure and manage supply, which includes managing contract performance.
● End of life.
GAO reviewed these changes in a report issued in December 2005 and found them generally consistent with the principles and practices of leading organizations. Accordingly, in this review, we limit our discussion to the portions of the purchasing process that are relevant to outsourcing.
In addition to these purchasing steps, to comply with the Article 32 provisions, the Service must conduct two evaluations of outsourcing initiatives under consideration. First, it must address five factors (public interest, cost, efficiency, availability of equipment, and qualification of employees) when evaluating the need to outsource. Second, it must determine whether the outsourcing will have a "significant impact" on work performed by bargaining unit employees. Service officials told us that when making these evaluations, they do not use formal criteria and that none of the five factors carries more weight than the others. Further, officials said that the five-factor evaluation is similar to the type of analysis performed to make other business decisions and that the collective bargaining agreements contain no specific guidance for performing the evaluations and do not define the term "significant impact." However, the Service said that for a proposal to have a significant impact, at a minimum the initiative must be national in scope. In addition, the Service considers the material aspects of the initiative, including, but not limited to, the number of employees, work hours, and facilities affected; the geographic distribution of the employees and sites affected; and any other factor that provides insight into the particular determination. Further, the Service said that no one factor will necessarily be determinative and that not all factors will necessarily shed light on every project. If the Service determines that outsourcing will have a significant impact, Article 32 contains additional requirements. Although the specific requirements vary by agreement, the Service must always notify the affected union of its intent to consider outsourcing and must consider union input before making a decision. Conversely, if the Service determines that the outsourcing will not have a significant impact, it may still be required to take further actions. Under its agreement with APWU, the Service has certain notification requirements for highway contract routes. In addition, if requested by the City or Rural Carriers, the Service must provide information on contracted delivery routes in certain circumstances. We identified two categories of outsourcing initiatives: (1) outsourcing that was approved at the national level, was unique and infrequent (occurring only five times since 1996), and had a significant impact on bargaining unit work; and (2) outsourcing that was approved at the field level, was purchased frequently and repeatedly, and, according to the Service, did not have a significant impact on bargaining unit work. The Service has established guidelines for evaluating proposed outsourcing initiatives and ensuring that certain factors are considered, such as whether an initiative is consistent with organizational goals, security, and integrity; offers a cost or service advantage; and will maintain quality levels. Additionally, the guidelines provide a framework for complying with Article 32 requirements. Overall responsibility for an outsourcing proposal lies with a sponsor, typically a Service Headquarters vice president. The sponsor's responsibilities include developing and presenting the outsourcing concept, conducting financial and cost analyses, securing the necessary approvals, and ensuring compliance with all labor agreements.
The sponsor presents the proposed outsourcing initiative to a Strategic Initiatives Action Group (SIAG), an internal cross-functional group formed to facilitate concept review and approval and to ensure conformance with Article 32 requirements. The SIAG includes representatives from Service departments, including Labor Relations, Legal, Finance, Operations, Supply Management, and Communications, who assist sponsors of proposed outsourcing initiatives with the various procedural steps required for outsourcing at the national level. The SIAG evaluates the level of impact expected from the proposed outsourcing initiative by scrutinizing the functions to be performed under the initiative. If the SIAG determines that an outsourcing initiative will have a significant impact on work performed by bargaining unit employees, the affected unions must be notified and allowed to provide input into the analysis used to compare performance of the proposed work by postal employees and by a contractor. The final approval of an outsourcing initiative with a significant impact on bargaining unit work, supported by an evaluation of the five factors, must come from an approval board consisting of the Deputy Postmaster General/Chief Operating Officer, the Chief Financial Officer, and the Chief Human Resources Officer. If the SIAG determines that national-level outsourcing will not have a significant impact, the Service will still perform the cost analysis that is part of the normal purchasing process; however, union notification and input are not required. The final decision on a national outsourcing initiative that will not have a significant impact on the bargaining units is based on an evaluation of the five factors mentioned above and is made by management within the group that proposed the initiative. One example of a national-level outsourcing proposal that the Service approved is a proposal to outsource certain functions previously performed by postal employees at air mail centers (AMC) across the country. Under the proposal, the AMC facilities would be closed and the outsourced work would be performed in contractor facilities. In the process of making the outsourcing decision, the Service
● determined that the proposal would have a significant impact on bargaining unit work,
● prepared a comparative analysis to document its consideration of the five factors,
● notified the affected unions at major milestones,
● solicited and incorporated union input or responded in writing as to why specific concerns were not incorporated, and
● decided to proceed with the outsourcing proposal.
In providing input on this AMC proposal, the union disagreed with assumptions underlying the Service's estimates of several factors that could affect the outcome of the analysis, including wage rates and experience levels for both contractors and postal employees and the level of overtime required to perform the job. Most outsourcing is performed at the field level using established processes that include steps to comply with Article 32 requirements. Service officials told us that field-level outsourcing typically involves contract delivery, highway transportation, and custodial and vehicle maintenance services and encompasses thousands of contracts. Generally, outsourcing at the field level does not have a significant impact on bargaining unit work, according to the Service, and consequently does not require consideration of union input.
For example, the Service has an established process for contracting out delivery service that includes consideration of the five factors but does not require union notification or input. However, the City and Rural Carriers' collective bargaining agreements require the Service to provide cost information on contracted delivery routes in certain circumstances if requested by the union. Further, APWU's collective bargaining agreement with the Service contains additional requirements the Service must meet when contracting out for highway transportation services. In this case, the Service has an established process, set forth in a Service handbook, which incorporates the steps required under Article 32 for transportation routes that meet certain criteria. When initially contracting for, or renewing a contract for, a transportation route that meets the criteria, the Service must perform a five-factor evaluation, including a cost comparison, notify the union, and allow the union to have input into the outsourcing decision. The final decision to outsource is made at the field level. Table 3 summarizes the national and field-level outsourcing processes, compares the requirements, and includes examples of outsourcing. With the overall number of deliveries growing by about 1.7 million each year, on average, the Service has taken steps to minimize the impact of these additional deliveries. For example, area and district managers are expected to have a growth planning process in place, conduct cost analyses on the type of delivery to provide, and examine the feasibility of offering CDS service in lieu of rural or city delivery service, consistent with the Service's contractual obligations. However, the Service must consider a variety of factors before assigning new deliveries to a particular type of delivery, including the type of carrier historically used in the area, Article 32 requirements (including cost), and projected population growth. To ensure that these policies are consistently applied and the appropriate factors are considered, the Service introduced a computerized growth management tool. This tool standardizes the process that field officials use to determine whether new deliveries should be assigned to city, rural, or contract carriers. For example, if a new delivery is in an area served by city carriers, the address will likely be assigned to a city carrier route. Conversely, if a new delivery is in an area that does not have existing delivery service, the Service must compare the costs of each delivery type to comply with Article 32. The Postmaster General has testified that the Service has been exploring the expanded use of CDS because it is one of the most cost-effective delivery modes available. Postal employee unions have disagreed with the Service's outsourcing decisions, including its determinations of the impact of proposed outsourcing on bargaining unit work. To address disagreements with the Service, the unions have two options: formally grieving decisions using the process defined in the collective bargaining agreements or addressing concerns in subsequent rounds of collective bargaining. For example, according to a union official, the Mail Handlers grieved the Service's determination that outsourcing the processing of some military parcels did not have a significant impact on bargaining unit work. In this case, the Service had notified the affected union at the local level, but not at the national level.
The grievance was eventually settled by an arbitrator who, according to the Service, decided that the initiative did have a significant impact but did not reverse the Service's outsourcing decision. In another example, APWU grieved the Service's 1991 decision to outsource certain jobs at remote encoding centers, where employees manually read and enter address information for letters that cannot be read by automated mail processing equipment. A 1993 arbitration decision determined that the Service did not violate Article 32 because it had considered the five factors, but the decision also required the Service to offer jobs at these centers to postal employees before contracting out such work. The unions have also addressed concerns about outsourcing through collective bargaining. The City Carriers' current collective bargaining agreement restricts the Service from outsourcing delivery services in areas where only city carriers deliver mail. Previously, APWU and the Mail Handlers agreed in a memorandum of understanding, applicable from 1998 through 2000, to a moratorium on most new, national-level outsourcing that would affect their bargaining units. In addition, the collective bargaining process has resulted in modifications to Article 32 procedures. For example, arbitration proceedings that followed collective bargaining negotiations in 2000 changed Article 32 in each union's collective bargaining agreement to include a provision that the Service would meet with the unions while developing its initial comparative analysis for outsourcing proposals and would include a statement of the unions' views and proposals in that analysis. Overall, the Service could not provide information on the total extent of its outsourcing activities that have affected bargaining unit work because the contracts related to bargaining unit work are not separately tracked. Since 1996, the Service has reviewed 46 outsourcing initiatives using its national-level decision-making process and determined that 5 affected bargaining unit work, primarily in retail and processing functions. The Service approved and implemented all five initiatives but terminated one in 2001. The Service did provide fiscal year 2007 expenditure data related to the remaining four outsourcing initiatives, as indicated in table 4. In addition, the Service provided information on its total contract costs for transportation but could not separately report on the subset of transportation contracts that affect bargaining unit work. The Service's expenditures for transportation contracts totaled about $6.5 billion in fiscal year 2007, or about 8 percent of its total operating expenses, while expenditures for outsourced delivery services totaled about $220 million. Finally, the Service also provided information on the number of deliveries made by contractors. The Service said that, through outsourcing, it seeks to improve its operations and customer service, as well as save money, though not every initiative is expected to achieve every goal. For example, it may be sufficient for a contract to improve service but not save money. Further, the Service said that contractors may operate more efficiently than the Service in a number of ways, including by compensating their employees at lower rates than the Service does and by employing more part-time workers. The Service cited these reasons for outsourcing in each of the five national outsourcing initiatives, as discussed below.
The Service implemented two of the five initiatives that impacted retail and processing functions, Mail Transport Equipment Service Centers and Corporate Call Management, about 10 years ago. The Mail Transport Equipment Service Centers initiative was intended to establish a national network of service centers for processing, repairing, and storing equipment and supplies used to move mail, such as mailbags, trays, and carts. The Service anticipated improving the availability and management of the mail equipment, saving money, and improving efficiency at mail processing facilities by reducing administrative responsibility for mail equipment. The Corporate Call Management initiative was intended to establish national call centers to provide a single toll-free number that would give customers access to postal services and information. The Service anticipated improving customer satisfaction, saving money, and improving efficiency by contracting with companies with demonstrated success in call center operations. According to the Service, these functions were previously decentralized and inefficient when they were performed by postal employees at mail processing plants or at post offices. Two other initiatives, Terminal Handling Services and AMCs, were implemented more recently and impacted work in the processing function. Mail transported by air must be delivered to a departing flight, operated under a contract with either a commercial airline or FedEx, or picked up from an arriving flight. These activities are generally known as terminal handling services. The Service outsourced these services to various suppliers at about 60 airports where FedEx was providing air mail transportation services under a contract with the Service. The Service anticipated saving money, deferring large capital expenditures for facilities and equipment, and improving efficiency by contracting with companies that had demonstrated success in terminal handling operations. Similarly, for the AMC initiatives, the Service outsourced, or plans to outsource, terminal handling services for mail transported by commercial carrier flights at 20 airports. In 2004, the Service had about 70 AMCs located across the country that processed mail arriving at and departing from airports and performed terminal handling operations. However, the need for these functions has decreased over time because of reductions in mail volumes, excess processing capacity at other processing facilities, and a reduction in the number of commercial air carrier contracts. The Service scrutinized the functions performed at each of its AMCs and, as of June 2008, had decided to close 20 and outsource the required terminal handling services, retain 6 and continue operations with postal employees, and close the remaining AMCs. The Service anticipated saving money, closing facilities, and improving efficiency by contracting with companies that had demonstrated success in terminal handling services. The final initiative listed in table 4, Priority Mail Processing Network, was intended to be a pilot project to test whether the Service could improve Priority Mail delivery performance by using a dedicated processing and transportation network. The Service contracted with Emery Worldwide to operate a network of 10 Priority Mail processing centers located along the East Coast. The Service anticipated saving money and improving efficiency by allowing the contractor to structure its workforce outside the rules of the collective bargaining agreements.
The Service ultimately cancelled the contract because of problems with the contractor's performance and cost overruns and brought these functions back in-house to be performed by postal employees. The Service reviewed 41 other national-level outsourcing proposals and determined that they did not have a significant impact on bargaining unit work. According to Service officials, many of these initiatives involved one-time activities, such as preparing sites for installing and testing mail processing equipment—activities required to obtain warranty coverage for the equipment. For example, the Service contracted for site preparation, installation, and testing of equipment for the Automated Package Processing System as part of the deployment of this system. Additionally, the Service contracted with multiple suppliers for inspection, design, and construction services to bring 27,000 leased postal facilities into compliance with the Architectural Barriers Act, which requires equal access for persons with disabilities. See appendix II for a list of the 41 outsourcing proposals that the Service determined did not have a significant impact on bargaining unit work. Finally, the Service explained that it has consistently maintained that contract postal units do not constitute outsourcing because they are not referenced in Article 32 or in any other provision of the collective bargaining agreements. Although the Service contracts for most of its mail transportation needs, only some contracted transportation services affect bargaining unit work and are thus considered outsourcing; however, the Service was not able to determine the actual value or number of these outsourced contracts. The Service moves mail around the country using both contracted services, such as highway contract routes and commercial air carriers, and Service-owned vehicles driven by postal employees. Only a portion of the more than 17,000 contracts for transportation services are subject to the provisions of Article 32 in the Service's collective bargaining agreement with APWU, which represents the postal employees who are truck drivers. As previously discussed, these Article 32 provisions apply only to contracts for highway transportation routes that meet certain criteria: a fixed annual rate contract valued at more than $100,000 per year and no more than 350 miles in round-trip length; an annual or non-annual rate contract whose estimated annual compensation will exceed $45,000; and an operating time of no more than 8 hours. According to the Service, in fiscal year 2007 it spent about $3.15 billion on contracts for highway transportation. However, to obtain the value or number of contracts affected by Article 32, it would be necessary to review each contract to determine the Service's actual costs, which may exceed the contracted costs if the contractor provides, for example, extra trips or services. Data provided by the Service showed that outsourced deliveries represent less than 2 percent of all deliveries. The Service contracts for delivery services, and all delivery contracts have an impact on bargaining unit work and thus constitute outsourcing. The Service delivers mail to residential and business addresses using employees, either city carriers or rural carriers, or CDS contractors. The Service has data on the number of deliveries it makes and the number of delivery routes. In general, the average number of deliveries per route is greatest for city carrier routes and lowest for CDS routes.
Since 1998, the number of deliveries made by all three delivery types (city, rural, and contractor) has increased, but the proportion of deliveries made by contractors has remained about the same, at 2 percent or less, as shown in table 5. However, over the past decade, the number of deliveries grew more for contractors (39 percent) than for city carriers (6 percent) and rural carriers (34 percent). Similar trends are evident in the number and growth of routes. Over the past decade, the proportion of routes serviced by contractors has remained about the same, at less than 3 percent. However, the number of CDS routes grew about 23 percent, while city routes declined by 2 percent and rural routes also grew by 23 percent. A union official has expressed concern that, while contract delivery service accounts for a relatively small percentage of deliveries now, it could expand rapidly because of continued new delivery growth. Table 5 compares the total number and growth of deliveries and routes by type of carrier for 1998 and 2007. The Service evaluates contractors and postal employees using similar suitability and performance standards, while holding them accountable using different management processes. A key concern of some stakeholders who may be affected by the Service's outsourcing decisions is whether contractors and subcontractors must have the same qualifications or meet the same suitability and performance standards as postal employees. To ensure that personnel are suitable to perform postal work, the Service uses similar screening criteria to evaluate both contractors and applicants for postal employment. Likewise, contractors and postal employees performing the same type of work generally have similar performance standards, although the Service manages contractors differently from postal employees. The Service uses similar criteria to evaluate the suitability of both potential contractors and applicants for postal employment. The Service is responsible for ensuring the security and sanctity of the mail and ensuring a safe workplace for its employees. One way the Service meets this responsibility is by evaluating the suitability of potential contractors and applicants for postal employment. To do so, it investigates and verifies their employment, criminal, and driving histories and requires them to undergo an initial screening for drug use. In the fall of 2007, the Service revised its drug screening procedures so that it has similar criteria to evaluate both potential delivery service contractors and applicants for postal employment. To be considered for a contract or postal employment, individuals must meet each of the applicable suitability standards shown in table 6. In general, these suitability standards apply to contractors and their employees or subcontractors who will have access to the mail or to postal facilities. For example, if an individual CDS contractor delegates his or her delivery responsibility to another person, such as a relative or an employee, the contractor is required to notify the Service, and this other person is required to undergo the same screening process. However, not all employees of a contractor are required to be screened. For example, the Service requires companies providing terminal handling services to ensure that all employees who have access to the mail are screened and that measures are in place to limit access to areas where mail is stored or sorted, but it does not require that all employees be screened.
In addition to these initial screening requirements, contractors who come into contact with the mail are periodically re-screened. For example, CDS contractors must be re-screened when their contracts are renewed, typically every 4 years. Once hired, most postal employees are not re-screened, except for holders of commercial driver's licenses. The Service holds both contractors and postal employees to similar performance standards. In many instances, contractors are performing essentially the same work as postal employees. For example, CDS, highway contract route, and custodial contractors perform essentially the same tasks as postal rural carriers, motor vehicle operators, and maintenance employees, respectively. In these cases, where the work is directly comparable, the Service expects the same level of performance regardless of whether the function is performed by a contractor or a postal employee. In other instances, however, the work is not directly comparable, because contractors are performing the work differently from postal employees. A Service official told us that in these outsourcing cases, the Service establishes performance criteria in its contracts, such as processing air mail within specified time frames, to help it achieve overall goals, such as delivering mail on time. For example, the Service contracted with terminal handling service providers for work previously performed by postal employees at AMCs. In its contracts, the Service establishes performance criteria but does not specify how the contractor is to achieve them. To further illustrate these performance standards, we compare below the work performed by CDS contractors and rural carriers, as well as by terminal handling service contractors and AMC employees. Both CDS contractors and rural carriers perform similar delivery service functions and serve similar geographical areas in meeting the requirements of their respective positions. Upon reporting for work, CDS contractors and rural carriers are expected to prepare and sequence the mail for delivery to customers on their routes. Next, the CDS contractors and rural carriers take the mail to their delivery vehicles. Both CDS contractors and rural carriers furnish and maintain the vehicle equipment necessary for mail delivery unless specifically assigned a Service-owned or leased vehicle. CDS contractors and rural carriers then deliver and collect the mail along their assigned routes while meeting predesignated delivery time standards. Finally, CDS contractors and rural carriers return to their respective postal facilities to hand off the mail collected on their routes. Both CDS contractors and rural carriers are expected to follow all traffic safety laws and regulations while ensuring protection of the mail from theft, mishandling, or damage. As previously discussed, the Service outsourced some terminal handling services to contractors. Formerly, postal employees performed these services at selected AMCs across the country. After the terminal handling responsibility moved to contractors, the performance expectations remained the same, but the performance standards differ. Prior to outsourcing, AMC postal employees were expected to carry out activities, such as receiving, sorting, and delivering mail to the appropriate air carrier, to help the Service achieve overall delivery goals. Under the new arrangement, contract workers perform similar activities with similar performance expectations.
Although the Service does not specify the processes that contractors must employ to achieve these expectations, it does set specific performance standards. For example, in one contract, the Service requires that the contractor receive mail from an air carrier and make it available to postal employees within 1 hour. In another contract, the Service requires the contractor to meet an on-time performance standard 98 percent of the time. The Service includes similar performance standards in its contracts with all terminal handling service providers. The Service manages and conducts oversight of both contractors and postal employees, but it uses different mechanisms. The Service manages contractors through the terms of a contract, which generally includes specific performance requirements, while the Service manages postal employees according to established policies in handbooks and provisions in collective bargaining agreements. Both methods provide specific disciplinary procedures, including termination. Although some stakeholders have raised concerns about the extent to which disciplinary problems or criminal behavior may be an issue with contractors, data are not available to allow a comparison between contractors and postal employees. The Service uses contracts to define work requirements and uses performance standards to monitor performance. Common categories for measuring supplier performance are cost, quality, delivery, responsiveness, and technology. To provide oversight, the Service administers contracts using contracting officers and administrative officials. A contracting officer is authorized to award, alter, and terminate contracts and ensures that the contractor provides the services required under the terms of the contract. An administrative official is responsible for ongoing contractor oversight and monitoring, including screening all contractors before hiring, supervising the contractor's operations daily, investigating irregularities and complaints, and recommending the establishment, discontinuance, or modification of existing routes. For example, an administrative official, such as a Postmaster, is required to document highway contract route performance, including contract delivery service, on a daily basis and monitor such metrics as reporting and departure times (for mail delivery) and deviations from the terms of the contract, such as safety deficiencies and operational failures. If the terms of the contract are not being met, the administrative official can take the following disciplinary actions as needed: 1. Review: the official reviews the irregularities, consults with the contractor, and takes appropriate action. 2. Conference: the official arranges a conference with the contractor and contracting officer to discuss irregularities of a serious nature and the need to take immediate corrective action. 3. Written warning: if the conference does not rectify the service problem, the official warns the contractor in writing that the case will be forwarded to the contracting officer if service does not improve within 3 days. 4. Recommendation: if service has not improved within 3 days, the official forwards the case to the contracting officer with recommendations on actions to take, which could include a recommendation to terminate the contract. The contracting officer has the sole authority to terminate the contract if the terms of the contract are not being met.
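This four-step process can be read as a simple ordered escalation. The following is a minimal sketch, not a description of any actual Service system; the step names come from the text above, while the function and parameter names are hypothetical, introduced only for illustration:

```python
from enum import IntEnum
from typing import Optional

class EscalationStep(IntEnum):
    REVIEW = 1           # review irregularities, consult with the contractor
    CONFERENCE = 2       # serious irregularities; immediate corrective action needed
    WRITTEN_WARNING = 3  # service must improve within 3 days
    RECOMMENDATION = 4   # case forwarded to the contracting officer

def next_step(current: EscalationStep,
              service_improved: bool) -> Optional[EscalationStep]:
    """Return the next escalation step, or None if no further step follows.

    Note that only the contracting officer, not the administrative
    official, may ultimately terminate the contract.
    """
    if service_improved:
        return None  # problem resolved; escalation stops
    if current is EscalationStep.RECOMMENDATION:
        return None  # decision now rests with the contracting officer
    return EscalationStep(current + 1)
```

For instance, next_step(EscalationStep.WRITTEN_WARNING, service_improved=False) returns EscalationStep.RECOMMENDATION, mirroring the 3-day rule described above.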
The Service manages employees in accordance with applicable collective bargaining agreements, statutes, regulations, policies, and handbooks. To discipline an employee, a supervisor must follow the disciplinary procedures set forth in each of the respective collective bargaining agreements. The agreements state that no postal employee may be disciplined or discharged except for just cause such as, but not limited to, insubordination, pilferage, intoxication (drugs or alcohol), incompetence, failure to perform work as requested, violation of the terms of the collective bargaining agreement, or failure to observe safety rules and regulations. The collective bargaining agreements set forth disciplinary actions that the Service may take when disciplining a postal employee, including: 1. Discussion: a supervisor discusses minor offenses; these discussions are not disciplinary actions and are not grievable. 2. Letter of warning: a supervisor gives an employee a disciplinary notice in writing explaining the deficiency. 3. Suspension of 14 days or less: an employee may be suspended for up to 14 days. 4. Suspension of more than 14 days or discharge: an employee may be suspended without pay for more than 14 days or may be terminated. 5. Indefinite suspension in a crime situation: an employee may be suspended indefinitely if the Service has reasonable cause to believe the employee committed a crime for which imprisonment could be imposed. 6. Emergency procedure: an employee may immediately be placed in off-duty status when an allegation involves intoxication, pilferage, or failure to observe safety rules and regulations, or where retaining the postal employee on duty may result in damage to USPS property or loss of mail or funds, or where the employee may be injurious to self or others. Any Service disciplinary actions initiated against postal employees are subject to the grievance-arbitration process provided for in their respective collective bargaining agreements. Furthermore, if a disciplinary action is later overturned, the Service may have to reinstate the employee and provide restitution in the form of back pay. Although employee union officials have raised concerns about contractors' trustworthiness, we were not able to compare the extent to which contractors and postal employees have had disciplinary problems, because the Service does not centrally collect data related to disciplinary actions. Unions have cited examples of irresponsible or illegal activity by contractors that resulted in arrests or convictions, which they said could undermine the public's trust in the Service. However, similar postal employee activity has also led to arrests or convictions. Contracting officers document disciplinary actions in individual contractor files; Service personnel likewise document disciplinary actions in records for each postal employee. These employee records are maintained at the facility where the employee works, and any disciplinary documents are removed from an employee's file after 2 years unless the disciplinary case has not yet been resolved or the employee has been cited in subsequent disciplinary actions. However, the Service does not maintain a comprehensive database of actions that have been taken against either contractors or postal employees. For disciplinary actions of a more serious nature, such as criminal investigations, the Service investigates both contractors and postal employees in a similar manner.
For example, the Service treats the theft of mail as a serious matter to be investigated regardless of whether the crime is committed by a contractor or a postal employee. A contractor or postal employee, if found guilty of mail theft, can be fined, imprisoned, or both. The Service lacks information and data about the results of its outsourcing efforts that impact work by its bargaining unit employees, information that could be used to determine the effectiveness of its outsourcing and support future outsourcing in the face of possible challenges. For example, the Service does not know the savings related to its outsourcing efforts because it does not have a process to evaluate the impact of outsourcing or to track actual savings. Postal employee unions have expressed skepticism about the value of outsourcing and have raised questions about the reality of cost savings and the implications of outsourcing for a variety of public policy issues. Without data to demonstrate results, Service management, stakeholders, and Congress are not able to assess the risk and value of outsourcing, accountability for results is limited, and the Service is not able to effectively address union concerns. In addition, the Service may encounter challenges that include resistance from its unions to new outsourcing initiatives and legislation pending in Congress that could limit its ability to outsource. The Service does not collect information about the results or effectiveness of its outsourcing efforts, which could limit its ability to determine whether outsourcing is achieving expected efficiencies and to generate support for future outsourcing efforts. For example, although the Service has processes in place to measure Service-wide performance—to capture savings and to measure operational efficiency improvements—neither of these processes provides information about the effectiveness of individual or aggregate outsourcing efforts or the extent to which outsourcing is contributing to these improvements. Further, the Service has agreed in its collective bargaining agreements to no layoffs of career bargaining unit employees; therefore, when it outsources functions previously performed by postal employees, those employees are generally moved to other positions within the Postal Service, but the specific efficiency gains related to these reassignments are not tracked. The Service does not track savings resulting from its outsourcing efforts and instead tracks savings of all its cost-reduction efforts on an aggregate, Service-wide basis. The Service said it has a process in which savings from all identified cost-reduction efforts, including those involving outsourcing, are removed from the budget during the planning process for the upcoming year in support of its annual $1 billion cost-reduction goal. After those budget reductions are made, if the Service does not exceed its overall expense budget, it considers the overall cost-reduction goal to have been achieved. The Service said that because of the complexity and interrelationship of its many cost-reduction initiatives, it is difficult to track actual cost savings from individual initiatives and that it would require additional resources to isolate the savings from each initiative. The Service achieved its overall annual cost-reduction goal in 4 of the last 5 years but could not determine the specific contribution made by outsourcing efforts. Similarly, the Service does not measure whether outsourcing initiatives have resulted in more efficient operations.
Instead, the Service said it uses a measure called Total Factor Productivity to determine the change in aggregate productivity, which is included in the Service's published quarterly financial statements and its annual report (the general form of such a measure is sketched below). The Service has reported that it has increased its Total Factor Productivity for 8 consecutive years but could not demonstrate the extent to which outsourcing contributed to that improved efficiency. The Service has used a performance measurement approach on a previous outsourcing effort and determined that the effort was not achieving desired results. In the late 1990s, the Service sought to establish, as a pilot test, a separate processing and transportation network for its Priority Mail business segment in order to improve service and to be more competitive in the marketplace. Ordinarily, Priority Mail was processed with other mail and was not achieving the level of service performance the Service desired. The Service awarded a contract worth more than $1.7 billion to Emery Worldwide to create and operate a network of 10 processing centers on the East Coast. In its contract, the Service established specific performance measures, including (1) 95 percent on-time performance for the 2-day delivery of mail and (2) the use of an independently verified contractor reliability index, which measures contractor performance on each of eight quality indicators. In 1999, the U.S. Postal Service Office of Inspector General (OIG) reported that Emery was not achieving the on-time performance goal or other performance measures. The Service eventually cancelled its contract and took over operations of the facilities because Emery was not achieving the desired results. Despite its previous use of such measures, the Service has not used them in its most recent national-level outsourcing effort. In February 2008, the OIG reported on its audit of the outsourcing of some operations at the St. Louis AMC, which was part of the Service's AMC outsourcing initiative. The OIG found that, while Service management had generally complied with outsourcing policy, opportunities existed to enhance guidance for measuring results. Specifically, the Service had not established policies or procedures for determining whether outsourcing initiatives achieved intended results and did not require a post-implementation review of outsourcing initiatives. Further, the OIG report stated that without such a review, there was no accountability or assurance that the outsourcing initiative achieved anticipated results. The Service agreed with the OIG's recommendation to establish a post-implementation review program that compares anticipated savings with actual results and stated that such a program would be developed by March 31, 2008. In June 2008, the Service told us it was working with the OIG to develop a review program but added that reviews of all outsourced AMCs would depend on the results of its initial reviews. The Service did not indicate whether it would use a similar review for other outsourcing initiatives, but it has begun to track one indicator of outsourcing performance—cost savings. Service officials told us that for the AMC initiative, the Service saved, or expects to save by 2009, about $117 million by eliminating facility lease and labor expenses at AMCs. Not all of these savings are attributable to outsourcing activities, however, because the Service also included the savings it realized by closing some facilities.
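This report does not detail how the Service computes Total Factor Productivity, but such measures conventionally take the form of an output index divided by an input index. The following general form is offered only as illustrative background, with the composition of the indexes assumed rather than drawn from the report:

```latex
\mathrm{TFP}_t = \frac{Y_t}{X_t},
\qquad
\%\,\Delta\mathrm{TFP}_t \approx \%\,\Delta Y_t - \%\,\Delta X_t
```

where \(Y_t\) is an index of total outputs (for example, weighted mail volume and deliveries served) and \(X_t\) is an index of total inputs (labor, capital, and materials). Because the measure aggregates across the entire organization, an 8-year rise in TFP cannot, by itself, isolate the contribution of any single cost-reduction initiative, including outsourcing, which is the report's underlying point.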
Without complete information about the results of its outsourcing efforts, Service management, stakeholders, and Congress are not able to assess the risk and value of outsourcing, and accountability for results is limited. Specifically, the Service should be able to address the following questions: Does outsourcing, either of a specific function or at an aggregate level, save money, increase effectiveness, or provide some other value? What are the risks associated with outsourcing? How cost-effective is outsourcing? Is the level of satisfaction of customers served by contractors comparable to that of customers served by postal employees? What impact does increased use of contractors have on the safety of the mail, mail facilities, employees, and customers? Looking forward, the Service is considering another major outsourcing initiative involving its bulk mail processing network, which could impact work done by two unions. Stakeholders may raise questions about effectiveness, cost savings, and other anticipated outcomes. Responses to these questions will be important to inform decision-making. As it considers future outsourcing, the Service faces a number of challenges, including differing messages from Congress and the Administration on outsourcing and the potential impact of outsourcing on its relations with its employee unions. Two bills pending in Congress could affect the Service's outsourcing efforts: H.R. 4236 would require the Service to bargain with postal unions before it engages in outsourcing, and S. 1457 would limit the Service's ability to outsource. Service officials say outsourcing is a critical tool to help the Service meet its financial goals, but union officials oppose expanded use of outsourcing. Both the Service and the unions have indicated that the appropriate way to resolve issues related to outsourcing is through the collective bargaining process. However, most unions have said the Service would not negotiate with them on this issue and have therefore sought congressional intervention. The Service agreed in the most recent collective bargaining process to try to resolve its differences on this issue with the two carrier unions. Legislative and administration initiatives send differing messages to the Service on the scope of its outsourcing effort, ranging from initiatives that support outsourcing as a means to reduce costs, increase efficiency, and improve quality, to initiatives that question the value of outsourcing and propose to curtail it. Support for the Service to operate more like a business dates back at least to 1970, when Congress passed legislation that gave the Service unique status as an independent establishment of the federal government and authorized it to finance its operations through sales of its products and services instead of appropriations. Congress also stated in the 2006 Postal Accountability and Enhancement Act that the Postal Service should implement commercial best practices in its purchasing policies to achieve greater efficiency and cost savings by taking full advantage of private-sector partnerships, as recommended in the July 2003 report by the President's Commission on the United States Postal Service. Similarly, the current and previous administrations have advanced proposals to promote more efficient and effective government operations, including outsourcing government operations.
In particular, in 2001, the Bush Administration launched the President's Management Agenda to focus attention on ensuring that the resources entrusted to the federal government are well-managed and wisely used. The President's Management Agenda encourages opening federal commercial activities to competition among public and private sector sources to achieve increased savings and improved performance. These competitions are guided by specific criteria in Office of Management and Budget Circular No. A-76. Although the President's Management Agenda does not apply to the Service, it was one of many sources of information considered in the Service's 2002 Transformation Plan. The President's Commission on the Postal Service further recommended that the Service utilize outsourcing to help it accomplish cost-reduction goals and improve efficiency. However, as previously noted, recent legislative efforts could restrict the Service's ability to outsource. Bills introduced in 2007, and pending with the appropriate oversight subcommittees, address the Service's use of outsourced mail delivery contractors. S. 1457 seeks to limit the extent to which the Service could use outsourced delivery service, and H.R. 4236 would require the Service to bargain with postal employee unions before entering into certain contracts. Additionally, House Resolution 282, co-sponsored by 255 members as of July 2008, expressed "the sense of the House of Representatives that the United States Postal Service should discontinue the practice of contracting out mail delivery services." Postal employee unions have questioned whether outsourcing actually achieves its intended results. For example, the president of the Mail Handlers described as false the Service's assumption that it will save money by allowing private contractors to perform work currently done by postal employees. He further maintained that actual experience, such as with Emery, has shown that outsourcing has the opposite effect, costing the Service more than anticipated. In addition, he said that evaluations by the Service that compare the costs of performing work with contractors and with postal employees are incomplete and do not reflect the actual costs borne by the Service. APWU officials told us the Postal Service primarily uses wage factors as a basis for comparison. Since postal employees are relatively well paid, according to these officials, such comparisons tend to find outsourcing to be less costly. The officials also noted that such comparisons exclude the value of other factors, such as the higher levels of efficiency provided by a well-trained, dedicated career workforce. In addition, unions have expressed skepticism over the value of outsourcing, raising concerns about public policy issues related to the Service's mission or intended obligations as a government entity. For example, the Service has not clearly defined the functions it considers "inherently postal" (functions that should be performed only by the Service) or whether these functions should be outsourced. All four unions have questioned the wisdom of the Service outsourcing in what they consider its core functional areas, such as delivery or mail processing. Similarly, two unions expressed concerns that there are no limits to the extent to which the Service could outsource: if it outsources one delivery route, why not outsource all? They also noted that one impact of expanded outsourcing would be to replace long-term career employees with low-paid, no-benefit, non-career, and often transient workers.
Finally, the unions have also raised the question of whether the Service, by hiring contractors, violates the public's trust and expectation of safe, efficient mail delivery or violates its responsibilities as an employer. For example, the president of the Mail Handlers stated in recent testimony that another impact of outsourcing is that the Service, which is currently one of the largest employers of veterans and disabled veterans, would reduce the number of job opportunities for veterans returning from combat and noncombat situations. The current uncertain economic environment exacerbates the challenges facing the Service and contributed to lower than expected mail volumes and revenues in the first half of fiscal year 2008. The Service faces challenges such as generating mail volumes despite rate increases in May 2008; managing its costs in difficult economic conditions and improving operational efficiencies through accelerated cost-reduction strategies; maintaining, measuring, and reporting service; and managing its workforce. In its Transformation Plans and congressional testimony, the Service has acknowledged the value it places on outsourcing as a means to reduce costs and increase efficiency and its intent to continue to pursue outsourcing opportunities. The Service has a long history of outsourcing mail transportation, delivery services, and other functions, much of which has been carried out within the framework of the Service's collective bargaining agreements with its employee unions. Continued or expanded outsourcing by the Service could lead to problems with postal employee unions, as evidenced by public statements by union officials. For example, the president of the Mail Handlers testified that continued outsourcing by the Service would drive a wedge between it and hundreds of thousands of postal employees. Postal employees are critical to providing vital postal services to the American people and achieving a successful postal transformation. The President's Commission on the Postal Service concluded that as valuable as the Postal Service is to the nation, its ability to deliver that value is only as great as the capability, motivation, and satisfaction of the people who make possible the daily delivery of mail to American homes and businesses. However, we and others have reported that adversarial labor-management relations have been a challenge for the Service and its major labor unions. In the past, we reported that autocratic management, persistent confrontation and conflict, and ineffective performance systems often characterized the Service's organizational culture on the workroom floor. These problems resulted in an underperforming organization with major deficiencies in morale and quality of work life; huge numbers of grievances with high costs for the Service and its employees; and protracted, acrimonious contract negotiations. In our past reports, we found that these conditions have persisted over many years because labor and management leadership, at both the national and local levels, have often had difficulty working together to find solutions to their problems. Under these circumstances, it was difficult for the parties to develop and sustain the level of trust necessary for maintaining a constructive working relationship and agreeing on major changes to maximize the Service's efficiency and the quality of work life.
We are encouraged by recent progress in this area, such as reports by union officials of better communications, sharp reductions in the number of outstanding grievances, and the fact that three of the four major labor contracts were successfully negotiated between the parties without the need for binding arbitration. In addition, the Service has improved its productivity, which the Service reported has increased in each of the last 8 years. However, continued or expanded outsourcing could be an impediment to improved relationships. Recent actions by the unions continue to indicate significant concerns about outsourcing. For example, in 2007, the Rural Carriers filed a grievance alleging that the Service's expanded use of contract delivery service violated the terms of its collective bargaining agreement. Similarly, collective bargaining agreement negotiations in 2007 between the Service and the City Carriers initially came to an impasse, primarily because of union concerns over increased outsourcing of delivery services. The parties reached an agreement, which was ratified by the union membership, after the Service agreed to limitations on its ability to outsource in certain areas served by city carriers over the life of the 5-year contract. In addition, the Service and the union agreed to establish a joint committee, including the Rural Carriers, to discuss a mutually agreeable approach to the issue of outsourcing by March 2008. Because the committee extended its reporting deadline to the end of September 2008, the results of this effort are not yet known. The Service and employee unions have indicated that the appropriate means for resolving outsourcing issues is through the collective bargaining process. For example, the Postmaster General testified that, "beyond the specific subject of contract delivery, there is a bigger issue at stake. That is the ability of the parties, the Postal Service and its unions, to resolve their differences through the collective bargaining process. One of the most important accomplishments of the Postal Reorganization Act of 1970 was the extension of full collective bargaining rights to the postal unions. Over the course of more than 3 decades, these have served our employees, our unions and the Postal Service well. And, as we have seen, the process can, and does, work." While most unions agree, they also stated that the Service has not always been willing to discuss outsourcing in collective bargaining negotiations. Some unions have already asked Congress to pass proposed legislation that would require the Service to collectively bargain with the unions on outsourcing proposals before they can be approved or that would limit the extent to which the Service could use outsourced delivery service. Although the Service has outsourced activities related to many of its key functions, its employee unions are now challenging its ability to expand outsourcing in areas where postal employees have performed or could perform these activities. The Service views outsourcing as an important strategy for achieving the cost savings it needs to operate successfully under a regulatory price cap. But because the Service does not track, and therefore cannot quantify, the actual results of its outsourcing activities, it cannot document the effectiveness of its outsourcing results for Service managers, stakeholders, and Congress. As a result, information to assess the risk and value of outsourcing is limited, and accountability for results is diminished.
Information on the effectiveness of outsourcing, including actual results, costs, and any savings achieved, could be useful as the Service considers additional outsourcing initiatives. Both the Service and its unions agree that the appropriate way to resolve outsourcing issues is through the collective bargaining process. A key challenge for both the Service and its unions will be to reach agreement on outsourcing issues through collective bargaining. To improve management decision-making and accountability in this area, the Postmaster General should, first, establish a process to measure the results and effectiveness of Service outsourcing activities that are subject to collective bargaining. This process should include tracking actual costs and any savings and comparing them with estimated costs and savings. Second, to support congressional oversight, the Postmaster General should include information on the results and effectiveness of these ongoing outsourcing activities in the Service's annual operations report (Comprehensive Statement on Postal Operations) to Congress. The U.S. Postal Service provided written comments on a draft of this report in a letter from the Chief Human Resources Officer and Executive Vice President dated July 7, 2008. These comments are summarized below and are included, in their entirety, as appendix III to this report. In separate correspondence, the Service also provided minor technical comments, which we incorporated as appropriate. The Service generally agreed with our finding that it does not separately track the results of its outsourcing activities and with our first recommendation that a process should be established to measure the results and effectiveness of Service outsourcing initiatives that are subject to collective bargaining agreements, including tracking actual costs and any savings. However, the Service did not agree to implement our second recommendation to provide information on the results and effectiveness of these ongoing outsourcing initiatives in its annual operations report to Congress. The Service agreed to establish a process, for future national-level outsourcing initiatives approved after July 2008, to compare the assumptions in the final financial comparative analysis with actual contract award data 1 year after project implementation. This is a commendable first step toward assessing the impact and effectiveness of outsourcing efforts, specifically the cost savings achieved after 1 year. However, our recommendation was not limited to the costs and savings associated with outsourcing, and the Service did not commit to measuring other impacts, such as those on service, customers, or employees. We continue to believe these are also important to a comprehensive assessment of outsourcing efforts. The Service did not agree to implement our second recommendation and proposed instead to retain the information it collects on its outsourcing efforts internally. However, in order for the Service to effectively make its case to use outsourcing as a mechanism to contain costs, we believe that the Service must keep its stakeholders, including Members of Congress, customers, employees, and the public at large, fully informed of the merits, potential impacts, and results of outsourcing. Several bills are pending before Congress that would affect the Service's ability to outsource.
In conducting oversight and making its decisions, Congress would benefit from more data about the results and effectiveness of the Service's outsourcing activities that are subject to collective bargaining. Without transparency regarding its outsourcing initiatives, postal management, stakeholders, and Congress are not able to assess the risk and value of outsourcing, and accountability for results is limited. Thus, we believe that the Service should annually provide Congress with information about the results of outsourcing activities, as discussed previously. We are sending copies of this report to the Chairman and Ranking Member of the House Committee on Oversight and Government Reform; the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs; the Postmaster General; and other interested parties. We also will provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at [email protected] or by telephone at (202) 512-2834. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives for this report were to assess (1) the circumstances under which the Service can outsource postal functions, how it decides to outsource, and the extent to which it has outsourced; (2) how the Service's management processes for contractors, including suitability (hiring and screening procedures) and performance evaluation, compare to those for postal employees; and (3) the results of the Service's outsourcing efforts, including any cost savings or other outcomes, and the challenges facing the Service related to outsourcing. To address the circumstances under which the Service can outsource postal functions, we reviewed statutory and regulatory requirements and agreements with the Service's employee unions. Specifically, we reviewed applicable statutes and regulations pertaining to the Service, as well as the Service's most recent collective bargaining agreements, and accompanying memoranda of understanding, with its four major employee unions—the American Postal Workers Union (APWU), the National Association of Letter Carriers (City Carriers), the National Postal Mail Handlers Union (Mail Handlers), and the National Rural Letter Carriers' Association (Rural Carriers). In addition, we reviewed information on federal laws and regulations applicable to most federal purchasing, though not required of the Service, such as the Federal Acquisition Regulation, the Competition in Contracting Act, and the Office of Management and Budget's Circular No. A-76. We focused the scope of our review on the major postal functional areas—transportation, delivery, mail processing, and retail—that involve outsourcing activities related to the bargaining unit work of the Service's four major unions. To determine how the Service decides to outsource, we interviewed postal officials at Service headquarters and in the Southwest Area, as well as representatives of employee unions, management associations, and the National Star Route Mail Contractors Association. We reviewed the purchasing procedures the Service uses to guide its purchasing of specific services, including contracted services.
We also interviewed the Strategic Initiatives Action Group (SIAG), an internal cross-functional group at headquarters that guides outsourcing through the decision-making process at the national level, and reviewed the group's policies and procedures. Finally, we discussed with Service officials, and obtained information on, the Growth Management Tool, a new software tool the Service can use in assigning delivery routes, including outsourced routes. To determine the extent of the Service's outsourcing, we interviewed Service officials at headquarters and in the Southwest Area and obtained and reviewed documents related to the Service's outsourcing efforts, including the 46 national-level outsourcing initiatives since 1996, as well as proposed initiatives. In addition, we obtained, reviewed, and analyzed information on outsourced transportation and delivery, including data pertaining to routes and delivery points for the three types of carriers—city, rural, and contract delivery service. We assessed the reliability of Service data for inconsistencies and determined that the data were sufficiently reliable for the purposes of this report. To compare the Service's suitability, performance evaluation, and management processes for postal employees and contractors, we reviewed the Service's policies and procedures on qualifications, screening requirements, and performance evaluation for contractors and postal employees. We reviewed various Service contracts on Air Mail Centers, highway contract routes, and contract delivery service and compared contractual stipulations with applicable postal employee guidelines and collective bargaining agreements by respective occupation. Finally, we spoke with various Service national- and field-level officials about the Service's suitability, performance evaluation, and management processes for postal employees and contractors. To evaluate the results, including costs, savings, and other outcomes related to outsourcing, we discussed with Service officials the processes and procedures currently in place for evaluating outsourcing activities. We also obtained and reviewed available cost estimates, relevant budget information, and available performance data for the outsourcing projects initiated between 1996 and 2007. We conducted this performance audit from August 2007 to July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [Appendix II table excerpt; initiatives listed include: Singulate and Scan Induction Unit / Optical Character Recognition Program; Hub and Spoke (HASP) Expansion / Surface Transfer Centers (STC), which control the movement of mail through surface transportation; BMC Transition to Regional Distribution Center (RDC) Concept; Automated Tray Handling System (ATHS); Feeder Enhancer De-Stacker Retrofit (FEDR); Flat Identification Code Sort Program (FICS); and Ventilation and Filtration Systems (VFS).] In addition to the individual named above, Teresa Anderson, Lauren Calhoun, Elizabeth Eisenstadt, Brandon Haller, David Hooper, Brian Howell, Karen Jarzynka, and Travis Thomson made key contributions to this report.
The U.S. Postal Service (the Service) has a long history of contracting out postal functions, such as mail transportation, mail delivery in rural areas, vehicle and equipment maintenance, and retail postal services. However, postal employees also perform many of these same functions, and unions representing these employees have concerns about the scope and impact of outsourcing. The objectives of this requested report are to assess (1) the circumstances under which the Service can outsource postal functions, how it decides to outsource, and the extent to which it has outsourced; (2) how the Service's management processes compare for contractors and postal employees; and (3) the results, including any savings, and key challenges related to the Service's outsourcing activities. GAO reviewed applicable statutes, collective bargaining agreements, postal processes and outsourcing data, and interviewed postal union and management officials. The Service has no statutory restrictions on the type of work it may outsource, but collective bargaining agreements with its unions impose some process requirements and limitations. When evaluating outsourcing proposals, the Service must consider five factors--public interest, cost, efficiency, availability of equipment, and qualification of employees--and determine whether outsourcing will have a "significant impact" on work performed by postal employees covered by collective bargaining agreements. If so, it must compare the costs of performing the proposed work with postal employees and with a contractor, notify the affected union that it is considering outsourcing, and consider union input before making a decision. We could not determine the total value of the Service's outsourcing contracts related to bargaining unit work, because the Service does not separately track these contracts. It did provide data on some outsourcing that has impacted work by employees of its four major unions in the areas of retail, processing, transportation, and delivery. The Service evaluates contractors and postal employees using similar suitability and performance standards, but uses different management processes. The Service recently revised its drug screening procedures so they are now similar for both groups. The Service manages contractors through specific performance requirements, as compared with Service policies and collective bargaining agreements for postal employees. Finally, the Service has mechanisms to evaluate performance and take actions related to performance problems for both, but does not compile performance data in a way that permits comparisons between contractors and postal employees. The Service does not have a comprehensive mechanism for measuring results, including any actual savings; therefore, it could not provide information on the effectiveness of its outsourcing. Without cost-savings data, postal managers, stakeholders, and Congress cannot assess the risk and value of outsourcing. Also, accountability for results is limited. The Service has stated that it will explore outsourcing opportunities, and postal unions are concerned that the Service's use of contractors for delivery service is growing. Proposed legislation to limit the Service's outsourcing is pending in Congress, which the Service says could limit its ability to contain costs. Key challenges include whether the Service and its unions can reach agreement on outsourcing issues through collective bargaining and whether the Service can provide analysis to substantiate the benefits of outsourcing.
Developing Area Navigation (RNAV) and Required Navigation Performance (RNP) procedures, often called performance-based navigation procedures, with significant benefits is one way to leverage existing technology in the near term and provide immediate benefits to industry, but developing these procedures expeditiously will be a challenge for FAA. According to the Task Force, developing RNAV and RNP procedures could be a key part of relieving current congestion and delays at major metropolitan airports. Benefits of RNAV and RNP can also include reduced fuel usage, reduced carbon emissions, reduced noise, shorter flights, fewer delays, less congestion, and improved safety. For example, Southwest Airlines demonstration flights show that RNP can reduce fuel burn and carbon dioxide emissions by as much as 6 percent per flight. In 2008, Alaska Airlines estimated that it used RNP procedures 12,308 times and saved 1.5 million gallons of fuel, thereby reducing carbon dioxide emissions by approximately 17,000 metric tons and operating costs by $17 million. Even greater benefits can be realized when the procedures are part of a comprehensive airspace redesign that includes more efficient flight paths, rather than simply overlays of historical aircraft flight paths. Deriving benefits from RNAV and RNP technology depends less on equipping aircraft with the technology required to fly these procedures than on developing procedures with significant benefits in a timely manner. MITRE Corporation, which collects and retains data on equipage levels for the existing fleet, estimates that for aircraft in commercial operations in 2009, equipage rates are more than 90 percent for RNAV, more than 60 percent for RNP, and more than 40 percent for RNP equipment that allows for higher levels of precision. These figures indicate that the equipment necessary to take advantage of RNAV and RNP technology is already substantially deployed. However, comparatively few procedures have been developed for airlines to use the equipment. Since 2004, FAA has published 305 RNAV procedures, 206 RNAV routes, and 192 RNP approaches, but much remains to be done (see table 1). FAA believes that it can annually develop about 50 RNAV procedures, 50 RNAV routes, and 50 RNP approaches. At this pace of development, a simple calculation, sketched below, suggests that it would require decades to complete the thousands of procedures currently targeted for development. The Task Force report suggests that FAA and industry create joint teams to focus on performance-based navigation issues at certain locations and to prioritize procedures for development at these locations. Such an effort would likely lead to changes in FAA's current development targets. Nonetheless, accelerating the development of procedures would require a shift in FAA's resources, or additional human resources and expertise. In addition to FAA, numerous companies have the expertise and experience to develop procedures and are doing this work for air navigation service providers around the world. FAA recognizes the potential benefits of involving these private companies and has taken steps to use them more. FAA recently authorized one such company, Naverus, which has a long history of expertise in procedure development, to validate public and private flight procedures that the company has developed for the U.S. market. This authorization will allow the company to validate performance-based navigation flight procedures from beginning to end.
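To make the pace calculation referenced above concrete: the text gives FAA's estimated development capacity (about 50 RNAV procedures, 50 RNAV routes, and 50 RNP approaches per year, or roughly 150 items annually), but the total number of items targeted for development is not reproduced here, so the 3,000 used below is only an assumed round figure for illustration:

```latex
\frac{\text{items targeted (assumed)}}{\text{items developed per year}}
  = \frac{3{,}000}{50 + 50 + 50}
  = \frac{3{,}000}{150}
  = 20 \text{ years}
```

Even if FAA doubled this capacity, a backlog of that assumed size would still take a decade, which is consistent with the report's characterization that completing the thousands of targeted procedures would require decades.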
While private sector development may be one way to accelerate procedure development, issues related to FAA's capacity to approve these procedures remain, according to some stakeholders. In addition, questions such as who can use the procedures and how oversight of third-party developers is to be provided must also be resolved. While FAA tracks the number of navigation procedures completed, stakeholders have told us that developing procedures with significant benefits is more important than developing a specific number of procedures. For example, according to Southwest Airlines, FAA has developed 69 RNP procedures for the routes it flies, only 6 of which the airline views as useful because of the resulting reduction in flight miles or emissions. Some stakeholders have suggested that FAA use other metrics that better capture benefits to industry from advanced procedures, such as fuel savings, time savings, or mileage savings, which could lead to more of a focus on the development of procedures that maximize these benefits. The Task Force report identified the establishment of performance metrics as an important part of following up on and tracking the implementation of its recommendations, and we have ongoing work for this committee reviewing FAA's performance metrics related to this and other aspects of NextGen development.

As FAA develops new procedures to make more efficient use of airspace in congested metropolitan areas, it will be challenged to complete the necessary environmental reviews quickly and address local concerns about the development of new procedures and airspace redesign. Any time an airspace redesign or a new procedure changes the noise footprint around an airport, an environmental review is initiated under the National Environmental Policy Act (NEPA). Under NEPA, varying levels of environmental review must be completed depending on the extent to which FAA deems its actions to have a significant environmental impact. There are three possible levels:

1. Categorical exclusion determination. Under a categorical exclusion, an undertaking may be excluded from a detailed environmental review if it meets certain criteria and a federal agency has previously determined that the undertaking will have no significant environmental impact.

2. Environmental assessment/finding of no significant impact (EA/FONSI). A federal agency prepares a written environmental assessment (EA) to determine whether or not a federal undertaking would significantly affect the environment. If the answer is no, the agency issues a finding of no significant impact (FONSI).

3. Environmental impact statement (EIS). If the agency determines while preparing the EA that the environmental consequences of a proposed federal undertaking may be significant, an EIS is prepared. An EIS is a more detailed evaluation of the proposed action and alternatives.

The more extensive the analysis required, the longer the process can take. A full EIS can take several years to complete. EAs and categorical exclusions, by contrast, take less time and fewer resources to complete. Because NEPA does not allow consideration of the net impact of an action such as the introduction of new procedures or broader airspace redesign—which may increase noise in some areas but increase capacity at an airport and reduce noise and emissions overall—these actions can often result in extensive and time-consuming reviews. FAA is exploring situations in which it might be more appropriate to use a categorical exclusion or an EA instead of an EIS.
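The three NEPA review levels described above form a simple decision ladder. The sketch below encodes that ladder schematically; the function and its inputs are illustrative simplifications, not FAA or NEPA terminology.

```python
from enum import Enum

class NepaReview(Enum):
    CATEGORICAL_EXCLUSION = "categorical exclusion determination"
    EA_FONSI = "environmental assessment / finding of no significant impact"
    EIS = "environmental impact statement"

def required_review(meets_exclusion_criteria: bool,
                    ea_finds_significant_impact: bool) -> NepaReview:
    """Schematic mapping of an action to one of the three review levels."""
    if meets_exclusion_criteria:
        # A prior agency determination of no significant impact applies.
        return NepaReview.CATEGORICAL_EXCLUSION
    if not ea_finds_significant_impact:
        # The written EA supports a finding of no significant impact.
        return NepaReview.EA_FONSI
    # Potentially significant consequences trigger the most detailed
    # (and slowest) review, which can take several years.
    return NepaReview.EIS

print(required_review(False, True).value)  # -> environmental impact statement
```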
The 2009 FAA reauthorization legislation includes language that may expedite the environmental review process. For example, the legislative proposal would allow airport operators to use grant funds for environmental reviews of proposals to implement flight procedures. The proposal would also allow project sponsors to provide FAA with funds to hire additional staff as necessary to expedite completion of the environmental review necessary to implement flight procedures.

According to stakeholders and Task Force members, and as we have previously reported, FAA faces organizational and cultural challenges in implementing NextGen operational capabilities. FAA has traditionally developed and acquired new systems through its acquisition process. However, most NextGen technologies and capabilities, such as Automatic Dependent Surveillance-Broadcast (ADS-B), rely on components in the aircraft, on the ground, and in space for their use. They also require controllers and pilots to be trained and flight procedures to be developed in order to maximize their benefits. Different offices within FAA—including its Aircraft Certification Service, Flight Standards Service, and Air Traffic Organization (ATO), among others—are responsible for ensuring the completion of all the activities required to maximize the use of a technology or capability. While FAA has recently made organizational changes to address integration issues, several stakeholders told us, and our previous and ongoing work suggests, that FAA's structure and culture continue to hamper its ability to ensure that all the actions necessary to maximize use of a technology or capability in the national airspace system are completed efficiently. For example, stakeholders identified coordination and integration as particular challenges to implementing operational capabilities in the surface operations area identified by the Task Force. Implementing capabilities in this area will require greater coordination among offices within ATO, airport operators, pilots, and controllers, among others.

While many of the operational improvements identified by the Task Force align with FAA's current plans, a senior FAA official indicated that in several instances, FAA may need to adjust its plans, budgets, and priorities as it decides how it will respond to the Task Force's recommendations. According to this senior FAA official, potential budgetary changes are already being identified, and a comprehensive analysis of what additional changes to existing plans would be necessary to respond to the recommendations is underway. Until this analysis is completed, it is difficult to know exactly what changes FAA would need to make to implement the Task Force's recommendations. In some cases, the Task Force's recommendations, if accepted and fully implemented, will require altering the course of initiatives that are already underway or programs that are being implemented. For example, a recommendation to expand surveillance of airspace around certain general aviation airports may require an increase in the scope of the current ADS-B program, which does not cover those areas. In addition, recommendations to expand information sharing to improve surface situational awareness and traffic management could affect the current plans for FAA programs such as System-Wide Information Management (SWIM), according to one stakeholder. Responding to the Task Force's recommendations will require a willingness to change and reprioritize current plans and programs.
Inefficiencies in FAA's certification, operational approval, and procedure design processes constitute another challenge to delivering near-term benefits to stakeholders, instilling confidence in FAA plans, and encouraging investment in new equipment. Our prior work has identified this issue and concluded that the time required to complete such activities will have to be balanced against the need to ensure the reliability and safety of procedures and systems before they are used in the national airspace system. Stakeholders, including airlines, general aviation groups, and a group that represents avionics manufacturers, as well as the Task Force, have said that these processes take too long and impose costs on industry that discourage stakeholders from investing in NextGen aircraft equipment. For example, the President of GE Aviation Systems recently testified, and other stakeholders have told us, that the process of approving and deploying RNP navigation procedures remains extremely slow and that FAA's review and approval of a given original RNP design often takes years. A 1999 RTCA task force also identified a need to streamline the certification and operational approval processes and made a number of recommendations to FAA. According to a senior FAA official, while FAA has made progress in addressing many of these recommendations, it has yet to take action on others and some challenges remain. For example, the NextGen Task Force reports that FAA aircraft certification offices face resource issues and that applicants for many required installation approvals wait about 6 months until FAA engineers are available to oversee their projects. Other suggestions to streamline the equipment certification process include increasing staffing at FAA's certification offices to process applications and establishing NextGen-specific equipment certification processes that allow quicker approvals of equipment.

Another challenge for FAA will be to continue involving stakeholders—including industry and controllers, as well as others as appropriate—in implementation and key decisions related to the Task Force's recommendations. The Task Force recommends, and we agree, that FAA and industry establish institutional mechanisms to facilitate continued transparency and collaboration in planning and implementing actions to address the Task Force's recommendations, particularly as these actions lead to changes in the NextGen Implementation Plan. The Task Force recommended the creation of a NextGen Implementation Workgroup under the RTCA Air Traffic Management Advisory Committee (ATMAC). An FAA official indicated that several mechanisms, including a variety of advisory boards and working groups, currently exist and can also be used to improve collaboration among stakeholders. We have previously reported that the roles of these various groups have become somewhat unclear, even to stakeholders involved in them. FAA will need to work with industry and key stakeholders to come to agreement on how, where, and when stakeholders will be involved. Continued transparency and collaboration are key to developing industry's trust that FAA is making changes to implement NextGen. In addition, FAA will need to continue to work toward changing the nature of its relationship with controllers and the controllers' union to create more effective engagement and collaboration. In September 2009, FAA and NATCA signed a new 3-year contract. FAA views the new contract as a framework for helping meet the challenges of implementing NextGen.
NATCA states that the contract starts a process to discuss ways of getting NATCA representatives involved in all NextGen-related issues. One particular change that would affect the relationship between controllers and FAA, as well as facilitate NextGen's implementation, would be to modify the incentives that influence how controllers apply FAA's aircraft separation standards. More specifically, a change that encouraged controllers to decrease the separation between aircraft during landing or takeoff would improve system capacity and efficiency; such a change was one of the Task Force's overarching recommendations. Currently, according to NATCA, controllers are encouraged to increase the separation between aircraft because they are penalized if separation thresholds are crossed. Moreover, according to MITRE, controllers often separate aircraft by more than the prescribed minimum distances to address any uncertainty about the actual positions of aircraft as well as to reduce the likelihood of violating the required separation distances. NextGen technologies and procedures can provide controllers with more precise information about the locations of aircraft and allow aircraft to operate closer to one another. Recent changes to the Operational Error program and the Air Traffic Safety Action Program (ATSAP) are aimed at establishing a nonpunitive safety reporting program and are a positive first step toward changing the culture and establishing a more collaborative relationship with controllers.

The Task Force's focus was on making better use of the equipment that has already been installed or is available for installation. However, as NextGen progresses and as the Task Force's recommendations are implemented, operators will need to acquire additional equipment to take full advantage of the benefits of NextGen. In some cases the federal government may deem financial or other incentives desirable to speed the deployment of new equipment. Appropriate incentives will depend on the technology and the potential for an adequate and timely return on investment. A discussion of options to accelerate equipage, drawn from our prior work and the Task Force's report, follows.

The first option is mandating the installation of equipment. Traditionally, FAA mandates the equipage of aircraft for safety improvements and provides several years for operators to comply. According to academic researchers, among these mandated safety improvements are ground proximity warning sensors, extended ground proximity warning sensors, and traffic collision and avoidance systems. Mandates can be effective because they force operators to equip even when there may not be clear and timely benefits to operators that justify the cost of equipping. In the NextGen context, FAA has proposed a rule that mandates equipage with ADS-B Out for affected aircraft by 2020. However, operators may not equip until the deadline for compliance is near because the cost of early investment in new technologies is often high and the return on investment limited. This is particularly true for general aviation operators, who typically do not fly enough to recoup a large investment in new aircraft equipment. According to a general aviation stakeholder, general aviation operators typically fly hundreds of flight hours a year, while scheduled airlines fly thousands a year. Our prior work has identified a variety of other disincentives to early investment.
These disincentives include the possibility that a technology may not work as intended, may not provide any operational benefits until a certain percentage of all aircraft are equipped, or may become obsolete because a better technology is available. Other risks to early investors include potential changes in the proposed standards or requirements for the technology, later reductions in the price of technologies and installations, or the risk that FAA may not implement the requisite ground infrastructure and procedures to provide operators with benefits that would justify their costs to equip. Moreover, because equipage mandates are designed to cover a broad range of users in a single action, they may lead to objections and lobbying from users, such as general aviation operators, on whom significant costs are imposed.

A second option to accelerate equipage is to develop operational improvements that make use of equipment that is already widely deployed, producing benefits for operators that justify the costs of equipage. The Task Force's recommendations are geared toward this option. A large part of the fleet is equipped with technologies that operators cannot fully use until FAA has implemented operational improvements. If FAA can implement such improvements for operators that have this equipment, it could provide a return on investment for them and create a financial incentive for others to equip. But because FAA has not always taken the actions needed for operators to take full advantage of investments in equipage, such as for Controller Pilot Data Link Communications, some industry stakeholders question whether FAA will now follow through with the tasks required to allow operators to achieve the full benefit of their investment in a timely manner. Early success in implementing some of the Task Force's near-term recommendations will help build trust between FAA and operators that FAA will provide operational improvements that allow operators to take advantage of the required equipment and realize benefits.

A third option, proposed by FAA and known as "best equipped, best served," would have FAA ensure some form of operational benefit for operators that do equip, such as preferred airspace, routings, or runway access, which can save time or fuel. If early equippers get a clear competitive advantage, other operators may be encouraged to follow their example, providing further incentive for all operators to fully equip their fleets. An advantage of pursuing this option is that no federal financial incentives are required for equipage, so costs to the federal government are generally lower. However, designing such incentives and analyzing how they will work in practice is a major challenge and has only begun to move forward. For example, giving a better-equipped aircraft preference over lesser-equipped aircraft to land or depart may increase delays and holding patterns for the lesser-equipped aircraft, potentially increasing delays and fuel usage overall and resulting in lower systemwide benefits. Furthermore, according to airline stakeholders, the best equipped, best served option will require controllers to accept procedures about which they have expressed safety concerns in the past. Mechanisms will also have to be created so that controllers know which aircraft are best equipped, and these mechanisms cannot adversely affect controller workload or safety.
The Task Force’s report does not address the practical implications of how a best equipped, best served option would work, but recommends that the option be explored in the context of specific operational capabilities and locations. A fourth option is to provide financial incentives where operators do not have a clear and timely return on investment for equipping aircraft. Financial incentives can accelerate investment in equipment, which, in turn, can accelerate the operational and public benefits expected from implementing additional capabilities. According to the Commission on the Future of the United States Aerospace Industry, one argument for some form of federal financial assistance is that the total cost to the federal government of fully financing the communication, navigation, and other airborne equipment required for more efficient operations would be less than the costs to the economy of system delays and inefficiencies that new equipment would help address. In previous work, we concluded that the federal government’s sharing of costs is most justifiable when there are adequate aggregate net benefits to be realized through equipage, but those who need to make the investments in the equipment do not accrue enough benefits themselves to justify their individual investments. Financial assistance can come in a variety of forms including grants, cost- sharing arrangements, loans, and tax incentives. As we have previously reported, prudent use of taxpayer dollars is always important; therefore, financial incentives should be applied carefully and in accordance with key principles. For example, mechanisms for financial assistance should be designed so as to effectively target parts of the fleet and geographical locations where benefits are deemed to be greatest, avoid unnecessarily equipping aircraft (e.g., those that are about to be retired), and not displace private investment that would otherwise occur. Furthermore, it is preferable that the mechanism used for federal financial assistance result in minimizing the use of government resources (e.g., some mechanisms may cost the government more to implement or place the government at greater risk than others). We also reported that, of the various forms of assistance available to the federal government, tax incentives have several disadvantages because (1) many scheduled airlines may not have any tax liability that tax credits could be used immediately to offset, (2) a tax credit would provide a more valuable subsidy for carriers that are currently profitable than for those that are not, and (3) using the tax system to provide a financial incentive can impose an administrative burden on the Internal Revenue Service. One financing option proposed by the Task Force to encourage the purchase of aircraft equipment is the use of equipage banks, which provide federal loans to operators to equip their aircraft. Recent legislation proposes that FAA establish a pilot program that would permit the agency to work with up to five states to establish ADS-B equipage banks for making loans to help facilitate aircraft equipage locally. The Task Force suggests that equipage banks could be used to provide funds for operators to equip with a NextGen technology when there may not be a benefit or return on investment for doing so. By providing for a variety of NextGen technologies, an equipage bank can avoid penalizing those who have already invested in a particular NextGen technology. 
The federal government has used a similar financing option in the past to fund other infrastructure projects, including highway improvements.

Thank you, Mr. Chairman. This concludes my prepared statement. I would be pleased to answer any questions that you or Members of the Subcommittee may have at this time.

For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Andrew Von Ah (Assistant Director), Amy Abramowitz, Kieran McCarthy, Kevin Egan, Bess Eisenstadt, and Bert Japikse.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

On September 9, 2009, the Next Generation Air Transportation System (NextGen) Midterm Implementation Task Force (Task Force) issued its final report and recommendations. The Task Force was to reach a consensus on the operational improvements to the air transportation system that should be implemented between now and 2018. Its recommendations call for the Federal Aviation Administration (FAA) to develop improvements that allow operators to take advantage of equipment that has been widely deployed or is available for installation in existing aircraft. FAA is now considering how to modify its existing plans and programs in response to the Task Force's recommendations and must do so in a way that retains safety as the highest priority. This testimony highlights the NextGen challenges previously identified by GAO and others that affect FAA's response to the Task Force's recommendations. GAO groups these challenges into three areas: (1) directing resources and addressing environmental issues, (2) adjusting its culture and business practices, and (3) developing and implementing options to encourage airlines and general aviation to equip aircraft with new technologies. GAO's testimony updates prior GAO work with interviews with agency officials and industry stakeholders and includes an analysis of the Task Force report.

Directing resources and addressing environmental issues. Allocating resources for advanced navigational procedures and airspace redesign requires FAA to balance benefits to operators against resource limits and other challenges to the timely implementation of NextGen. Procedures that allow more direct flights--versus those that overlay existing routes--and redesigned airspace in congested metropolitan areas can save operators time, fuel, and costs, and reduce congestion, delays, and emissions. However, FAA does not have the capacity to expedite progress toward its current procedure development targets. While FAA has begun to explore the use of the private sector to help develop procedures, issues related to public use of these procedures and oversight of developers remain. In addition, required environmental reviews can be lengthy, especially when planned changes in noise patterns create community concerns during reviews. Challenges to FAA include deciding whether to start in more or less complex metropolitan areas, and finding ways to expedite the environmental review process and proactively ameliorate community concerns.

Changing FAA's culture and business practices.
According to stakeholders and Task Force members, and as GAO has previously reported, FAA faces cultural and organizational challenges in implementing NextGen capabilities. Whereas FAA's culture and organization formerly supported the acquisition of individual air traffic control systems, FAA will now have to integrate and coordinate activities across multiple lines of business, as well as reprioritize some of its plans and programs, to implement near-term and midterm capabilities. FAA is currently analyzing what changes may be required to respond to the recommendations. Streamlining FAA's certification, operational approval, and procedure design processes, as a prior task force recommended, will also be essential for timely implementation. And sustaining a high level of involvement and collaboration with stakeholders--including operators, air traffic controllers, and others--will also be necessary to ensure progress.

Developing and implementing options to encourage equipage. The Task Force focused on making better use of equipment that has already been widely deployed in aircraft, but as NextGen progresses, new equipment will have to be installed to implement future capabilities, and FAA may have to offer incentives for operators to accelerate their installation of equipment that may not yield an immediate return on investment. While FAA could mandate equipage, mandates take time to implement and can impose costs, risks, and other disincentives on operators that discourage early investment in equipment. The Task Force identified several options to encourage equipage, including offering operational or financial benefits to early equippers. Challenges to implementing these options include defining how operational incentives would work in practice, designing financial incentives so as not to displace private investment that would otherwise occur, and targeting incentives where benefits are greatest.
The JSF is a joint, multinational acquisition to develop and field an affordable, highly common family of next generation strike fighter aircraft for the United States Air Force, Navy, Marine Corps, and eight international partners. The JSF is a single-seat, single-engine aircraft incorporating low-observable (stealth) technologies, defensive avionics, advanced sensor fusion, internal and external weapons, and advanced prognostic maintenance capability. There are three variants. The conventional takeoff and landing (CTOL) variant will be an air-to-ground replacement for the Air Force's F-16 Falcon and the A-10 Thunderbolt II aircraft, and will complement the F-22A Raptor. The short takeoff and vertical landing (STOVL) variant will be a multi-role strike fighter to replace the Marine Corps' F/A-18C/D Hornet and AV-8B Harrier aircraft. The carrier-suitable variant (CV) will provide the Navy a multi-role, stealthy strike aircraft to complement the F/A-18 E/F Super Hornet.

DOD began the JSF program in October 2001 with a highly concurrent, aggressive acquisition strategy with substantial overlap between development, testing, and production. The program was replanned in 2004 following weight and performance problems and rebaselined in 2007 due to cost growth and schedule slips. In February 2010, the Secretary of Defense announced another comprehensive restructuring of the program due to poor outcomes and continuing problems. This restructuring followed an extensive Department-wide review, which included three independent groups chartered to evaluate program execution and resources, manufacturing processes and plans, and engine costs and affordability initiatives. DOD provided additional resources for testing (funding, time, and flight test assets) and reduced near-term procurement by 122 aircraft. As a result of the additional funding needed and recognition of higher unit procurement costs, in March 2010 the Department declared that the program experienced a Nunn-McCurdy breach of the critical cost growth statutory threshold and subsequently certified to the Congress in June 2010 that the JSF program should continue. The program's approval to enter system development was rescinded and efforts commenced to establish a new acquisition program baseline. The new JSF program executive officer subsequently led a comprehensive technical baseline review. In January 2011, the Secretary of Defense announced additional development cost increases and further delays, and cut another 124 aircraft through fiscal year 2016. Restructuring continued throughout 2011 and into 2012, adding to costs and extending the schedules for achieving key activities.

The Department's restructuring actions have helped reduce near-term risks by lowering annual procurement quantities and allowing more time and resources for flight testing. In late March 2012, the Department established a new acquisition program baseline and approved the continuation of system development. These decisions, critical for program management and oversight, had been delayed several times and came 2 years after the Department alerted the Congress that the program experienced a breach of the Nunn-McCurdy critical cost growth threshold and thus required a new milestone approval for system development and a new acquisition program baseline. The new JSF baseline projects a total acquisition cost of $395.7 billion, an increase of $117.2 billion (42 percent) from the prior 2007 baseline.
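As a quick consistency check, the 42 percent figure follows directly from the two dollar amounts just cited; the sketch below uses only numbers reported in this statement.

```python
# Consistency check on the new-baseline figures cited above ($ billions).
new_baseline = 395.7   # total acquisition cost in the 2012 baseline
increase     = 117.2   # growth over the prior 2007 baseline

prior_2007_baseline = new_baseline - increase        # ~278.5
pct_growth = 100 * increase / prior_2007_baseline    # ~42 percent
print(f"2007 baseline ~${prior_2007_baseline:.1f}B; growth ~{pct_growth:.0f}%")
```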
Table 1 shows changes in cost, quantity, and schedule since the start of system development (2001), a major redesign (2004), a revised baseline following the program's Nunn-McCurdy breach of the significant cost growth statutory threshold (2007), initial restructuring actions after the Nunn-McCurdy breach of the critical cost growth statutory threshold (2010), and the new acquisition program baseline (2012). Full rate production is now planned for 2019, a delay of 6 years from the 2007 baseline. Unit cost estimates continue to increase and have now doubled since the start of development. Initial operational capability dates for the Air Force, Navy, and Marines—the critical dates when the warfighter expects the capability promised by the acquisition program to be available—have slipped over time and are now unsettled. The fiscal year 2013 defense budget request and five-year plan support the new approved baseline. Compared to the fiscal year 2012 budget plan for the same time period, the 2013 budget plan identifies $369 million more for JSF development and testing and $14.2 billion less in procurement funding for fiscal years 2012 through 2016. Procurement funding reflects the reduction of 179 aircraft in annual procurement quantities from fiscal year 2013 to fiscal year 2017. Appendix IV summarizes the new budget's development and procurement funding requests and aircraft quantities for each service. Taken as a whole, the Department's restructuring actions have helped reduce near-term acquisition risks by lowering annual procurement quantities and allowing more time and resources for flight testing. However, continuing uncertainties about the program and frequently changing prognoses make it difficult for the United States and international partners to confidently commit to future budgets and procurement schedules, while finalizing related plans for basing JSF aircraft, developing a support infrastructure, and determining force and retirement schedules for legacy aircraft. Over the long haul, affordability is a key challenge. Projected annual acquisition funding needs average more than $12.5 billion through 2037, and life-cycle operating and support costs are estimated at $1.1 trillion.

The new baseline increased cost and extended the schedule for completing system development. Development is now expected to cost $55.2 billion, an increase of $10.4 billion (23 percent) from the 2007 baseline. About 80 percent of these funds have been appropriated through fiscal year 2011. System development funding is now required through fiscal year 2018, 5 more years than the 2007 baseline. Figures 1 and 2 track cost increases and major events for the aircraft and engine development contracts, respectively. The new baseline includes $335.7 billion in procurement funding, an increase of $104 billion (45 percent) compared to the 2007 baseline. About 6 percent of this total funding requirement has been appropriated through fiscal year 2011. Concerned about concurrency risks, DOD, in the fiscal year 2013 budget request, reduced planned procurement quantities through fiscal year 2017 by 179 aircraft. This marked the third time in as many years that near-term procurement quantities had been reduced. Combined with other changes since the 2007 revised baseline, total JSF procurement quantity has been reduced by 410 aircraft through fiscal year 2017. Since the Department still plans to eventually acquire the full complement of U.S.
aircraft—2,443 production jets—the procurement costs, fielding schedules, and support requirements for the deferred aircraft will be incurred in future years beyond 2017. The new plan also stretches the period of planned procurement another two years to 2037. Figure 3 shows how planned quantities in the near-term have steadily declined over time. With the latest reduction, the program now plans to procure a total of 365 aircraft through 2017, about one-fourth of the 1,591 aircraft expected in the 2002 plan. The ramp rate (annual increases in quantities) for the early production years has been significantly flattened over time. Reducing near-term procurement quantities lowers concurrency risks because fewer aircraft are produced that may later need to be modified to correct problems discovered during testing. However, it also means that the number of aircraft and associated capabilities that the program committed to provide the warfighter will be delivered years later than planned. Overall program affordability—both in terms of the investment costs to acquire the JSF and the continuing costs to operate and maintain it over the life-cycle—remains a major challenge. As shown in figure 4, the annual funding requirements average more than $12.5 billion through 2037 and average more than $15 billion annually in the 10-year period from fiscal years 2019 through 2028. The Air Force alone needs to budget from about $6 to $11 billion per year from fiscal year 2016 through 2037 for procurement of JSF aircraft. At the same time, the Air Force is committed to other big-dollar projects such as the KC-46 tanker and a new bomber program. The long-stated intent that the JSF program would deliver an affordable, highly common fifth generation aircraft that could be acquired in large numbers is at risk. Continued increases in aircraft prices erode buying power and may make it difficult for the U.S. and international partners to buy as many aircraft as planned and to do so within the intended timeframe. As the JSF program moves forward, unprecedented levels of funding will be required during a period of more constrained defense funding expectations overall. If future funding is not available at these projected levels, the impacts on unit costs and program viability are unclear. Program officials have not reported on potential impacts from lowered levels of funding. In addition to the costs for acquiring aircraft, significant concerns and questions persist regarding the costs to operate and sustain JSF fleets over the coming decades. The most recent estimate projects total United States operating and support costs of $1.1 trillion for all three variants based on a 30-year service life and predicted usage and attrition rates. Defense leadership stated in 2011 that sustainment cost estimates at this time were unaffordable and simply unacceptable in the current fiscal environment. In March 2012, the Department established affordability targets for sustainment as well as production. The sustainment affordability target for the Air Force’s CTOL ($35,200 per flight hour) is much higher than the current cost for the F-16 it will replace ($22,500 per flight hour, both expressed in fiscal year 2012 dollars). Comparative data for the Navy’s CV and Marine Corps’ STOVL with the legacy aircraft to be replaced was not available. Program officials noted that there are substantive differences between legacy and F-35 operating and funding assumptions which complicate direct cost comparisons. 
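The size of the sustainment gap implied by the two per-flight-hour figures can be computed directly; both numbers are from this report, expressed in fiscal year 2012 dollars, and the comparison carries the caveats about differing operating assumptions noted above.

```python
# Sustainment affordability target vs. legacy cost, FY2012 $ per flight hour.
f35_ctol_target = 35_200   # Air Force CTOL affordability target
f16_current     = 22_500   # current F-16 cost

premium = 100 * (f35_ctol_target - f16_current) / f16_current
print(f"CTOL target exceeds current F-16 cost by ~{premium:.0f}% per flight hour")
# -> ~56% higher per flight hour
```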
The program has undertaken efforts to address this life-cycle affordability concern. However, until DOD can demonstrate that the program can perform against its cost projections, it will continue to be difficult for the United States and international partners to accurately set priorities, establish affordable procurement rates, retire aged aircraft, and establish supporting infrastructure.

Much of the instability in the JSF program has been and continues to be the result of highly concurrent development, testing, and production activities. During 2011, overall performance was mixed as the program achieved 6 of 11 primary objectives for the year. Developmental flight testing gained momentum and had tangible success, but it has a long road ahead, with testing of the most complex software and advanced capabilities still in the future. JSF software development is one of the largest and most complex projects in DOD history, providing essential capability, but the software has grown in size and complexity and is taking longer to complete than expected. Developing, testing, and integrating software, mission systems, and logistics systems are critical for demonstrating the operational effectiveness and suitability of a fully integrated, capable aircraft and pose significant technical risks moving forward. Until a fully integrated, capable aircraft is flight tested (planned to start in 2015), the program is still very susceptible to discovering costly design and technical problems after many aircraft have been fielded.

The JSF program achieved 6 of 11 primary objectives it established for 2011. Five of the objectives were specific test and training actions tied to contractual expectations and award fees, according to program officials. The other six objectives were associated with cost, schedule, contract negotiations, and sustainment. The program successfully met two important test objectives: the Marine Corps' STOVL variant accomplished sea trials, and the Navy's carrier variant completed static structural testing. Two other test objectives were not met: software was not released to flight test in time, and the carrier variant did not demonstrate shipboard suitability because of problems with the tail hook arrestment system. The program also successfully completed objectives related to sustainment design reviews, schedule data, manufacturing processes, and cost control, but did not meet a training deadline or complete contract negotiations. Table 2 summarizes the 2011 objectives and accomplishments.

Development flight testing gained momentum and met or exceeded most objectives in its modified test plan for 2011. The program accomplished 972 test flights in 2011, more than double the flights in 2010. Final deliveries of the remaining test aircraft were made in 2011 (with the exception of one carrier variant added in restructuring and expected in 2012), and five production aircraft have been made available to the test program. Flight test points accomplished in 2011 exceeded the plan overall, as shown in figure 5. CTOL flight test points achieved fell short of the plan due to operating limitations and aircraft reliability problems. The program successfully accomplished 65 catapult launches, but problems with the arresting hook prevented successful engagement with the cable during ground testing. Analysis of test results discovered tail hook design issues that have major consequences, according to DOD officials.
The tail hook point is being redesigned, and other aircraft structural modifications may also be required. The program must have fixes in place and deficiencies resolved in order to accomplish CV ship trials in late 2013. Since the carrier variant has just started initial carrier suitability tests, the proposed design changes will not be demonstrated until much later in developmental testing and could require significant structural changes to already-delivered aircraft. According to officials from the office of the Director, Operational Test and Evaluation (DOT&E), the program is also working to correct a number of other carrier variant performance problems, such as excessive nose gear oscillations during taxi operations, excessive landing gear retraction times, and overheating of the electro-hydrostatic actuator systems that power flight controls. The program has not yet determined if production aircraft will need to be modified to address these issues.

Air Force's Conventional Takeoff and Landing Variant: The JSF test team flew the planned number of CTOL flights in 2011 but achieved about 10 percent fewer flight sciences test points than planned. Aircraft operating limitations and inadequate instrumentation limited the ability to complete the planned number of test points. Contributing factors included deficiencies in the air vehicle's air data system as well as in-flight data indicating different structural loads than predicted. Aircraft reliability and parts shortages also affected the number of CTOL flight tests.

Marine Corps' Short Takeoff and Vertical Landing Variant: The STOVL variant performed better than expected in flight tests during 2011. It increased flight test rates and STOVL-specific mode testing, surpassing planned test point progress for the year. Following reliability problems and performance issues, the Secretary of Defense in January 2011 had placed the STOVL on "probation" for up to two years, citing technical issues unique to the variant that would add to the aircraft's cost and weight. In January 2012, the Secretary of Defense lifted the STOVL probation after one year, citing improved performance and completion of the initial sea trials as a basis for the decision. The Department concluded that STOVL development, test, and production maturity is now comparable to that of the other two variants. While several technical issues have been addressed and some potential solutions engineered, assessment of whether the deficiencies are resolved is ongoing and, in some cases, the results will not be known for years. According to the program office, two of the five specific problems cited are considered to be fixed, while the other three have temporary fixes in place. (See Appendix V, which provides a more detailed examination of the STOVL probation, the deficiencies addressed, and plans for correcting deficiencies.) DOT&E officials reported that significant work remains to verify and incorporate modifications to correct known STOVL deficiencies and prepare the system for operational use. Until the proposed technical solutions have been fully tested and demonstrated, it cannot be determined whether the technical problems have been resolved.

Even with the progress in 2011, most development flight testing, including the most challenging, still lies ahead. Through 2011, the flight test program had completed 21 percent of the nearly 60,000 planned flight test points estimated for the entire program.
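The 21 percent completion figure implies roughly how much flight testing remains, as the short sketch below shows; both inputs are from this statement, and the 60,000 figure is described there as approximate.

```python
# Flight test progress implied by the figures above (approximate).
planned_points_total = 60_000   # "nearly 60,000" planned test points
share_complete = 0.21           # 21 percent completed through 2011

points_done = planned_points_total * share_complete
points_remaining = planned_points_total - points_done
print(f"~{points_done:,.0f} points flown; ~{points_remaining:,.0f} remain")
# -> ~12,600 flown; ~47,400 remain
```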
Program officials reported that flight tests to date have largely demonstrated airworthiness, flying qualities, and initial speed, altitude, and maneuvering performance requirements. According to JSF test officials, the more complex testing, such as low altitude flight operations, weapons and mission systems integration, and high angle of attack, has yet to be done for any variant and may result in new discoveries of aircraft deficiencies. Initial development flight tests of a fully integrated, capable JSF aircraft to demonstrate full mission systems capabilities, weapons delivery, and autonomic logistics are not expected until 2015 at the earliest. These tests will be critical for verifying that the JSF aircraft will work as intended and for demonstrating that the design is not likely to need costly changes. Development flight testing in a production-representative test aircraft and in the operational flight environment planned for the JSF is important to reducing risk. This actual environment differs from what can be demonstrated in the laboratory and has historically revealed unexpected problems. For example, the F-22A fighter software worked as expected in the laboratory, but significant problems were identified in flight tests. These problems delayed testing and the delivery of a proven capability to the warfighter. Like other major weapon systems acquisitions, the JSF will be susceptible to discovering costly problems later in development when the more complex software and advanced capabilities are integrated and flight tested. With most development flight testing still to go, the program can expect more changes to aircraft design and continued alterations of manufacturing processes.

Initial dedicated operational testing of a fully integrated and capable JSF is scheduled to begin in 2017. Initial operational testing is important for evaluating the effectiveness and suitability of the JSF in an operationally realistic environment and is a prerequisite for the JSF full-rate production decision in 2019. The JSF operational test team assessed system readiness for initial operational testing and identified several outstanding risk items. The test team's operational assessment concluded that the JSF is not on track to meet operational effectiveness or operational suitability requirements. The test team's October 2011 report identified deficiencies in the helmet mounted display, night vision capability, and aircraft handling characteristics, as well as shortfalls in maneuvering performance. Test officials also reported an inadequate logistics system for deployments, excessive time for low observable repair and restoration, low reliability, and poor maintainability performance. The team's report noted that many of the concerns that drive the program's readiness for operational test and evaluation are also critical path items for meeting effectiveness and suitability requirements.

In its 2011 annual report, DOT&E reported many challenges for the JSF program due to the high level of concurrency of production, development, and test activities. Flight training efforts were delayed because of immature aircraft. Durability testing identified structural modifications needed for production aircraft to meet service life and operational requirements. Analysis of the bulkhead crack problem revealed numerous other life-limited parts on all three variants.
According to DOT&E’s report, the most significant of these deficiencies in terms of complexity, aircraft downtime, and difficulty in modification required for existing aircraft is the forward wing root rib which experienced cracking during CTOL durability testing. STOVL variant aircraft are also affected. Production aircraft in the first four lots (63 aircraft) will need the modification before these aircraft reach their forward root rib operating limits, which program officials identified as 574 flight hours for the CTOL and 750 hours for the STOVL. DOT&E also found that, although it is early in the program, current reliability and maintainability data indicate that more attention is needed in these areas to achieve an operationally suitable system. Its report also highlighted several discoveries which included deficiencies in the helmet mounted display, STOVL door and propulsion problems, limited progress in demonstrating mission systems capabilities, and challenges in managing weight growth. Software providing essential JSF capability has grown in size and complexity, and is taking longer to complete than expected. Late releases of software have delayed testing and training and added costs. Some capabilities have been deferred until later in development in order to maintain schedule. The lines of code necessary for the JSF’s capabilities have now grown to over 24 million—9.5 million on-board the aircraft. (By comparison, JSF has about 3 times more on-board software lines of code than the F-22A Raptor and 6 times more than the F/A-18 E/F Super Hornet.) This has added work and increased the overall complexity of the effort. The software on-board the aircraft and needed for operations has grown 37 percent since the critical design review in 2005. While software growth appears to be stabilizing, contractor officials report that almost half of the on-board software has yet to complete integration and test—typically the most challenging phase of software development. JSF software growth is not much different than other recent defense acquisitions, which have experienced from 30 to 100 percent growth in software code over time. However, the sheer number of lines of code for the JSF makes the growth a notable cost and schedule challenge. Figure 6 shows increased lines of code for both airborne and ground systems. JSF software capabilities are developed, integrated, tested, and delivered to aircraft in 5 increments or blocks. Software defects, low productivity, and concurrent development of successive blocks have created inefficiencies, taking longer to fix defects and delaying the demonstration of critical capabilities. Delays in developing, integrating, and releasing software to the test program have cascading effects hampering flight tests, training, and test lab accreditation. While progress has been made, a substantial amount of software work remains before the program can demonstrate full warfighting capability. Block 0.1, providing flight science capabilities for test aircraft, was released about six months late and block 0.5, providing basic flight systems, was almost two years late, due largely to integration problems. Status of the other 3 blocks follows: Block 1.0 provides initial training capability and was released to flight test three years late when compared to the 2006 plan. More recently, it began flight test three months late based on the new plan, and was delayed by defects, workload bottlenecks, and security approvals. 
Late delivery of block 1.0 to training resulted in the program missing one of its key goals for 2011. Block 1.0 was planned to complete testing and be delivered to training in 2011. Full block 1.0 flight testing was only 25 percent complete at that time, and fewer than half of the final block 1.0 capabilities (12 of 35) had met full contract verification requirements for aircraft delivery, according to officials. Block 2.0 provides initial warfighting capability, including weapons employment, electronic attack, and interoperability. Its full release to testing is now expected in late 2013, over three years later than planned in 2006. Development has fallen behind due to integration challenges and the reallocation of resources to fix block 1.0. As of December 2011, block 2.0 had completed only half of the planned schedule, leaving approximately 70 percent of integration work to complete. Block 3.0 provides the full capability required by the warfighter, including full sensor fusion and additional weapons. In its early stage, development and integration is slightly behind schedule, with 30 percent of initial block 3.0 having completed the development phase. These challenges will continue as the program develops, integrates, and tests the increasingly complex mission systems software work that lies ahead.

To maintain schedule, the program has deferred some capabilities to later blocks. For example, initial air-to-ground capabilities were deferred from block 1.0 to 2.0, and several data fusion elements moved from block 2.0 to 3.0. Deferring tasks to later phases of the development program adds more pressure and costs to future software management efforts. It also likely increases the probability of defects being realized later in the program, when the more complex capabilities in these later blocks are already expected to be a substantial technical challenge. Recently, some weapons were moved earlier in the plan, from block 3.0 to 2.0, to provide more combat capability in earlier production aircraft.

Because software is critical to the delivery of warfighter capabilities and presents complex cost, schedule, and performance challenges, we recommended in our April 2011 report that an independent review of software development, integration, and testing (similar to the review of manufacturing processes) be undertaken. An initial contractor study was recently completed that focused on mission systems' staffing, development, defects, and rework. Program officials are currently implementing several improvement initiatives and plan to broaden the assessment to off-board software development efforts, including logistics and training.

JSF's mission systems and logistics systems are critical to realizing the operational and support capabilities expected by the warfighter, but the hardware and software for these systems are immature and unproven at this time. For example, only 4 percent of mission systems requirements planned in system development and demonstration have been verified. Significant learning and development remains before the program can demonstrate mature mission systems software and hardware, not expected until block 3.0 is delivered in 2015. The program has experienced significant challenges developing and integrating mission systems software. Mission systems hardware has also experienced several technical challenges, including problems with the radar, integrated processor, communication and navigation equipment, and electronic warfare capabilities.
The helmet mounted display in particular continues to have significant technical deficiencies that make it less functional than legacy equipment. The display is integral to the mission systems architecture, to reducing pilot workload, and to the overall JSF concept of operations—it displays key aircraft performance information as well as tactical situational awareness and weapons employment information on the pilot's helmet visor, replacing conventional heads-up display systems. Helmet problems include integration of the night vision capability, display jitter, and latency (or delay) in transmitting sensor data. These shortfalls may lead to a helmet unable to fully meet warfighter requirements—unsuitable for flight tasks and weapon delivery, as well as creating an unmanageable pilot workload—and may place limitations on the JSF's operational environment, according to program officials. The program office is pursuing a dual path to compensate for the technical issues by developing a second, less capable helmet while trying to fix the first helmet design; this development effort will cost more than $80 million. The selected helmet will not be integrated into the baseline aircraft until 2014 or later, increasing the risks of a major system redesign, retrofits of already built aircraft, or changes in concepts of operation.

The Autonomic Logistics Information System (ALIS) is an integral part of the JSF system and serves as an information portal to JSF-unique and external systems, implements and automates logistics processes, and provides decision aids to reduce support resources such as manpower and spares. ALIS is a key technology aimed at improving and streamlining logistics and maintenance functions in order to reduce life-cycle costs. It is designed to be proactive, recognizing problems and initiating corrective responses automatically. The JSF test team's operational assessment report concluded that an early release model of ALIS was not mature, did not meet operational suitability requirements, and would require substantial improvements to achieve sortie generation rates and life-cycle cost requirements. In particular, the current configuration was not adequate for deployed operations: its current weight, environmental support, connectivity, and security requirements make it difficult to support detachments, operational testing, and forward operations, which are especially vital to Marine Corps plans. The report noted that there is no approved concept or design for this capability and no funding identified, and it stated a concern that there may be no formal solution prior to the Marine Corps declaring an initial operating capability. Operational testers also identified concerns about data and interoperability with service maintenance systems. Program officials have identified deployable ALIS as a development-funded effort structured to address the difficulties surrounding the deployment of the current ALIS suite of equipment. The formal solution is expected to be ready for fielding in 2015.

The program has not yet demonstrated a stable design and manufacturing process capable of efficient production. Engineering changes are persisting at relatively high rates, and additional changes will be needed as testing continues. Manufacturing processes and performance indicators show some progress, but performance on the first four low-rate initial production contracts has not been good. All four have experienced cost overruns and late aircraft deliveries.
In addition, the government is incurring substantial additional costs to retrofit produced aircraft to correct deficiencies discovered in testing. Until manufacturing processes are in control and engineering design changes resulting from information gained during developmental testing are reduced, there is risk of further cost growth. Actions the Department has taken to restructure the program have helped, but remaining concurrency between flight testing and production continues to put cost and schedule at risk (see figure 7). Even with the substantial reductions in near-term procurement quantities, DOD is still investing billions of dollars in hundreds of aircraft while flight testing has years to go. As was the experience with building the development test aircraft, manufacturing the production aircraft is costing more and taking longer than planned. Cost overruns and delivery slips indicate that manufacturing processes, worker learning, quality control, and supplier performance are not yet sufficiently mature to handle the volume of work scheduled. Cost overruns on the first four annual procurement contracts are currently projected to total about $1 billion (see table 3). According to program documentation, through the cost-sharing provisions in these contracts, the government's share of the total overrun is about $672 million. On average, the government is paying an additional $11 million for each of the 63 aircraft under contract (58 are U.S. aircraft and 5 are for international partners); this figure is checked below. There is risk of additional cost overruns because all work is not completed. Defense officials reduced the buy quantity in the fifth annual procurement contract to help fund these cost overruns and additional retrofit costs to fix deficiencies discovered in testing. While Lockheed Martin, the prime contractor, is demonstrating somewhat better throughput capacity and showing improved performance indicators, the lingering effects of critical parts shortages, out-of-station work, and quality issues continue to be key cost and schedule drivers on the first four production lots. Design modifications to address deficiencies discovered in testing, incorporation of bulkhead and wing process improvements, and reintroduction of the carrier variant into the manufacturing line further impacted production during 2011. Lockheed had expected to deliver 31 procurement aircraft by the end of 2011 but delivered only nine aircraft. Each was delivered more than 1 year late. The manufacturing effort has a long way to go, with thousands of aircraft planned for production over the next 25 years. Through fiscal year 2011, only 6 percent of the total procurement funding needed to complete the JSF program had been appropriated. As the rate of production is expected to increase substantially starting in 2015, it is vital that the contractor achieve an efficient manufacturing process. Several positive accomplishments may spur improved future performance. Lockheed implemented an improved and comprehensive integrated master schedule, loaded the new program data from restructuring, and completed a schedule risk assessment, as we recommended several years ago. Also, Defense Contract Management Agency (DCMA) and JSF program officials believe that Lockheed Martin has made a concerted effort to improve its earned value management system (EVMS) in order to comply with federal standards.
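The per-aircraft figure above follows from simple division. As a check, assuming the government's approximately $672 million share of the overruns is spread evenly across the 63 aircraft under contract:

\[ \frac{\$672\ \text{million}}{63\ \text{aircraft}} \approx \$10.7\ \text{million per aircraft}, \]

which is consistent with the roughly $11 million per aircraft cited above.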
Initial reviews of the new procedures, tools, and training indicate that the company is on track to have its revised processes approved by DCMA this year. Pratt & Whitney, the engine manufacturer, has delivered 54 production engines and 21 lift fans as of early May 2012. Like the aircraft system, the propulsion system is still under development, and the program is working to complete testing and fix deficiencies while concurrently delivering engines under the initial procurement contracts. The program office's estimated cost for system development and demonstration of the engine has increased by 73 percent, from $4.8 billion to about $8.4 billion, since the start of development. Engine deliveries continue to miss expected contract due dates but have still met aircraft need dates. Supplier performance problems and design changes are driving late engine deliveries. Lift fan system components and processes are driving cost and schedule problems. Going forward, effectively managing the expanding global supplier network—which consists of hundreds of suppliers around the world—is fundamental to meeting production rate and throughput expectations. The 2009 report of DOD's Independent Manufacturing Review Team identified global supply chain management as the most critical challenge for meeting production expectations. The cooperative aspect of the supply chain provides both benefits and challenges. The international program structure is based on a complex set of relationships involving both government and industry from the United States and eight other countries. Overseas suppliers are playing a major and increasing role in JSF manufacturing and logistics. For example, center fuselage and wings will be manufactured by Turkish and Italian suppliers, respectively, as second sources. In addition to ongoing supplier challenges—parts shortages, failed parts, and late deliveries—incorporating international suppliers presents other challenges. The program must deal with exchange rate fluctuations, disagreements over work shares, and technology transfer concerns. To date, suppliers, most of them U.S.-based, have sometimes struggled to develop critical and complex parts, while others have had problems with limited production capacity. Lockheed Martin has implemented a stricter supplier assessment program to help manage supplier performance. We and some defense offices cautioned the Department years ago about the risks posed by the extremely high degree of concurrency, or overlap, among the JSF development, testing, and production activities. In the first four production lots, the U.S. government will incur an estimated $373 million in retrofit costs on already-built aircraft to correct deficiencies discovered in development testing. This is in addition to the $672 million for the government's share of contract cost overruns. The program office projects additional retrofit costs due to concurrency through the 10th low-rate initial production contract, but at decreasing amounts. Questions about who will pay for additional retrofit costs under the fixed-price contract—the contractor or the government—and how much, delayed contract negotiations on the fifth lot.
While the contract is not yet definitized, a December 2011 undefinitized contract action established that the government and the contractor would share equally in known concurrency costs and that any newly discovered concurrency changes will be added to the contract and will cause a renegotiation of the target cost, but with no profit, according to program officials. Defense officials have long acknowledged the substantial concurrency built into the JSF acquisition strategy, but until recently stated that the risks were manageable. However, a recent high-level departmental review of JSF concurrency determined that the program is continuing to find problems at a rate more typical of early design experience on previous aircraft development programs, calling into question the assumed design maturity that supported the highly concurrent acquisition strategy. DOD's November 2011 report concluded that the "team assesses the current confidence in the design maturity of the F-35 to be lower than one would expect given the quantity of LRIP aircraft procurements planned and the potential cost of reworking these aircraft as new test discoveries are made. This lack of confidence, in conjunction with the concurrency driven consequences of the required fixes, supports serious reconsideration of procurement and production planning." The review identified substantial risk of needed modifications to already produced aircraft as flight testing enters more strenuous test activities. Already, as a result of problems found in less strenuous basic airworthiness testing, critical design modifications are being fed back through the production line. For example, the program will be cutting in aircraft modifications to address bulkhead cracks discovered during airframe ground testing and STOVL auxiliary inlet door durability issues. More critical test discoveries are likely as the program moves into the more demanding phases of testing. We note also that concurrency risks are not limited to incurring extra production costs; they ripple throughout the JSF program, slowing aircraft deliveries, decreasing availability of aircraft, delaying pilot and maintainer training, and hindering the stand-up of base maintenance and supply activities, among other impacts. Producing aircraft before testing sufficiently demonstrates that the design is mature increases the likelihood that more aircraft will need retrofits for future design changes, which drives cost growth, schedule delays, and manufacturing inefficiencies. Design changes needed in one JSF variant could also impact the other two variants, reducing the efficiencies from common parts and manufacturing processes across the three variants that are necessary to lower production and operational costs. While the JSF program's engineering change traffic—the monthly volume of changes made to engineering drawings—is declining, it is still higher than expected for a program entering its sixth year of production. The total number of engineering drawings continues to grow due to design changes, discoveries during ground and flight testing, and other revisions to drawings. Some level of design change is expected during the production cycle of any new and highly technical product, but excessive changes raise questions about the stability of the JSF's design and its readiness for higher levels of production. Figure 8 tracks design changes over time and shows that changes are expected to persist at an elevated pace through 2019.
A weapon system’s reliability growth rate is a good indicator of design maturity. Reliability is a function of specific design characteristics. A weapon system is considered reliable when it can perform over a specified period of time without failure, degradation, or need of repair. During system acquisition, reliability growth improvements should occur over time through a process of testing, analyzing, and fixing deficiencies through design changes or manufacturing process improvements. Once fielded, there are limited opportunities to improve a system’s reliability without costly redesign and retrofit. A system’s reliability rate directly affects its life cycle operating and support costs. We have reported in the past that it is important to demonstrate that the system reliability is on track to meet goals before production begins as changes after production commences can be inefficient and costly. According to program office data, the CTOL and STOVL variants are behind expected reliability growth plans at this point in the program. Figure 9 depicts progress of each variant in demonstrating mean flying hours between failures as reported by the program office in October 2011 and compares them to 2010 rates, the expectation at this point in time, and the ultimate goal at maturity. As of October 2011, reliability growth plans called for the STOVL to have achieved at least 2.2 flying hours between failures and the CTOL at least 3.7 hours by this point in the program. The STOVL is significantly behind plans, achieving about 0.5 hours between failures, or less than 25 percent of the plan. CTOL variant has demonstrated 2.6 hours between failures, about 70 percent of the rate expected at this point in time. The carrier variant is slightly ahead of its plan; however, it has flown many fewer flights and hours than the other variants. JSF officials said that reliability rates are tracking below expectations primarily because identified fixes to correct deficiencies are not being implemented and tested in a timely manner. Officials also said the growth rate is difficult to track and to confidently project expected performance at maturity because of insufficient data from the relatively small number of flight hours flown. Based on the initial low reliability demonstrated thus far, the Director of Operational Test and Evaluation reported that the JSF has a significant challenge ahead to provide sufficient reliability growth to meet the operational requirement. Restructuring actions by the Department since early 2010 have provided the JSF program with more achievable development and production goals, and has reduced, but not eliminated, risks of additional retrofit costs due to concurrency in current and future lots. The Department has progressively lowered the production ramp-up rate and cut near term procurement quantities; fewer aircraft procured while testing is still ongoing lowers the risk of having to modify already produced aircraft. However, even with the most recent reductions in quantities, the program will still procure a large number of aircraft before system development is complete and flight testing confirms that the aircraft design and performance meets warfighter requirements. Table 4 shows the current plan that will procure 365 aircraft for $69 billion before the end of planned developmental flight tests. The JSF remains the critical centerpiece of DOD’s long-term tactical aircraft portfolio. 
System development of the aircraft and engine, ongoing for over a decade, continues to experience significant challenges. The program's strategic framework, laden with concurrency, has proved to be problematic and, ultimately, a very costly approach. DOD has lately acknowledged the undue risks from concurrency and accordingly reduced near-term procurement and devoted more time and resources to development and testing. These prudent actions have reduced, but not eliminated, concurrency risks of future cost growth from test discoveries driving changes to design and manufacturing processes. Substantial concurrency costs are expected to continue for several more years. Concurrency risks are not limited to incurring extra modification costs; they ripple throughout the JSF program, slowing aircraft deliveries, delaying release of software to testing, delaying pilot and maintainer training, and hindering the stand-up of base maintenance and supply activities, among other impacts. Extensive restructuring actions over the last 2-plus years have placed the JSF program on a more achievable course, albeit a lengthier and more expensive one. At the same time, the near-constant churn, or change, in cost, schedule, and performance expectations has hampered oversight and insight into the program, in particular the ability to firmly assess progress and prospects for future success. The JSF program now needs to demonstrate that it can effectively perform against cost and schedule targets in the new baseline and deliver on its promises so that the warfighter can confidently establish basing plans, retire aging legacy aircraft, and acquire a support infrastructure. Addressing affordability risks will be critical in determining how many aircraft the United States and international partners can ultimately acquire and sustain over the life cycle. As currently structured, the program will require unprecedented levels of procurement funding during a period of more constrained defense budget expectations. Aircraft deferrals, risky funding assumptions, and future budget constraints make it prudent to evaluate potential impacts from reduced levels of funding. If funding demands cannot be fully met, it would be important for congressional and defense decisionmakers to understand the programmatic and cost impacts of lower levels of funding; however, DOD officials have not thoroughly analyzed JSF impacts should funding expectations be unmet. Going forward, it will be imperative to bring stability to the program and provide a firm understanding of near- and far-term financial requirements so that all parties—the Congress, the Defense Department, and international partners—can reasonably project future budgets, set priorities, and make informed business-based decisions amid a tough fiscal environment. Substantial cost overruns and delivery delays on the first four low-rate initial production contracts indicate a need to improve inefficient manufacturing and supply processes before ramping up production to the rates expected. While some manufacturing and supply performance indicators are improving, parts shortages, supplier quality and performance problems, and manufacturing workarounds still need to be addressed. DOD's Independent Manufacturing Review Team identified global supply chain management as the most critical challenge for meeting production expectations.
Effectively managing the expanding network of global suppliers and improving the supply chain will be key to improving cost and schedule outcomes, increasing manufacturing throughput, and enabling higher production rates. Substantial quantities of JSF aircraft have been deferred to future years, and funding requirements now average $12.5 billion annually through 2037. Aircraft deferrals, risky funding assumptions, and future budget constraints make it prudent to evaluate potential impacts from reduced levels of funding. Therefore, we recommend that the Secretary of Defense direct the Director of Cost Assessment and Program Evaluation to perform an independent analysis of the impact that lower annual funding levels would have on the program's cost and schedule. This sensitivity analysis should determine the impact of funding on aircraft deliveries, unit costs, and total tactical air force structure resulting from at least three different assumed annual funding profiles, all lower than the current funding projection. Finally, because of the complexity and criticality of the global supply chain, which has already experienced some problems, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics direct the JSF program office to conduct a comprehensive assessment of the supply chain and transportation network to ensure it is organized, secure, and capable of producing and delivering parts in the quantities and times needed to effectively and efficiently build and sustain over 3,000 aircraft for the U.S. and international partners. This assessment should summarize opportunities as well as challenges, augmenting and building upon the earlier efforts of the Independent Manufacturing Review Team and the recent sustainment study. DOD provided us written comments on a draft of this report, which are reprinted in appendix II. DOD partially concurred with our first recommendation and fully concurred with our second. Officials also provided technical comments that we incorporated in the final report as appropriate. DOD partially concurred with our recommendation to perform a sensitivity analysis of the impact lower annual funding levels would have on JSF cost and schedule and the total tactical air force structure. The Department stated that the Director of Cost Assessment and Program Evaluation regularly performs this kind of analysis as part of the annual budget review process. However, the Department emphasized that such analysis is pre-decisional and stated that it did not believe sensitivity analyses based on notional funding levels should be published. We agree that this budget analysis has value and that it need not be made public; however, we believe its usefulness extends beyond the current budget period. Increasingly tough budget decisions amid a likely declining top-line defense budget are in the forecast, and this kind of sensitivity analysis of the impact of potential lower funding levels could better inform defense leadership and the Congress about the longer-term impacts on JSF program outcomes and force structure implications. DOD concurred with our recommendation to comprehensively assess the global supply chain and transportation network. The written response indicated that annual production readiness reviews undertaken by the contractor and the JSF program office were sufficient and better structured to manage issues over several years than a one-time, large-scale study.
We agree that annual targeted reviews are important and conducive to good near-term management, but we continue to believe that these should be supplemented by a longer-term, more forward-looking study along the lines of the Independent Manufacturing Review Team's effort, as we have recommended. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in Appendix VI. To determine the Joint Strike Fighter (JSF) program's progress in meeting cost, schedule, and performance goals, we received briefings by program and contractor officials and reviewed financial management reports, budget documents, annual Selected Acquisition Reports, monthly status reports, performance indicators, and other data. We identified changes in cost and schedule and obtained officials' reasons for these changes. We interviewed officials from the JSF program, contractors, and the Department of Defense (DOD) to obtain their views on progress, ongoing concerns and actions taken to address them, and future plans to complete JSF development and accelerate procurement. At the time of our review, the most recent Selected Acquisition Report available was dated December 31, 2011. Throughout most of our review, DOD was in the process of preparing the new acquisition program baseline, issued in March 2012, which reflected updated cost and schedule projections. In assessing program cost estimates, we evaluated the cost estimates in the Selected Acquisition Reports since the program's inception, reviewed the recent independent cost estimate completed by DOD's Cost Assessment and Program Evaluation (CAPE) office, and analyzed President's Budget data. We interviewed JSF program office officials, members of CAPE, the prime and engine contractors, and Defense Contract Management Agency officials to understand the methodology, data, and approach used in developing cost estimates and monitoring cost performance. To assess plans, progress, and risks in test activities, we examined program documents and interviewed DOD, program office, and contractor officials about current test plans and progress. To assess progress toward test plans, we compared the number of test points accomplished as of December 2011 to the program's 2011 plan for test point progress. We also discussed related software development, test, and integration with Defense Contract Management Agency (DCMA) and Director, Operational Test and Evaluation (DOT&E) officials and reviewed DOT&E annual assessments of the JSF program, the Joint Strike Fighter Operational Test Team Report, and the F-35 Joint Strike Fighter Concurrency Quick Look Review. To assess the program's plans and risk in manufacturing and its capacity to accelerate production, we analyzed manufacturing cost and work performance data to assess progress against plans. We reviewed data and briefings provided by the program and DCMA to assess supplier performance and the ability to support accelerated production in the near term.
We also determined reasons for manufacturing delays, discussed program and contractor plans for improvement, and projected the impact on development and operational tests. We interviewed contractor and DCMA officials to discuss the earned value management system but did not conduct any analysis, since the system had not yet been re-validated by DCMA. In performing our work, we obtained information and interviewed officials from the JSF Joint Program Office, Arlington, Virginia; Defense Contract Management Agency, Fort Worth, Texas; Lockheed Martin Aeronautics, Fort Worth, Texas; Defense Contract Management Agency, East Hartford, Connecticut; and Pratt & Whitney, Middletown, Connecticut. We also met with and obtained data from the following offices of the Secretary of Defense in Washington, D.C.: Director, Operational Test and Evaluation; Cost Assessment and Program Evaluation; and Systems Engineering. To assess the reliability of DOD and contractor data, we reviewed the sources and uses of the data, evaluated existing information about the data, and interviewed agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Program event: Start of system development and demonstration approved.
Primary GAO message: Critical technologies needed for key aircraft performance elements not mature. Program should delay start of system development until critical technologies mature to acceptable levels.
DOD response and actions: DOD did not delay start of system development and demonstration, stating that technologies were at acceptable maturity levels and that it would manage risks in development.

Program event: The program undergoes a re-plan to address higher than expected design weight, which added $7 billion and 18 months to the development schedule.
Primary GAO message: We recommended that the program reduce risks and establish an executable, knowledge-based business case with an evolutionary acquisition strategy.
DOD response and actions: DOD partially concurred but did not adjust its strategy, believing that its approach balanced cost, schedule, and technical risk.

Program event: Program sets in motion a plan to enter production in 2007, shortly after first flight of the non-production representative aircraft.
Primary GAO message: The program plans to enter production with less than 1 percent of testing complete. We recommended the program delay investing in production until flight testing shows that the JSF performs as expected.
DOD response and actions: DOD partially concurred but did not delay the start of production because it believed the risk level was appropriate. Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp-up of production.

Primary GAO message: Progress was being made, but concerns remained about undue overlap in testing and production. We recommended limiting annual production quantities to 24 a year until flying qualities are demonstrated.
DOD response and actions: DOD did not concur, believing that the program had an acceptable level of concurrency and an appropriate acquisition strategy.
Program event: DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources.
Primary GAO message: We believed the new plan actually increased risks, and we recommended that DOD revise the plan to address concerns about testing, use of management reserves, and manufacturing. We determined that the cost estimate was not reliable and that a new cost estimate and a schedule risk assessment were needed.
DOD response and actions: DOD did not revise the risk plan or restore testing resources, stating that it would monitor the new plan and adjust it if necessary. Consistent with a report recommendation, a new cost estimate was eventually prepared, but DOD refused to do a risk and uncertainty analysis that we felt was important to provide a range estimate of potential outcomes.

Program event: The program increased the cost estimate and added a year to development but accelerated the production ramp-up. An independent DOD cost estimate (JET I) projected even higher costs and further delays.
Primary GAO message: Because of development problems, we stated that moving forward with an accelerated procurement plan and the use of cost-reimbursement contracts was very risky. We recommended the program report on the risks and mitigation strategy for this approach.
DOD response and actions: DOD agreed to report its contracting strategy and plans to Congress. In response to our report recommendation, DOD subsequently agreed to do a schedule risk analysis. The program reported completing the first schedule risk assessment in summer 2011, with plans to update it about every 6 months.

Program event: In February 2010, the Department announced a major restructuring of the JSF program, including reduced procurement and a planned move to fixed-price contracts. The program was restructured to reflect findings of the recent independent cost team (JET II) and the independent manufacturing review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased.
Primary GAO message: Because of additional costs and schedule delays, the program's ability to meet warfighter requirements on time is at risk. We recommended the program complete a full comprehensive cost estimate and assess warfighter and IOC requirements. We suggested that Congress require DOD to prepare a "system maturity matrix"—a tool for tying annual procurement requests to demonstrated progress.
DOD response and actions: DOD continued restructuring actions and announced plans to increase test resources and lower the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. As we projected, cost increases later resulted in a Nunn-McCurdy breach. Military services are currently reviewing capability requirements, as we recommended.

Program event: Restructuring continued following the Nunn-McCurdy certification, with additional development cost increases, schedule growth, a further reduction in near-term procurement quantities, and a decreased rate of increase for future production. The Secretary of Defense placed the STOVL variant on a 2-year probation; decoupled STOVL from the other variants in the testing program because of lingering technical issues; and reduced STOVL production plans for fiscal years 2011 to 2013.
Primary GAO message: The restructuring actions are positive and, if implemented properly, should lead to more achievable and predictable outcomes. Concurrency of development, test, and production is substantial and poses risk to the program.
We recommended the program maintain funding levels as budgeted in the FY 2012-2016 future years' defense plan; establish criteria for STOVL probation; and conduct an independent review of software development, integration, and test processes.
DOD response and actions: DOD concurred with all three recommendations. In January 2012, the Secretary of Defense lifted STOVL probation, citing improved performance. Subsequently, the Secretary further reduced procurement quantities, decreasing funding requirements through 2016. The initial independent software assessment began in September 2011, and ongoing reviews are planned through 2012.

In January 2011, the Secretary of Defense placed the short takeoff and vertical landing (STOVL) aircraft on "probation" for 2 years, citing technical issues unique to the variant that would add to the aircraft's cost and weight. The probation limited the U.S. STOVL procurement to three aircraft in fiscal year 2011 and six aircraft in fiscal year 2012 and decoupled STOVL testing from CV and CTOL testing so as not to delay those variants. The 2-year probation was expected to provide enough time to address STOVL-specific technical issues, engineer solutions, and assess their impact. It was presumed that at the end of probation an informed decision could be made about whether and how to proceed with STOVL, but no specific exit criteria were established. In our 2011 report, we recommended that the program establish criteria for the STOVL probation period and take additional steps to sustain individual attention on STOVL-specific issues to ensure cost and schedule milestones were achieved in order to deliver required warfighter capabilities. In its report to Congress on the STOVL probationary period, required by section 148 of the National Defense Authorization Act for Fiscal Year 2012, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics indicated that the STOVL variant received greater scrutiny than the other two variants. According to the Department, interim solutions are in place to mitigate the lingering technical issues with the STOVL, and permanent solutions are in varying stages of development or implementation. While the probation period did not include specific criteria, the reasons given for probation were to address technical issues, engineer solutions, and assess impact, and it was expected to take 2 years to do so. Although we note that several technical issues have been addressed and some potential solutions engineered, assessing whether the deficiencies are resolved is ongoing and, in some cases, will not be known for years. Table 5 provides details on the STOVL technical problems identified at the onset of probation, the efforts to resolve the problems, and timeframes for implementing fixes. According to the program, of the five specific problems cited, two are considered to be fixed (bulkhead cracks and air inlet door loads), while the other three have temporary fixes in place. Director, Operational Test and Evaluation (DOT&E) officials reported that significant work remains to verify and incorporate modifications to correct known STOVL deficiencies and prepare the system for operational use. Until the proposed technical solutions have been fully tested and demonstrated, it cannot be determined whether the technical problems have been resolved. In addition to the contact name above, the following staff members made key contributions to this report: Bruce Fairbairn, Assistant Director; Charlie Shivers; Sean Merrill; LeAnna Parkey; Dr. W. Kendal Roberts; Laura Greifner; and Matt Lea.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. Washington, D.C.: March 29, 2012.
Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012.
Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD's Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011.
Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011.
Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.
Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011.
Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010.
Joint Strike Fighter: Assessment of DOD's Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010.
Tactical Aircraft: DOD's Ability to Meet Future Requirements is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.
Joint Strike Fighter: Significant Challenges and Decisions Ahead. GAO-10-478T. Washington, D.C.: March 24, 2010.
Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010.
Joint Strike Fighter: Significant Challenges Remain as DOD Restructures Program. GAO-10-520T. Washington, D.C.: March 11, 2010.
Joint Strike Fighter: Strong Risk Management Essential as Program Enters Most Challenging Phase. GAO-09-711T. Washington, D.C.: May 20, 2009.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009.
Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government's Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009.
Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.
Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008.
Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008.
Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007.
Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007.
Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007.
Tactical Aircraft: DOD’s Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006. Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006. Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C: April 6, 2005. Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005. | The F-35 Lightning II, also known as the Joint Strike Fighter (JSF), is the Department of Defenses (DOD) most costly and ambitious aircraft acquisition, seeking to simultaneously develop and field three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The JSF is critical to DODs long-term recapitalization plans to replace hundreds of legacy aircraft. Total U.S. investment is now projected at nearly $400 billion to develop and acquire 2,457 aircraft through 2037 and will require a long-term, sustained funding commitment. The JSF has been extensively restructured over the last 2 years to address relatively poor cost, schedule, and performance outcomes. This report, prepared in response to the National Defense Authorization Act for Fiscal Year 2010, addresses (1) JSF program cost and schedule changes and affordability issues; (2) performance objectives, testing results, and technical risks; and (3) contract costs, concurrency impacts, and manufacturing. GAOs work included analyses of a wide range of program documents and interviews with defense and contractor officials. Joint Strike Fighter restructuring continued throughout 2011 and into 2012, adding to cost and schedule. The new program baseline projects total acquisition costs of $395.7 billion, an increase of $117.2 billion (42 percent) from the prior 2007 baseline. Full rate production is now planned for 2019, a delay of 6 years from the 2007 baseline. Unit costs per aircraft have doubled since start of development in 2001. Critical dates for delivering warfighter requirements remain unsettled because of program uncertainties. While the total number of aircraft DOD plans to buy has not changed, it has for 3 straight years reduced near-term procurement quantities, deferring aircraft and costs to future years. Since 2002, the total quantity through 2017 has been reduced by three-fourths, from 1,591 to 365. Affordability is a key challengeannual acquisition funding needs average about $12.5 billion through 2037 and life-cycle operating and support costs are estimated at $1.1 trillion. DOD has not thoroughly analyzed program impacts should funding expectations be unmet. Overall performance in 2011 was mixed as the program achieved 6 of 11 important objectives. 
Developmental flight testing gained momentum and is now about 21 percent complete, with the most challenging tasks still ahead. Performance of the short takeoff and vertical landing variant improved this year, and its probation period to fix deficiencies was ended after 1 year, with several fixes temporary and untested. Developing and integrating the more than 24 million lines of software code continues to be of concern. Late software releases and concurrent work on multiple software blocks have delayed testing and training. Development of critical mission systems providing core combat capabilities remains behind schedule and risky. To date, only 4 percent of the mission systems required for full capability have been verified. Deficiencies with the helmet mounted display, integral to mission systems functionality and concepts of operation, are most problematic. The autonomic logistics information system, integral technology for improving aircraft availability and lowering support costs, is not fully developed. Most of the instability in the program has been and continues to be the result of highly concurrent development, testing, and production activities. Cost overruns on the first four annual procurement contracts total more than $1 billion, and aircraft deliveries are on average more than 1 year late. Program officials said the government's share of the cost growth is $672 million; this adds about $11 million to the price of each of the 63 aircraft under those contracts. Effectively managing the expanding network of global suppliers will be key to improving program outcomes, increasing manufacturing throughput, and enabling higher production rates. In addition to contract overruns, concurrency costs of at least $373 million have been incurred on production aircraft to correct deficiencies found in testing. The manufacturing process is still absorbing a higher than expected number of engineering changes resulting from flight testing, changes that are expected to persist at elevated levels into 2019, making it difficult to achieve efficient production rates. More design and manufacturing changes are expected as testing continues, bringing risks of more contract overruns and concurrency costs. Even with the substantial reductions in near-term production quantities, DOD still plans to procure 365 aircraft for $69 billion before developmental flight tests are completed. GAO recommends that (1) DOD analyze cost and program impacts from potentially reduced future funding levels and (2) assess the capability and challenges facing the JSF's global supply chain. DOD concurred with the second recommendation and agreed with the value of the first, but believed its annual budget efforts are sufficient. GAO maintains that more robust data are needed and could be useful to congressional deliberations.
MIPPA defines ADI services to include diagnostic CT, MRI, and NM, including positron emission tomography (PET). CT is an imaging modality that uses ionizing radiation and computers to produce cross-sectional images of internal organs and body structures. MRI is an imaging modality that uses powerful magnets, radio waves, and computers to create cross-sectional images of internal body tissues. NM is the use of radioactive materials in conjunction with an imaging modality to produce images that show both structure and function within the body. During an NM service, such as a PET scan, a patient is administered a small amount of radioactive substance, called a radiopharmaceutical or radiotracer, which is subsequently tracked by a radiation detector outside the body to render time-lapse images of the radioactive material as it moves through the body. Imaging equipment that uses ionizing radiation—such as CT and NM—poses greater potential short- and long-term health risks to patients than other imaging modalities, such as ultrasound. This is because ionizing radiation has enough energy to potentially damage DNA and thus increase a person's lifetime risk of developing cancer. In addition, exposure to very high doses of this radiation can cause short-term injuries, such as burns or hair loss. Each of the modalities using ionizing radiation uses different amounts of such radiation. For example, conventional X-ray imaging, in which X-rays are projected through a patient's body to produce two-dimensional pictures of organs and tissue, uses relatively low amounts of radiation in order to render a diagnostic-quality radiographic image. Because CT and NM services can involve repeated or extended exposure to ionizing radiation, they are associated with the administration of higher radiation doses than conventional X-ray imaging systems. In its 2010 initiative to reduce unnecessary radiation, FDA reported that the effective dose from a CT is roughly equivalent to 100 to 800 chest X-rays, whereas an NM service is equivalent to 10 to 2,050 chest X-rays. Although higher radiation doses can produce higher-resolution images, FDA advises that an optimal radiation dose is one that is as low as reasonably achievable while maintaining sufficient image quality to meet the clinical need. Although MRIs do not use ionizing radiation, they pose other potential dangers; for example, magnetic fields from the MRI unit can result in a "projectile effect," in which magnetic material, such as the metal in oxygen cylinders or wheelchairs, can be pulled suddenly and—often violently—toward the imaging equipment, at times while a patient lies in the center of the magnet and medical personnel are attending to the patient. MIPPA requires the establishment of procedures to ensure that accrediting organizations include standards specific to each imaging modality for ADI suppliers in the following five areas: (1) qualifications of medical personnel who are not physicians and who furnish the technical component of ADI services; (2) qualifications and responsibilities of medical directors and supervising physicians; (3) procedures to ensure that equipment used in furnishing the technical component of ADI services meets performance specifications; (4) procedures to ensure the safety of beneficiaries and staff; and (5) establishment and maintenance of a quality-assurance and quality-control program that ensures the reliability, clarity, and accuracy of the technical quality of diagnostic images produced by suppliers.
MIPPA accreditation applies only to suppliers paid under the Medicare physician fee schedule that provide the technical component of ADI services. Suppliers paid under the physician fee schedule include physician offices and independent diagnostic testing facilities, which are independent of a hospital or physician office and provide only diagnostic outpatient services. MIPPA accreditation does not apply to the technical component of ADI services provided in Medicare settings not paid under the physician fee schedule, such as hospital inpatient or outpatient departments. To become accredited, ADI suppliers must first select one of the three CMS-designated organizations and pay the organization an accreditation fee. Among other things, CMS requires accrediting organizations to evaluate ADI suppliers during the initial application regarding compliance with MIPPA requirements—such as qualifications of personnel—as well as during mid-cycle audit procedures to ensure suppliers maintain compliance for the duration of the accreditation cycle, which is a 3-year period. ACR and IAC primarily grant initial accreditation through an online application and review of suppliers' documents, while TJC uses an online application but also conducts an on-site visit for each supplier prior to granting accreditation. Information about the three accrediting organizations that CMS has designated for ADI suppliers—ACR, IAC, and TJC—follows in table 1. CMS has several responsibilities to ensure the quality of ADI services paid under Medicare's physician fee schedule. In addition to selecting accrediting organizations, CMS is responsible for ensuring that Medicare payment is made only to ADI suppliers accredited by a CMS-approved accrediting organization. MIPPA requires CMS to oversee the accrediting organizations and authorizes CMS to modify the list of selected accrediting organizations, if necessary. Federal regulations specify that CMS may conduct "validation audits" of accredited ADI suppliers and provide for the withdrawal of CMS approval of an accrediting organization at any time if CMS determines that the accrediting organization no longer adequately ensures that ADI suppliers meet or exceed Medicare requirements. In addition, accrediting organizations are required to report serious care problems that pose immediate jeopardy to a beneficiary or to the general public to CMS within 2 business days of identifying such problems. CMS also has ongoing requirements for accrediting organizations; among other things, accrediting organizations are responsible for using mid-cycle audit procedures, such as unannounced site visits, to ensure that accredited suppliers maintain compliance with MIPPA's requirements for the duration of the accreditation cycle. MQSA, as amended by the Mammography Quality Standards Reauthorization Acts of 1998 and 2004, established national quality standards for mammography to help ensure the high quality of images and image interpretation that mammography facilities produce.
Under MQSA, FDA—acting on behalf of the Department of Health and Human Services (HHS)—has several responsibilities to ensure the quality of mammography: establishing quality standards for mammography equipment, personnel, and practices; ensuring that all mammography facilities are accredited by an FDA-approved accrediting body and have obtained a certificate permitting them to provide mammography services from FDA or an FDA-approved certification agency; ensuring that all mammography equipment is evaluated at least annually by a qualified medical physicist and that all mammography facilities receive an annual compliance inspection from an FDA-approved inspector; and performing annual evaluations of the accreditation bodies and certification agencies. CMS did not establish minimum national standards for ADI accreditation, and instead required each accrediting organization to establish its own specific standards for the quality and safety of ADI services. In 2009, CMS solicited applications from accrediting organizations and outlined the information that needed to be furnished by each organization to be considered for approval. As part of its application requirements, CMS adopted the broad MIPPA criteria for ADI accreditation and required each accrediting organization to provide a detailed description of how its standards satisfy these requirements. For example, CMS required each accrediting organization to have standards regarding qualifications for suppliers' technologists and medical directors, but allowed the accrediting organizations to establish their own minimum certification, experience, and continuing education requirements. In addition, CMS required accrediting organizations to provide documentation of other requirements, such as detailed information about the individuals who perform evaluations for accrediting organizations and a description of the organization's data management and analysis capabilities in support of its surveys and accreditation decisions. CMS received three applications in response to its solicitation, and in January 2010 the agency reported that an internal professional panel had reviewed the applications and determined that all three organizations provided sufficient evidence of their ability to accredit ADI suppliers on the basis of CMS's requirements. CMS drafted more specific standards for the accreditation of ADI suppliers in 2010, but did not publish these standards or propose adopting them. A CMS official told us that the agency developed the draft standards in conjunction with FDA and incorporated comments from each of the accrediting organizations. This official also told us that the draft standards were not put through the rulemaking process because the agency was focused on developing regulations for the Patient Protection and Affordable Care Act, which was enacted in 2010. As of January 2013, these CMS standards remained in draft form, and officials told us that the agency did not have a specific timeline for publishing the standards in a proposed rule. Representatives from the three approved accrediting organizations—as well as 9 of the 11 organizations with imaging expertise from which we obtained information—recommended that CMS adopt minimum national standards, which would help to ensure that all accredited ADI suppliers meet a minimum level of quality and safety.
In addition, we have reported that the quality of mammography services improved under MQSA primarily as a result of setting national quality-assurance standards—such as those related to personnel qualifications and clinical image quality—and establishing enforcement mechanisms to ensure that the standards are met by all mammography providers. (The list of recommended standards was derived from recommendations obtained from at least 5 of the 11 organizations with imaging expertise about the specific types of standards that they would expect accrediting organizations to use.) One of the 11 organizations noted the variation in state requirements for training and certification of technologists, and lack of training is widely recognized as a cause of significant errors in the provision of ADI services. Another of the 11 organizations, the American Society of Radiologic Technologists, reported that imaging services performed by individuals who are not experienced, educated, or certified in a specific imaging modality could compromise the quality of images or jeopardize the health or safety of supplier staff or Medicare beneficiaries. In addition, prior to granting accreditation, both ACR and IAC evaluate suppliers' patient images (called "clinical images") to ensure that images meet specific criteria, as recommended by 8 of the 11 organizations with imaging expertise. One of the 8, the American College of Cardiology, called the review of clinical images an essential component for assessing the capability of imaging equipment and the proficiency of staff in acquiring images. ACR and IAC also evaluate suppliers' phantom images prior to granting accreditation—images of a solid object designed to mimic critical imaging characteristics of patients and used to assess certain performance parameters of imaging equipment—as recommended by 5 of the 11 organizations. One of the 5, the American Association of Physicists in Medicine, reported that phantom images permit more objective evaluations of ADI equipment performance and a standardized format against which the imaging performance of various facilities can be evaluated. Further, FDA-approved accrediting bodies are also required to review mammography suppliers' clinical and phantom images, and we have reported with regard to mammography that evaluating phantom images is one of the most important processes for testing equipment. TJC does not systematically evaluate suppliers' clinical or phantom images to ensure that images meet specific criteria, although TJC representatives reported assessing compliance with standards that require suppliers to identify and implement activities necessary to maintain the reliability, clarity, and accuracy of the technical quality of images. According to TJC representatives, health care services are provided in an environment that must be comprehensively assessed, and no single checklist can accomplish this. For example, they reported that evaluating an image does not reveal anything about the systems that support imaging safety, such as the adequacy of safety checks, equipment maintenance, expertise of staff, and whether a primacy on patient and staff safety permeates the facility's culture and processes. However, ADI suppliers have been delayed accreditation by ACR and IAC on the basis of problems with the quality of their clinical images, such as inadequate anatomic coverage or excessive artifacts.
We and others have reported that quality problems with medical images can have serious consequences, such as missed or inaccurate diagnoses or inappropriate treatment. Despite the serious consequences that can result from poor-quality images, there are currently no image review requirements or other national standards for ADI accreditation.

CMS's oversight efforts have focused primarily on ensuring that only accredited suppliers' claims are paid; the agency does not have a systematic oversight process for other aspects of the ADI accreditation requirement. CMS has not developed a framework for evaluating accrediting organization performance, and its current guidance is insufficient to ensure that suppliers maintain compliance with standards for the duration of the accreditation cycle and to ensure that serious care problems are consistently identified and reported.

CMS's oversight efforts have primarily focused on ensuring that only accredited suppliers' claims are paid. To ensure that payment is made only to accredited suppliers, CMS officials told us that they require accrediting organizations to submit updated information about accredited suppliers on a weekly basis, including the national provider identifier (NPI), enrollment number, address, name, and dates of accreditation for each modality. They explained that these data are uploaded into the Medicare Provider Enrollment, Chain and Ownership System (PECOS)—CMS's centralized database for Medicare provider enrollment information—and are matched against all claims submitted by ADI suppliers. If the NPI on a supplier's claim does not match an accredited supplier listed in PECOS, the claim is denied. CMS officials told us that there were problems with accredited suppliers' claims being denied when the accreditation requirement first went into effect because suppliers used an incorrect NPI; however, CMS officials and representatives from two of the accrediting organizations reported that these issues generally have been resolved.

Although CMS is responsible for evaluating the performance of accrediting organizations, and CMS officials have indicated that the agency's goal is to improve the quality of ADI services, it has not developed an oversight framework that would enable it to monitor and measure performance. A CMS official knowledgeable about the accreditation requirement stated that the requirement had been in effect for less than 1 year at the time of our review, and acknowledged that the agency's oversight process was not as robust as it could be. This official reported that primary responsibility for oversight of the accreditation requirement was in the process of being transferred from CMS's Center for Program Integrity to the Center for Clinical Standards and Quality. Although the accreditation requirement became effective January 1, 2012, it had been enacted into law since 2008, and CMS had selected accrediting organizations in January 2010, providing the agency with nearly 2 years to develop a plan for evaluating their performance before the effective date of the requirement. We found that as of January 2013, CMS had not yet established specific performance expectations or developed plans for conducting validation audits of accredited suppliers, which are one of the most effective techniques CMS has for collecting information about accrediting organization performance. Federal regulations provide for audits of a representative sample of accredited suppliers, which enable CMS to validate the processes used by approved accrediting organizations.
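To make the idea of a representative sample concrete, the sketch below shows one way such an audit sample could be drawn: stratify accredited suppliers by accrediting organization and imaging modality, then sample a fixed fraction of each stratum. This is an illustration only; the record layout, field names, and 5 percent rate are invented, and the regulations do not prescribe this particular method.

```python
# Hypothetical sketch of selecting a representative validation-audit
# sample, stratified by accrediting organization and imaging modality.
# Record layout, field names, and the 5 percent rate are all invented.
import random
from collections import defaultdict

def stratified_audit_sample(suppliers, rate=0.05, seed=42):
    """Sample a fixed fraction of accredited suppliers per stratum."""
    random.seed(seed)
    strata = defaultdict(list)
    for s in suppliers:
        strata[(s["organization"], s["modality"])].append(s)
    sample = []
    for members in strata.values():
        k = max(1, round(rate * len(members)))  # at least one per stratum
        sample.extend(random.sample(members, k))
    return sample

# Toy registry: 360 accredited suppliers spread across three accrediting
# organizations and three modalities.
suppliers = [
    {"npi": f"{i:010d}", "organization": org, "modality": mod}
    for i, (org, mod) in enumerate(
        [(o, m) for o in ("ACR", "IAC", "TJC") for m in ("CT", "MRI", "NM")] * 40
    )
]
audit = stratified_audit_sample(suppliers)
print(f"selected {len(audit)} of {len(suppliers)} suppliers for audit")
```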
The same federal regulations also note that CMS may notify an accrediting organization of its intent to withdraw approval on the basis of the disparity between CMS's findings and those of the accrediting organization. Further, in the absence of minimum national standards, it is unclear what measures CMS would use in its audits to validate the accreditation process and determine whether services provided by accredited ADI suppliers meet a sufficient level of quality and safety.

In addition, CMS does not systematically collect or analyze readily available data to monitor accrediting organization performance. Collecting and analyzing information from accrediting organizations on accreditation results, such as the proportion of suppliers delayed accreditation and the types of care problems identified, could provide useful information about accrediting organization performance and help CMS ensure that accreditation is improving the quality and safety of ADI services. CMS does not systematically collect or analyze data on the proportion of suppliers that were not granted accreditation after the first attempt, and we found significant variation among accrediting organizations in the rates of these "delayed" accreditations. For calendar year 2012, IAC and ACR representatives reported that the proportion of computed tomography (CT) suppliers delayed accreditation was 81 percent with IAC and 25 percent with ACR; likewise, the proportion of nuclear medicine (NM) suppliers delayed accreditation was 60 percent with IAC and 4 percent with ACR. It is unclear whether these differences were due to actual variations in the quality of services provided by suppliers or to differences in the approaches used by accrediting organizations to enforce compliance with their standards.

Similarly, CMS does not define the care problems, or "deficiencies," that may be identified by accrediting organizations that can result in delayed or denied accreditations, nor does it systematically collect information about or analyze the deficiencies identified. We found wide variation in the types of deficiencies most frequently identified by each accrediting organization during the accreditation process, which raises questions about whether organizations are consistently identifying care problems. For example, ACR most frequently identified problems with suppliers failing to submit required information, including clinical images of diagnostic quality; IAC most frequently identified problems with the interpretive reports written by physicians; and TJC most frequently identified problems on a wider range of issues, including problems with clinical privileges, equipment maintenance, medication management, infection control, and leadership.

Although CMS requires accrediting organizations to conduct mid-cycle audits of accredited suppliers—including unannounced site visits—to help ensure they maintain compliance for the duration of the accreditation cycle, CMS does not specify minimum expectations for this task, such as the minimum number or percentage of audits required or the types of supplier activities that should be assessed during such audits. We found that the mid-cycle audits conducted by accrediting organizations varied in number and type. ACR conducted unannounced site visits for approximately 1 percent of its accredited suppliers in 2012, but intends to increase this amount to approximately 15 percent in 2013.
IAC representatives stated that they ensure that all accredited suppliers undergo at least one unannounced site visit or a performance audit—which requires accredited suppliers to submit specified documentation, including clinical images, interpretive reports, and quality-improvement documentation—to ensure continued compliance with IAC standards over the 3-year accreditation period. TJC representatives stated that they conduct unannounced site visits for 2 percent of their accredited suppliers and also require all accredited suppliers to demonstrate ongoing compliance with TJC standards on an annual basis, either through a TJC on-site assessment or through electronic submission of an annual self-assessment. In contrast, federal regulations governing mammography accreditation specify the minimum number or percentage of on-site visits that should be conducted annually at accredited facilities to monitor ongoing compliance with standards and outline the activities that should be conducted during these visits.

In addition, CMS guidance is not sufficient to ensure that accrediting organizations consistently identify and report serious care problems that pose immediate jeopardy to Medicare beneficiaries or suppliers' staff. CMS developed a definition of immediate jeopardy, but did not provide specific examples of the types of problems that pose an immediate health risk for ADI services. We found a difference of opinion among the accrediting organizations about the sufficiency of CMS's guidance. Representatives from TJC stated that CMS's guidance was clear, while ACR and IAC representatives said that the definition was too broad and that additional guidance is needed on the types of activities that constitute immediate jeopardy to either Medicare beneficiaries or suppliers' staff. We also found a difference of opinion about the types of activities that could constitute immediate jeopardy. For example, ACR reported that identifying metallic objects in the magnetic resonance imaging (MRI) suite would definitely constitute immediate jeopardy, whereas TJC told us that this could constitute immediate jeopardy if it was related to other pervasive lapses in safety. ACR representatives stated that without more specific guidance, CMS relies on accrediting organizations to determine what constitutes immediate jeopardy, and noted that FDA's guidance on this topic for mammography accreditation is more helpful. Although federal regulations require the accrediting organizations to report immediate-jeopardy deficiencies of accredited suppliers to CMS within 2 business days, CMS officials reported that none had been reported since the accreditation requirement went into effect. It is unclear whether CMS's lack of guidance has contributed to the fact that no immediate-jeopardy deficiencies have been reported. For example, representatives from one accrediting organization reported that there were circumstances in which they might not report potential immediate-jeopardy deficiencies to CMS because they were not certain of exactly what constituted immediate jeopardy.

The MIPPA accreditation requirement is an important step in helping to ensure the safety and quality of imaging services. To meet the January 1, 2012, implementation date for MIPPA's accreditation requirement, CMS focused its initial efforts on selecting accrediting organizations and ensuring that only accredited suppliers were paid. However, there are significant differences among the accrediting organizations, which arise from CMS's lack of minimum national standards.
As a result, important aspects of imaging, such as qualifications of technologists and medical directors and the quality of clinical images, are difficult for CMS to monitor and assess. CMS lacks an oversight framework for evaluating the performance of selected accrediting organizations, and lacks specific guidance to help ensure that a sufficient number or percentage of mid-cycle audits occur and that the types of serious care problems that could constitute immediate jeopardy are clear to all accrediting organizations.

To help ensure that ADI suppliers provide consistent, safe, and high-quality imaging to Medicare beneficiaries, we recommend that the Administrator of CMS take the following three actions: determine the content of and publish minimum national standards for the accreditation of ADI suppliers, which could include specific qualifications for supplier personnel and requiring accrediting organization review of clinical images; develop an oversight framework for evaluating accrediting organization performance, which could include collecting and analyzing information on accreditation results and conducting validation audits; and develop more specific requirements for accrediting organization mid-cycle audit procedures and clarify guidance on immediate-jeopardy deficiencies to ensure consistent identification and timely correction of serious care problems for the duration of accreditation.

We provided a draft of this report to HHS and to the three CMS-approved accrediting organizations for comment. In its written response, reproduced in appendix I, HHS concurred with all of our recommendations and identified actions that the department and CMS officials plan to take to implement them. Specifically, HHS stated that these actions would include facilitating discussions with stakeholders and national experts to gather feedback on national standards for accreditation of ADI suppliers; developing an oversight framework for evaluating accrediting organization performance; and developing more specific requirements for accrediting organizations' review procedures and providing guidance and education on immediate-jeopardy deficiencies.

The three accrediting organizations also reviewed and provided comments on a draft of this report. ACR and IAC concurred with the report's findings and recommendations. IAC representatives also said that minimum standards for ADI accreditation should include a review of suppliers' interpretive reports of patient images, in addition to the other standards identified in the report. In contrast, TJC disagreed with the report's findings and methodology. A summary of TJC's specific comments and our response follows. The three accrediting organizations also provided technical comments, which we incorporated as appropriate.

TJC stated that the report's methodology was flawed and that it provided an incomplete portrayal of the necessary components of an ADI accreditation program. TJC indicated that the 11 organizations from which we obtained information on standards focused only on imaging and did not include organizations that focus more broadly on quality and safety. As a result, TJC stated that the report excluded other factors that affect quality oversight and improvement, and indicated that we lacked data to analyze the effectiveness of the different approaches used by each of the three organizations.
Our purpose was not to compare the effectiveness of the three ADI accreditation programs, but rather to assess the ADI standards currently in use and determine whether CMS has adequate assurance that all accredited suppliers meet a minimum level of quality and safety. Further, we did not intend to conduct a comprehensive evaluation of TJC's overall accreditation program, which considers aspects of quality and safety that go beyond the criteria outlined in MIPPA for imaging accreditation, such as examining whether a supplier creates and maintains a culture of safety and quality throughout the organization. Rather, because our study is focused on imaging in particular, we determined whether the three CMS-selected accrediting organizations use standards specific to imaging that were recommended by organizations with expertise in this area.

TJC also questioned our threshold for presenting standards that were recommended by 5 of the 11 organizations, indicating that this represented agreement from less than 50 percent of the organizations. Because the 11 organizations have expertise in different areas of imaging, not all organizations commented on all sections of the questionnaire we sent to them. For example, the American Board of Orthopaedic Surgery recommended standards related to the qualifications of medical directors, but not procedures to ensure that equipment meets performance specifications. As a result, it would not be reasonable or appropriate to expect consensus on all recommended standards, as some standards were outside of an organization's area of expertise. We indicate in the report that the standards the 11 organizations identified do not represent the full range of possible standards for the accreditation of ADI suppliers, but rather provide a framework for comparing the standards used by the accrediting organizations selected by CMS. HHS has indicated that it plans to facilitate discussions with stakeholders and national experts to gather feedback on national standards for accreditation of ADI suppliers.

Finally, TJC stated that the report places inordinate value on image accuracy and professional credentials. We discuss those aspects of imaging in the report because they were among the nine standards that were identified by at least 5 of the 11 organizations with imaging expertise. For example, 8 of the 11 organizations believe that examining clinical images is an important aspect of accreditation for ADI services, and it is unclear how problems with image quality can be detected without reviewing images. Similarly, TJC stated that we provided no data to show that phantom testing results in better image quality in practice. Phantom image testing was recommended by 5 of the 11 organizations with imaging expertise, and has been required by FDA for over a decade to test imaging conducted by mammography facilities under MQSA. Further, phantom images provide a standardized format against which the imaging performance of various suppliers can be evaluated; this is important given that factors outside of a supplier's control, such as a patient's weight or particular health conditions, can affect a supplier's ability to produce high-quality images. While our report assessed the standards currently in use for ADI accreditation, it is ultimately CMS's responsibility to determine the content of minimum national standards for ADI accreditation.
This could include, for example, determining whether clinical image review and phantom testing should be required for ADI accreditation, a decision that could be informed by its planned discussions with stakeholders and national experts. We stand by our report and findings, and believe that by adopting our recommendations for minimum national standards, as HHS has stated it intends to do, CMS will significantly enhance its ability to ensure both imaging quality and patient safety.

We are sending copies of this report to the Secretary of Health and Human Services and relevant congressional committees. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Phyllis Thorburn, Assistant Director; William Black; Kye Briesath; William A. Crafton; Beth Morrison; Jennifer Whitworth; and Rachael Wojnowicz made key contributions to this report.

MIPPA required that beginning January 1, 2012, suppliers that produce the images for ADI services, such as physician offices and independent diagnostic testing facilities, be accredited by an organization approved by CMS. MIPPA directed GAO to conduct a preliminary report on the accreditation requirement in 2013 and a final report in 2014. In this report, GAO assessed (1) CMS's standards for accreditation of ADI suppliers, and (2) CMS's oversight of the accreditation requirement. To assess CMS's standards and oversight, GAO reviewed CMS regulations related to MIPPA, interviewed and reviewed information from CMS and CMS-approved accrediting organizations, and reviewed information on recommended standards for ADI accreditation from 11 organizations with imaging expertise.

The Centers for Medicare & Medicaid Services (CMS) did not establish minimum national standards for the accreditation of suppliers of advanced diagnostic imaging (ADI) services, which cover the production of images for computed tomography, magnetic resonance imaging, and nuclear medicine services. While CMS adopted the broad criteria from the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) for ADI accreditation, it relied on the three accrediting organizations it selected to establish their own standards for quality and safety. To establish a framework for assessing the ADI standards currently in use, GAO developed a list of nine standards based on recommendations from the 11 organizations with imaging expertise from which GAO obtained information. Two of the three accrediting organizations that CMS selected use all nine standards, while the third organization uses six of the nine standards. For example, while two of the organizations evaluate suppliers' patient images, the third said that it instead assesses suppliers' compliance with other standards necessary to maintain image quality, such as those related to inspection and testing of imaging equipment. As a result of these significant differences among the accrediting organizations, which arise from the lack of minimum national standards, important aspects of imaging, such as qualifications of technologists and medical directors and the quality of clinical images, are difficult for CMS to monitor and assess.
Nine of the 11 organizations with imaging expertise and representatives from all three accrediting organizations recommended that CMS adopt minimum national standards. CMS drafted standards in 2010, but did not publish them because the agency was focused on other priorities.

CMS's current oversight of the accreditation requirement is limited, as the agency focused its initial oversight efforts on ensuring that claims were paid only to accredited suppliers. Although CMS is responsible for evaluating the performance of accrediting organizations, the agency has not developed an oversight framework that would enable it to monitor and measure performance. CMS has not established specific performance expectations or developed plans for the validation audits of accredited suppliers as described in its regulations. GAO's previous work has shown that such independent evaluations are one of the most effective techniques CMS has to collect information about whether serious deficiencies are being identified. In addition, CMS's guidance to accrediting organizations on mid-cycle audits and serious care problems is limited. For example, CMS requires accrediting organizations to conduct mid-cycle audits to help ensure accredited suppliers maintain compliance for the 3-year accreditation cycle, but did not specify minimum expectations for this task, such as the minimum number or percentage of audits required or the types of supplier activities that should be assessed. In addition, two of the three accrediting organizations reported that CMS's guidance on identifying and reporting deficiencies that pose immediate jeopardy to Medicare beneficiaries or suppliers' staff was unclear. A CMS official stated that the accreditation requirement had been in operation for less than 1 year at the time of GAO's review, and reported that responsibility for oversight of the accreditation requirement was in the process of being transferred to another group within the agency.

To help ensure that ADI suppliers provide safe and high-quality imaging to Medicare beneficiaries, GAO recommends that the Administrator of CMS determine the content of and publish minimum national standards for the accreditation of ADI suppliers; develop an oversight framework for evaluating accrediting organization performance; and develop more specific requirements for accrediting organization audits and clarify guidance on immediate-jeopardy deficiencies. The Department of Health and Human Services, which oversees CMS, concurred with GAO's recommendations.
The Aviation and Transportation Security Act established the Transportation Security Administration (TSA) as the federal agency with primary responsibility for securing the nation's civil aviation system, which includes the screening of all passengers and property transported by commercial passenger aircraft. At the more than 450 TSA-regulated airports in the United States, all passengers, their accessible property, and their checked baggage are screened prior to boarding an aircraft or entering the sterile area of an airport pursuant to statutory and regulatory requirements and TSA-established standard operating procedures. Behavior detection and analysis (BDA) activities, and more specifically, the Screening of Passengers by Observation Techniques (SPOT) program, constitute one of multiple layers of security implemented within TSA-regulated airports. According to TSA's strategic plan and other program guidance for the BDA program released in December 2012, the goal of the agency's behavior detection activities, including the SPOT program, is to identify high-risk passengers based on behavioral indicators of "mal-intent." For example, the strategic plan notes that in concert with other security measures, behavior detection activities "must be dedicated to finding individuals with the intent to do harm, as well as individuals with connections to terrorist networks that may be involved in criminal activity supporting terrorism."

TSA developed its primary behavior detection activity, the SPOT program, in 2003 as an added layer of security to identify potentially high-risk passengers through behavior observation and analysis techniques. The SPOT program's standard operating procedures state that behavior detection officers (BDOs) are to observe and visually assess passengers, primarily at passenger screening checkpoints, and identify those who display clusters of behaviors indicative of stress, fear, or deception. The SPOT procedures list a point system BDOs are to use to identify potentially high-risk passengers on the basis of behavioral and appearance indicators, as compared with baseline conditions where SPOT is being conducted. A team of two BDOs is to observe passengers as they proceed through the screening process. This process is depicted in figure 1. According to TSA, it takes a BDO less than 30 seconds to meaningfully observe an average passenger. If one or both BDOs observe that a passenger reaches a predetermined point threshold, the BDOs are to direct the passenger and any traveling companions to the second step of the SPOT process—SPOT referral screening. During SPOT referral screening, BDOs are to engage the passenger in casual conversation—a voluntary informal interview—in the checkpoint area or a predetermined operational area in an attempt to determine the reason for the passenger's behaviors and either confirm or dispel the observed behaviors. SPOT referral screening also involves a physical search of the passenger and his or her belongings. According to TSA, an average SPOT referral takes 13 minutes to complete. If the BDOs concur that a passenger's behavior escalates further during the referral screening, or if other events occur, such as the discovery of fraudulent identification documents or suspected serious prohibited or illegal items, the BDOs are to call a law enforcement officer (LEO) to conduct additional screening, known as a LEO referral. The LEO may then allow the passenger to proceed on the flight, or may question, detain, or arrest the passenger.
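Although the actual SPOT indicators, point values, and thresholds are sensitive security information that TSA does not publish, the two-step escalation logic described above can be sketched with invented numbers. In the sketch below, a passenger whose observed points reach the (hypothetical) SPOT threshold is referred for screening, and a LEO referral results either from further point accrual or from the discovery of fraudulent documents or serious prohibited or illegal items.

```python
# Hypothetical sketch of the two-threshold point logic described above.
# Actual SPOT indicators, point values, and thresholds are sensitive
# security information; every number here is invented for illustration.
SPOT_THRESHOLD = 4   # invented: points needed for referral screening
LEO_THRESHOLD = 8    # invented: points needed for a LEO referral

def assess(observed_points, referral_points=0, serious_item_found=False):
    """Apply the two-step SPOT escalation logic to one passenger.

    observed_points   : points for behaviors seen during observation
    referral_points   : points accrued during referral screening
    serious_item_found: fraudulent documents or a serious prohibited or
                        illegal item discovered during referral screening
    """
    total = sum(observed_points)
    if total < SPOT_THRESHOLD:
        return "no action"
    if serious_item_found or total + referral_points >= LEO_THRESHOLD:
        return "LEO referral"
    return "SPOT referral screening only"

print(assess([2, 1]))                           # -> no action
print(assess([3, 2]))                           # -> SPOT referral screening only
print(assess([3, 2], referral_points=4))        # -> LEO referral
print(assess([3, 2], serious_item_found=True))  # -> LEO referral
```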
The federal security director or designee, regardless of whether a LEO responds, is responsible for reviewing the circumstances surrounding a LEO referral and making the determination about whether the passenger can proceed into the sterile area of the airport.

The costs of the SPOT program are not broken out as a single line item in the budget. Rather, SPOT program costs are funded through three separate program, project, activity (PPA)-level accounts: (1) BDO payroll costs are funded through the Screener Personnel Compensation and Benefits (PC&B) PPA, (2) the operating expenses of the BDOs and the program are funded through the Screener Training and Other PPA, and (3) the program management payroll costs are funded through the Airport Management and Support PPA. From fiscal year 2007—when the SPOT program began deployment nationwide—through fiscal year 2012, about $900 million had been expended on the program, as shown in figure 2. The majority of the funding (approximately 79 percent) for the SPOT program covers workforce costs and is provided under the Screener Personnel Compensation and Benefits PPA. This PPA—for which TSA requested about $3 billion for fiscal year 2014—funds, among other TSA screening activities, BDOs and transportation security officer (TSO) screening of passengers and their property. The workforce of about 3,000 BDOs is broken into four separate pay bands. The F Band, or Master BDO, and the G Band, or Expert BDO, constitute the primary BDO workforce that screens passengers using behavior detection. The H and I Bands are supervisory-level BDOs, responsible for overseeing SPOT operations at the airport level. According to TSA figures, in fiscal year 2012, the average salary and benefits of an F Band BDO full-time equivalent (FTE) were $66,310; those of a G Band BDO were $78,162; and the average FTE cost of H and I Band BDO supervisors was $97,392.

In 2007, the Department of Homeland Security's (DHS) Science and Technology Directorate (S&T) began research to assess the validity of the SPOT program. The contracted study, issued in April 2011, was to examine the extent to which using the SPOT referral report and its indicators, as established in SPOT procedures, led to correct screening decisions at security checkpoints. Two primary studies were designed within the broader validation study:

1. an indicator study: an analysis of the behavioral and appearance indicators recorded in SPOT referral reports over an approximately 5-year period and their relationships to outcomes indicating a possible threat or high-risk passenger, and

2. a comparison study: an analysis over an 11-month period at 43 airports that compared arrests and other outcomes for passengers selected using the SPOT referral report with passengers selected and screened at random, as shown in table 1.

The validation study found, among other things, that some SPOT indicators appeared to be predictors of outcomes indicating a possible threat or high-risk passenger, and that SPOT procedures were more effective than a selection of passengers through a random protocol in identifying outcomes that represent high-risk passengers. While the validation study was being finalized, DHS convened a technical advisory committee (TAC) composed of 12 researchers and law enforcement professionals who met for 1 day in February 2011 to evaluate the methodology of the SPOT validation study.
According to the TAC report, TAC members received briefings from the contractor that described the study plans and results, but because of TSA's security concerns, TAC members did not receive detailed information about the contents of the SPOT referral report, the individual indicators used in the SPOT program, the validation study data, or the final report containing complete details of the SPOT validation study results. The TAC report noted that several TAC members felt that these restrictions hampered their ability to perform their assigned tasks. According to TSA, TAC members were charged with evaluating the methodology of the study, not the contents of the SPOT referral report. Consequently, TSA officials determined that access to this information was not necessary for the TAC to fulfill its responsibilities. S&T also contracted with another contractor, a human resources research organization, to both participate as TAC members and write a report summarizing the TAC meeting and subsequent discussions among the TAC members. In June 2011, S&T issued the TAC report, which contained TAC recommendations on future work as well as an appendix on TAC dissenting opinions. The findings of the TAC report are discussed later in this report.

Meta-analyses and other published research studies we reviewed do not support the premise that nonverbal behavioral indicators can be used to reliably identify deception. While the April 2011 SPOT validation study was a useful initial step and, in part, addressed issues raised in our May 2010 report, it does not demonstrate the effectiveness of the SPOT indicators because of methodological weaknesses in the study. Further, TSA program officials and BDOs we interviewed agree that some of the behavioral indicators used to identify passengers for additional screening are subjective. TSA has plans to study whether behavioral indicators can be reliably interpreted, and variation in referral rates raises questions about the use of the indicators by BDOs.

Peer-reviewed, published research does not support the conclusion that human observers can use nonverbal behavioral indicators to accurately identify deception. Our review of meta-analyses and other studies related to detecting deception conducted over the past 60 years, and interviews with experts in the field, question the use of behavior observation techniques (that is, human observation unaided by technology) as a means for reliably detecting deception. The meta-analyses we reviewed (reviews that synthesize the findings of other studies) collectively included research from more than 400 separate studies on detecting deception, and found that the ability of human observers to accurately identify deceptive behavior based on behavioral cues or indicators is the same as or slightly better than chance (54 percent). A 2011 meta-analysis showed weak correlations between most behavioral cues studied and deception. For example, the meta-analysis showed weak correlations for the behavioral cues that have been studied the most, such as fidgeting, postural shifts, and lack of eye contact. A 2006 meta-analysis examined, in part, the ability of individuals trained in fields such as law enforcement, as well as untrained individuals, to detect deception, and found no difference between the two groups.
Additionally, a 2007 meta-analysis on nonverbal indicators of deception states that while there is a general belief that certain nonverbal behaviors are strongly associated with deception—such as an increase in hand, foot, and leg movements—these behaviors are diametrically opposed to observed indicators of deception in experimental studies, which indicate that movements actually decrease when people are lying.

As part of our analysis, we also reviewed scientific research focused on detecting passenger deception in an airport environment. We identified a 2010 study, based on a small sample of passengers, that reviewed a similar behavior observation program in another country. The first phase of the study found that passengers who were selected based on behaviors were more likely to be referred to airport security officials for further questioning as compared to passengers who had been selected according to a random selection protocol. However, because the physical attributes of the passengers were found to be significantly different between those passengers selected based on behaviors and those randomly selected, the researchers undertook a second phase of the study to control for those differences. The second phase revealed no differences in initial follow-up rates between passengers selected based on behaviors and those matched for physical attributes. That is, when the control group was matched by physical attribute to passengers selected on the basis of behaviors, the follow-up rate was the same. The researchers concluded that the higher number of passengers selected based on behaviors and referred for further questioning during the first phase of the study "was more the result of profiling" than the use of behavior observation techniques.

As mentioned earlier in this report, the goal of the BDA program is to identify high-risk passengers based on behavioral indicators that may indicate mal-intent. However, other studies we reviewed found that there is little available research regarding the use of behavioral indicators to determine mal-intent, or deception related to an individual's intentions. For example, a 2013 RAND report noted that controversy exists regarding the use of human observation techniques that use behavioral indicators to identify individuals with intent to deceive security officials. In particular, the study noted that while behavioral science has identified nonverbal behaviors associated with emotional and psychological states, these indicators are subject to certain factors, such as individual variability, that limit their potential utility in detecting pre-incident indicators of attack. The RAND report also found that the techniques for measuring the potential of using behavioral indicators to detect attacks are poorly developed and worthy of further study. Moreover, a 2008 study performed for the Department of Defense by the JASON Program Office reviewed behavior detection programs, including the methods used by the SPOT program, and found that no compelling evidence exists to support remote observation of physiological signals that may indicate fear or nervousness in an operational scenario by human observers, and no scientific evidence exists to support the use of these signals in detecting or inferring future behavior or intent.
In particular, the report stated that success in identifying deception and intent in other studies is post hoc, and that such studies incorrectly equate success in identifying terrorists with the identification of drug smugglers, warrant violators, or others. For example, when describing the techniques used by BDOs in the SPOT program, the report concluded that even if a correlation were found between abnormal behaviors and guilt as a result of some transgression, there is no clear indication that the guilt caused the abnormal behavior. The report also noted that the determination that the abnormal behavior was caused by guilt was made after the fact, rather than being based on established criteria beforehand.

Recent research on behavior detection has identified more promising results when behavioral indicators are used in combination with certain interview techniques and automated technologies, which are not used as part of the SPOT program. For example, several studies we reviewed that were published in 2012 and 2013 note that specific interviewing techniques, such as asking unanticipated questions, may assist in identifying deceptive individuals. Researchers began to develop automated technologies to detect deception, in part, because humans are limited in their ability to perceive, detect, and analyze all of the potentially useful information about an individual, some of which otherwise would not be noticed by the naked eye. For example, the 2013 RAND report noted that the link between facial microexpressions—involuntary expressions of emotion appearing for milliseconds despite best efforts to dampen or hide them—and deception can be evidenced by coding emotional expressions from a frame-by-frame analysis of video. However, the study concludes that the technique is not suitable for use by humans in real time at checkpoints or other screening areas because of the time lag and hours of labor required for such analysis. Automated technologies are being explored by federal agencies in conjunction with academic researchers to overcome these limitations, as well as human fatigue factors and potential bias in trying to detect deception. The study stated that, although they are in the early stages of development, automated technologies might be effective at fusing multiple indicators, such as body movement, vocal stress, and facial microexpression analysis.

The usefulness of DHS's April 2011 validation study is limited, in part because the data the study used to examine the extent to which the SPOT behavioral indicators led to correct screening decisions at security checkpoints were from the SPOT database that we had previously found in May 2010 to have several weaknesses, and thus were potentially unreliable. The SPOT indicator study analyzed data collected from 2006 to 2010 to determine the extent to which the indicators could identify high-risk passengers, defined as passengers who (1) possessed fraudulent documents, (2) possessed serious prohibited or illegal items, (3) were arrested by a LEO, or (4) any combination of the first three measures. The validation study reported that 14 of the 41 SPOT behavioral indicators were positively and significantly related to one or more of the study outcomes.
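A claim that an indicator is "positively and significantly related" to an outcome typically rests on a test like the one sketched below: a chi-square test on a 2x2 table built from referral records. The records, the indicator's prevalence, and the outcome rates here are all invented, since the study's underlying data and exact methods are not public. Any such test, of course, presupposes that the underlying referral data are reliable.

```python
# Hypothetical sketch of an indicator-outcome association test of the
# kind an indicator study runs: is one (invented) indicator's presence
# on a referral report related to a high-risk outcome? Prevalence and
# outcome rates below are invented; the records are simulated.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 5000  # hypothetical number of SPOT referral reports

indicator = rng.random(n) < 0.30               # indicator recorded on report
p_outcome = np.where(indicator, 0.035, 0.020)  # invented outcome rates
outcome = rng.random(n) < p_outcome            # high-risk outcome occurred

# 2x2 table: indicator present/absent x outcome yes/no
table = np.array([
    [np.sum(indicator & outcome), np.sum(indicator & ~outcome)],
    [np.sum(~indicator & outcome), np.sum(~indicator & ~outcome)],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```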
However, in May 2010, we assessed the reliability of the SPOT database against Standards for Internal Control in the Federal Government and concluded that the SPOT database lacked controls to help ensure the completeness and accuracy of the data, such as computerized edit checks to review the format, existence, and reasonableness of data. We found, among other things, that BDOs could not record all behaviors observed in the SPOT database because the database limited entry to eight behaviors, six signs of deception, and four types of serious prohibited items per passenger referred for additional screening. BDOs are trained to identify 94 signs of stress, fear, and deception, or other related indicators. As a result, we determined that, as of May 2010, the data were not reliable enough to conduct a statistical analysis of the association between the indicators and high-risk passenger outcomes. In May 2010, we recommended that TSA make changes to ensure the quality of SPOT referral data, and TSA subsequently made changes to the SPOT database. However, the validation study used data that were collected from 2006 through 2010, prior to TSA's improvements to the SPOT database. Consequently, the data were not sufficiently reliable for use in conducting a statistical analysis of the association between the indicators and high-risk passenger outcomes.

In their report that reviewed the validation study, TAC members expressed some reservations about the methodology used in analyzing the SPOT indicators and suggested that the contractor responsible for completing the study consider not reporting on some of its results and moving the results to an appendix, rather than including them as a featured portion of the report. Further, the final validation study report findings were mixed; that is, they both supported and questioned the use of these indicators in the airport environment, and the report noted that the study was an "initial step" toward validating the program. However, because the study used unreliable data, its conclusions regarding the use of the SPOT behavioral indicators for passenger screening are questionable and do not support the conclusion that they can or cannot be used to identify threats to aviation security. Other aspects of the validation study are discussed later in this report.

BDA officials at headquarters and BDOs we interviewed in four airports said that some of the behavioral indicators are subjective, and TSA has not demonstrated that BDOs can consistently interpret behavioral indicators, though the agency has efforts under way to reduce subjectivity in the interpretation by BDOs. For example, BDA officials at headquarters stated that the definition of some behaviors in SPOT standard operating procedures is subjective. Further, 21 of 25 BDOs we interviewed said that certain behaviors can be interpreted differently by different BDOs. SPOT procedures state that the behaviors should deviate from the environmental baseline. As a result, BDOs' application of the definition of the behavioral indicators may change over time, or in response to external factors. Four of the 25 BDOs we spoke with said that newer BDOs might be more sensitive in applying the definition of certain behaviors. Our analysis of TSA's SPOT referral data, discussed further below, shows that there is a statistically significant correlation between the length of time that an individual has been a BDO and the number of SPOT referrals the individual makes per 160 hours worked, or about four 40-hour work weeks.
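The standardization and correlation just described can be sketched briefly. The data below are simulated rather than TSA's actual referral records, and the direction and size of the tenure effect are invented; the point is the mechanics of normalizing referral counts to a per-160-hours rate and testing that rate's correlation with tenure.

```python
# Sketch of the standardization and correlation just described:
# referrals normalized to a per-160-hours-worked rate, then correlated
# with BDO tenure. The data are simulated, and the direction and size
# of the tenure effect are invented; GAO's analysis used TSA records.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_bdos = 2199
tenure_years = rng.uniform(0, 6, n_bdos)
hours_worked = rng.uniform(1500, 4000, n_bdos)

# Invented propensity: referrals per 160 hours decline mildly with tenure.
expected_per_160h = (3.0 - 0.3 * tenure_years).clip(0.2)
referrals = rng.poisson(expected_per_160h * hours_worked / 160)

rate_per_160h = referrals / hours_worked * 160
rho, p = spearmanr(tenure_years, rate_per_160h)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```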
This correlation suggests that different levels of experience may be one reason why BDOs apply the behavioral indicators differently. BDA officials agree that some of the SPOT indicators are subjective, and the agency is working to better define the behavioral indicators currently used by BDOs. In December 2012, TSA initiated a new contract to review the indicators in an effort to reduce the number of behavioral and appearance indicators used and to reduce subjectivity in the interpretation by BDOs. In June 2013, the contractor produced a document that summarizes information on the SPOT behavioral indicators from the validation study analysis, such as how frequently the indicator was observed, that it says will be used in the indicator review process. According to TSA's November 2012 performance metrics plan, in 2014, the agency also intends to complete an inter-rater reliability study. This study could help TSA determine whether BDOs can reliably interpret the behavioral indicators, which is a critical component of validating the SPOT program's results and ensuring that the program is implemented consistently.

Our analysis of SPOT referral data from fiscal years 2011 and 2012 indicates that SPOT and LEO referral rates vary significantly across BDOs at some airports, which raises questions about the use of behavioral indicators by BDOs. Specifically, we found that variation exists in the SPOT referral rates among 2,199 nonmanager BDOs and across the 49 airports in our review, after standardizing the referral data to take account of the differences in the amount of time each BDO spent observing passengers, as shown in figure 3. The SPOT referral rates of BDOs ranged from 0 to 26 referrals per 160 hours worked during the 2-year period we reviewed. Similarly, LEO referral rates of BDOs ranged from 0 to 8 per 160 hours worked. Further, at least 153 of the 2,199 nonmanager BDOs were never identified as the primary BDO responsible for a referral. Of these, at least 76 were not associated with a referral during the 2-year period we reviewed.

To better understand the variation in referral rates, we analyzed whether certain variables affected SPOT referral rates and LEO referral rates, including the airport at which the referral occurred, and BDO characteristics, such as their annual performance scores and years of experience, as well as demographic information, including age and gender. The variables we identified as having a statistically significant relationship to the referral rates are shown in table 2. We found that overall, the greatest amount of the variation in SPOT referral rates by BDOs was explained by the airport in which the referral occurred. That is, a BDO's SPOT referral rate was associated with the airport at which he or she was conducting SPOT activities. However, separate analyses we conducted indicate that these differences across airports were not fully accounted for by another variable that is directly related to individual airports. That variable accounted for less than half of the variation in SPOT referral rates accounted for by airports. Combined, the remaining variables (including BDO performance score, age, years of BDO experience, years of TSA experience, race, and educational level) accounted for little of the variation in SPOT referral rates.
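A statement that the airport explains the greatest amount of the variation is typically supported by comparing the explanatory power of airport fixed effects against that of BDO-level characteristics. The sketch below does this on simulated data using ordinary least squares; the variable names and effect sizes are our assumptions, not a reproduction of the multivariate analysis described in appendix IV.

```python
# Sketch of the kind of variance-decomposition comparison described:
# R^2 from airport fixed effects versus R^2 from BDO characteristics,
# on simulated data. Variable names, effect sizes, and the use of OLS
# are assumptions, not a reproduction of GAO's multivariate analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2199
df = pd.DataFrame({
    "airport": rng.integers(0, 49, n),   # 49 airports, as in the review
    "tenure": rng.uniform(0, 6, n),
    "perf_score": rng.normal(3.5, 0.5, n),
})
airport_effect = rng.normal(0, 2.0, 49)  # invented: strong airport differences
df["rate"] = (airport_effect[df["airport"]]
              - 0.2 * df["tenure"] + rng.normal(0, 1.5, n)).clip(lower=0)

r2_airport = smf.ols("rate ~ C(airport)", df).fit().rsquared
r2_bdo = smf.ols("rate ~ tenure + perf_score", df).fit().rsquared
print(f"R^2, airport fixed effects: {r2_airport:.2f}")
print(f"R^2, BDO characteristics:   {r2_bdo:.2f}")
```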
In commenting on this issue, TSA officials noted that variation in referral rates across airports could be the result of differences in passenger composition, the airport's market type, the responsiveness of LEOs to BDO referrals, and the number and type of airlines at the airports, among other things. However, because TSA could not provide additional supporting data on these variables with comparable time frames, we were not able to include these variables in our analysis. See appendix IV for a more detailed discussion of the findings from our multivariate analysis of referral rates.

According to TSA, having clearly defined and consistently implemented standard operating procedures for BDOs in the field at the 176 SPOT airports is key to the success of the program. In May 2010, we found that TSA established standardization teams designed to help ensure consistent implementation of the SPOT standard operating procedures. We followed up on TSA's use of standardization teams and found that from 2012 to 2013, TSA made standardization team visits to 9 airports. In May 2012, officials revised their approach and data collection requirements and renamed the teams program compliance assessment teams. From December 2012 through March 2013, TSA conducted pilot site visits to 3 airports to test and refine new compliance team protocols for data collection, which, among other things, involve more quantitative analysis of BDO performance. The pilot process was designed to help ensure that the program compliance assessment teams conduct standardized, on-site evaluations of BDOs' compliance with the SPOT standard operating procedures in a way that is based on current policy and procedures. As of June 2013, TSA had visited and collected data at 6 additional airports and was refining data input and reporting processes. According to BDA officials, TSA deployed the new compliance teams nationally in August 2013 and anticipates visiting an additional 13 airports by the end of fiscal year 2013. However, the compliance teams are not generally designed to help ensure BDOs' ability to consistently interpret the SPOT indicators, and the agency has not developed other mechanisms to measure inter-rater reliability. TSA does not have reasonable assurance that BDOs are reliably interpreting passengers' behaviors within or among airports, in part because of the subjective interpretation of some SPOT behavioral indicators by BDOs and the limited scope of the compliance teams. This, coupled with the inconsistency in referral rates across different airports, raises questions about the use of behavioral indicators to identify potential threats to aviation.

TSA has limited information to evaluate SPOT program effectiveness, because the findings from the April 2011 validation comparison study are inconclusive as a result of methodological weaknesses in the study's overall design and data collection. However, TSA plans to collect additional performance data to help it evaluate the effectiveness of its behavior detection activities. DHS's 2011 validation study compared the effectiveness of SPOT with a random selection of passengers and found that SPOT was between 4 and 52 times more likely to correctly identify a high-risk passenger than random selection, depending on which of the study's outcome measures was used to define persons knowingly and intentionally trying to defeat the security process.
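The "4 to 52 times more likely" finding is a ratio of high-risk hit rates between SPOT-selected and randomly selected passengers. The sketch below shows how such a rate ratio and an approximate confidence interval can be computed; the counts are invented, since the study's actual counts are not reproduced here.

```python
# Sketch of the "X times more likely" comparison: the ratio of high-risk
# hit rates between SPOT-selected and randomly selected passengers, with
# an approximate 95% confidence interval. All counts are invented; the
# log-normal interval is rough when event counts are small.
import math

def rate_ratio_ci(hits_a, n_a, hits_b, n_b, z=1.96):
    """Hit-rate ratio (group a over group b) with a log-normal 95% CI."""
    ratio = (hits_a / n_a) / (hits_b / n_b)
    se = math.sqrt(1 / hits_a - 1 / n_a + 1 / hits_b - 1 / n_b)
    lo, hi = (ratio * math.exp(s * se) for s in (-z, z))
    return ratio, lo, hi

# Hypothetical: 120 high-risk outcomes in 60,000 SPOT selections versus
# 4 high-risk outcomes in 50,000 random selections.
ratio, lo, hi = rate_ratio_ci(120, 60_000, 4, 50_000)
print(f"rate ratio = {ratio:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```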
However, BDOs used various methods to randomly select passengers during data collection periods of differing length at the study airports. Initially, the contractor proposed that TSA use random selection methods at a sample of 143 SPOT airports, based on factors such as the number of airport passengers. If properly implemented, the proposed sample would have helped ensure that the validation study findings could be generalized to all SPOT airports. However, according to the study and interviews with the contractor, TSA selected a nonprobability sample of 43 airports based on input from local TSA airport officials who decided to participate in the study. TSA allowed the managers of these airports to decide which checkpoints would use random procedures and when they would do so during airport operating hours. According to the validation study and a contractor official, the airports included in the study were not randomly selected because of the increased time and effort it would take to collect study data at the 143 airports proposed by the contractor. Therefore, the study's results may provide insights about the implementation of the SPOT program at the 43 airports where the study was carried out, but they are not generalizable to all 176 SPOT airports.

Additionally, TSA collected the validation study data unevenly and experienced challenges in collecting an adequate sample size for the randomly selected passengers, facts that might have further affected the representativeness of the findings. According to established evaluation design practices, data collection should be sufficiently free of bias or other significant errors that could lead to inaccurate conclusions. Specifically, in December 2009, TSA initially began collecting data from 24 airports whose participation in the study was determined by the local TSA officials. More than 7 months later, TSA added another 18 airports to the study when it determined that enough data were not being collected on the randomly selected passengers at participating airports to reach the study's required sample size. The addition of the airports coincided with a substantial increase in referrals for additional screening and an uneven collection of data, as shown in figure 4. As a result of this uneven data collection, study data on 61 percent of randomly selected passengers were collected during the 3-month period from July through September 2010. By comparison, 33 percent of the data on passengers selected by the SPOT program were collected during the same time. Because commercial aviation activity and the demographics of the traveling public are not constant throughout the year, this uneven data collection may have conflated the effect of random versus SPOT selection methods with differences in the rates of high-risk passengers when TSA used either method.

In addition, the April 2011 validation study noted that BDOs were aware of whether the passengers they were screening were selected as a result of the random selection protocol or SPOT procedures, which had the potential to introduce bias in the assessment. According to established practices for evaluation design, when feasible, many scientific studies use "blind" designs, in which study participants do not know which procedures are being evaluated. This helps avoid potential bias due to the tendency of participants to behave or search for evidence in a manner that supports the effects they expect each procedure to have.
In contrast, in the SPOT comparison study, BDOs knew whether each passenger they screened was selected through SPOT or random methods. This may have biased BDOs' screening for high-risk passengers, because BDOs could have expected randomly selected passengers to be lower risk and thus made less effort to screen them. The contractor and four of the eight TAC members we interviewed agreed that this may be a design weakness. One TAC member told us that the comparison study would have been more robust if the passengers had been randomly selected by people without any prior knowledge of SPOT indicators to decrease the possibility of bias. To reduce the possibility of bias in the study, another TAC member suggested that instead of using the same BDOs to select and screen passengers, some BDOs could have been responsible for selecting passengers and other BDOs for screening the passengers, regardless of whether they were selected randomly or by SPOT procedures. According to validation study training materials, BDOs were used to select both groups of passengers in an effort to maintain normal security coverage during the study. Another TAC member stated that controls were needed to ensure that BDOs gave the same level of scrutiny to randomly selected passengers as to those referred because of their behaviors. Contractor officials reported that they were aware of the potential bias, and tried to mitigate its potential effects by training BDOs who participated in the validation study to screen passengers identically, regardless of how they were selected. However, contractor officials stated that they could not fully control these selections because BDOs were expected to conduct their regular SPOT duties concurrently during the study's data collection on random passenger screening. The validation study discussed several limitations that had the potential to introduce bias, but concluded that they did not affect the results of the study.

Our analysis of the validation study data regarding one of the primary high-risk outcome measures—LEO arrests—suggests that the screening process was different for passengers depending on whether they were selected using SPOT procedures or the random selection protocol. Therefore, the study's finding that SPOT was much more likely to identify high-risk passengers who were ultimately arrested by a LEO may be considerably inflated. Specifically, a necessary condition influencing the rate of the arrest outcome measure—exposure to a LEO through a LEO referral—was not equal in the two groups. The difference between the groups occurred because randomly selected passengers were likely to begin the SPOT referral process with zero points or very few points, whereas passengers selected on the basis of SPOT began the process at the higher, established point threshold required for BDOs to make a SPOT referral. However, because the point threshold for a LEO referral was the same for both groups, the likelihood that passengers selected using SPOT would escalate to the next point threshold, resulting in a LEO referral and possible LEO arrest, was greater than for passengers selected randomly. Our analysis showed that because of the discrepancy in the points accrued prior to the start of the referral process, passengers who were selected on the basis of SPOT behavioral indicators were more likely to be referred to a LEO than randomly selected passengers.
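This asymmetry is easy to demonstrate by simulation: give both groups identically distributed point accrual during referral screening and let them differ only in their starting totals. Everything numeric below is invented; the simulated gap shows only that the design itself, rather than passenger behavior, can produce a large difference in LEO referral rates.

```python
# Simulation sketch of the starting-point asymmetry described above:
# both groups accrue identically distributed points during referral
# screening, and differ only in their starting totals. Thresholds and
# distributions are invented; the gap below reflects the design alone.
import numpy as np

rng = np.random.default_rng(11)
SPOT_THRESHOLD, LEO_THRESHOLD = 4, 8  # invented
n = 100_000

# Points accrued during referral screening: same distribution for both.
accrued = rng.poisson(1.5, size=(2, n))

start_spot = np.full(n, SPOT_THRESHOLD)   # SPOT selections begin at threshold
start_random = rng.poisson(0.5, n)        # random selections begin near zero

leo_rate_spot = np.mean(start_spot + accrued[0] >= LEO_THRESHOLD)
leo_rate_random = np.mean(start_random + accrued[1] >= LEO_THRESHOLD)
print(f"LEO referral rate, SPOT-selected: {leo_rate_spot:.2%}")
print(f"LEO referral rate, random:        {leo_rate_random:.2%}")
```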
Our analysis indicates that the validation study design could have been improved by treating each group similarly, regardless of the passengers' accumulated points. For example, as one possible approach, both groups could have been referred to LEOs only in cases where BDOs discovered a serious prohibited or illegal item. Established study design practices state that identifying key factors known to influence desired evaluation outcomes helps in forming treatment and comparison groups that are as similar as possible, thus strengthening the analyses' conclusions.

Additionally, once referred to a LEO, passengers selected at random were arrested for different reasons than those selected on the basis of SPOT indicators, which suggests that the two groups of passengers were subjected to different types of screening. All randomly selected passengers who were identified as high risk, referred to a LEO, and ultimately arrested possessed fraudulent documents or serious prohibited or illegal items. In contrast, most of the passengers arrested after having been referred on the basis of SPOT behavioral indicators were arrested for other reasons, including outstanding law enforcement warrants, public intoxication, suspected illegal entry into the United States, and disorderly conduct. Such differences in the reasons for arrest suggest that referral screening methods may have varied according to the method of selection for screening, consistent with the concerns of the TAC members and the contractor. Thus, because randomly selected passengers were assigned points differently during screening and consequently referred to LEOs far less often than those referred by SPOT, and because being referred to a LEO is a necessary condition for an arrest, the results related to the LEO arrest metric are questionable and cannot be relied upon to demonstrate SPOT program effectiveness.

To help ensure that BDOs carried out the comparison study as intended, protocols for randomly selecting passengers were established so that the selection methods would be the same across airports. The contractor emphasized that deviating from the prescribed protocol could introduce systematic differences across airports in the methods of random screening, which could bias the results. To verify that airports and BDOs followed the study protocols, the contractor conducted monitoring visits at 17 of the 43 participating airports (40 percent). The first monitoring visits occurred 6 months after data collection began, and 9 of the 17 airports were not visited until the last 2 months of the study, as shown in figure 5. Consequently, for these 9 airports, the contractor could not have addressed the deviations from the protocols identified during the data-monitoring visits until the last weeks of data collection. In its April 2011 report on the 17 monitoring visits, the contractor identified as the most crucial issue that BDOs deviated from the random selection protocol in ways that did not meet the criteria for systematic random selection, a method illustrated in the generic sketch below.
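Systematic random selection removes individual judgment from the choice of passengers: a random starting point is drawn, and every k-th passenger thereafter is selected. The sketch below is a generic illustration of such a protocol, not the study's actual procedure, and the sampling interval shown is an assumed value.

```python
import random

def systematic_random_selection(passenger_stream, interval=10):
    """Generic systematic random selection: draw a random start within the
    first `interval` passengers, then take every `interval`-th passenger.
    The interval of 10 is an assumed value for illustration."""
    start = random.randrange(interval)  # random offset removes human judgment
    return [p for i, p in enumerate(passenger_stream) if i % interval == start]

# Example: select from a stream of 100 passengers numbered 0-99.
print(systematic_random_selection(range(100), interval=10))
# one possible output: [3, 13, 23, 33, 43, 53, 63, 73, 83, 93]
```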
For example, the contractor found that across airports, local TSA officials had independently decided to exclude certain types of passengers from the study because the officials felt it was unreasonable to subject those passengers to referral screening. At one airport visited less than 4 weeks before data collection ended, BDOs misunderstood the protocols and incorrectly excluded a certain type of passenger. As a result, certain groups of potentially lower-risk passengers were systematically excluded from the population eligible for random selection. In addition, the contractor found that some BDOs used their own methods to select passengers rather than the specified random selection protocol. The contractor reported that, if left uncorrected, this deviation from the protocols could increase the likelihood of introducing systematic bias into the study. For example, at one airport visited less than 6 weeks before data collection ended, BDOs selected passengers by attempting to generate numbers they thought were random, calling out numbers spontaneously, such as "seven," and using the numbers to select the seventh passenger, instead of following the random selection protocol. At another airport visited less than 6 weeks before data collection ended, and contrary to the random selection protocols, BDOs, rather than the data collection coordinator, selected the passengers to undergo referral screening. Although these deviations from the protocol may not have produced a biased sample, any deviation from the selection protocol suggests that BDOs' judgment may have affected the random selection and screening processes in the comparison study.

In addition to the limitations cited above, the April 2011 validation study noted other limitations, such as the limited data available for measuring high-risk passenger outcomes, the lack of information on the specific location within the airport where each SPOT indicator was first observed, and difficulties in differentiating whether passengers were referred because of observed behaviors related to elevated indicators of stress, fear, and deception, or for other reasons. The validation study concluded that further research to fully validate and evaluate the SPOT program was warranted. Similarly, the TAC report cited TAC members' concerns that the validation study results "could be easily misinterpreted given the limited scope of the study and the caveats to the data," and that the "results should be presented as a first step in a broader evaluation process." Thus, limitations in the study's design and in monitoring how it was implemented at airports could have affected the accuracy of the study's conclusions and limited their usefulness in determining the effectiveness of the SPOT program. As a result, the incidence of high-risk passengers in the normal passenger population remains unknown, and the incidence of high-risk passengers identified by random selection cannot be compared with the incidence of those identified using SPOT methods.

TSA plans to collect and analyze additional performance data needed to assess the effectiveness of its behavior detection activities. In response to recommendations we made in May 2010 to conduct a cost-benefit analysis and a risk assessment, TSA completed two analyses of the BDA program in December 2012 but needs to complete additional analysis to fully address our recommendations. Specifically, TSA completed a return-on-investment analysis and a risk-based allocation analysis, both of which were designed in part to inform the future direction of the agency's behavior detection activities, including the SPOT program.
The return-on-investment analysis assessed the additional value that BDOs add to TSA's checkpoint screening system and concluded that BDOs provide integral value to the checkpoint screening process. However, contrary to best practices, the report did not fully support its assumptions related to threat frequency or the direct and indirect consequences of a successful attack. For example, TSA officials told us that the threat and consequence assumptions in the analysis were designed to be consistent with the 2013 Transportation Security System Risk Assessment (TSSRA), but the analysis did not explain why a catastrophic event was the only relevant threat scenario considered when determining consequence. Additionally, the analysis relied on assumptions regarding the effectiveness of BDOs and other countermeasures that were based on questionable information. For example, the analysis relied on results reported in the April 2011 validation study—which, as discussed earlier, had several methodological limitations—as evidence of the effectiveness of BDOs. Further, a May 2013 DHS OIG report found that TSA could not accurately assess the effectiveness or evaluate the progress of the SPOT program because it had not developed a system of performance measures at the time of the OIG review. In response, TSA provided the OIG with a draft version of its performance metrics plan; this plan has since been finalized and is discussed further below.

TSA's risk-based allocation analysis found that an additional 584 BDO FTEs should be allocated to smaller airports to cover existing gaps in physical screening coverage and performance, an action that, if implemented, would increase the annual budget by approximately $42 million. One of the primary assumptions in the risk-based allocation analysis relates to the effectiveness of BDOs. For example, the analysis suggests that BDOs may be effective in identifying threats to aviation security where gaps exist in physical screening coverage and performance, including the use of walk-through metal detectors and advanced imaging technology machines. However, TSA has not evaluated the effectiveness of BDOs in comparison with these other screening methods.
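To the extent that such analyses follow the standard risk formulation, in which risk is the product of threat likelihood, vulnerability, and consequence, their conclusions move directly with the threat and consequence assumptions discussed above. The sketch below illustrates that sensitivity; the formulation is a common one, and every number in it is a placeholder rather than a value from TSA's analyses.

```python
def annual_risk(threat_per_year, vulnerability, consequence):
    """Standard expected-loss formulation:
    risk = threat likelihood x vulnerability x consequence."""
    return threat_per_year * vulnerability * consequence

def return_on_investment(risk_without, risk_with, annual_cost):
    """Risk reduction purchased per dollar of countermeasure cost."""
    return (risk_without - risk_with) / annual_cost

# Placeholder inputs -- not values from TSA's return-on-investment analysis.
without_bdos = annual_risk(threat_per_year=0.01, vulnerability=0.5, consequence=1e9)
with_bdos = annual_risk(threat_per_year=0.01, vulnerability=0.4, consequence=1e9)
print(return_on_investment(without_bdos, with_bdos, annual_cost=2e8))  # 0.005

# Doubling the assumed threat frequency doubles the computed benefit,
# which is why unsupported threat and consequence assumptions matter.
```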
In response to an additional recommendation in our May 2010 report to develop a plan for outcome-based performance measures, TSA completed a performance metrics plan in November 2012, which details the performance measures TSA needs to determine whether its behavior detection activities are effective and identifies the gaps in its current data collection efforts. The plan defined an ideal set of 40 metrics within three major categories that BDA needs to collect to understand and measure the performance of its behavior detection activities. TSA then identified the gaps in its current data collection efforts, such as, under the human factors subcategory, data on BDO fatigue levels and on the staffing changes needed to reduce fatigue's negative impact on BDO performance, as shown in figure 6. As of June 2013, TSA had collected some information for 18 of the 40 metrics the plan identified.

Once collected, the data identified by the plan may help support the completion of a more substantive return-on-investment analysis and risk-based allocation analysis, but according to TSA's November 2012 plan, TSA is currently collecting little to none of the data required to assess the performance and security effectiveness of BDA or the SPOT program. For example, TSA does not currently collect data on the percentage of time a BDO is present at a checkpoint or other areas in the airport while it is open. Without this information, the assumptions contained in TSA's risk-based allocation analysis cannot be validated. That analysis identified the existing BDO coverage level at the airports where SPOT was deployed in 2011 and based its recommendation for an additional 584 BDOs on this coverage level. In May 2013, TSA began to implement a new data collection system, BDO Efficiency and Accountability Metrics (BEAM), designed to track and analyze BDO daily operational data, including BDO locations and time spent performing different activities. According to BDA officials, these data will allow the agency to gain insight into how BDOs are utilized and improve analysis of the SPOT program.

The performance metrics plan may also provide other useful information in support of some of the other assumptions in TSA's risk-based allocation analysis and return-on-investment analysis. For example, both analyses assumed that a BDO can meaningfully assess 450 passengers per hour and that fatigue would degrade this rate over the course of a day. However, according to the performance metrics plan, TSA does not currently collect any of the information required to assess the number of passengers meaningfully assessed by BDOs, BDOs' level of fatigue, or the impact that fatigue has on their performance. To address these and other deficiencies, the performance metrics plan identifies 22 initiatives that were under way or planned as of November 2012, including efforts discussed earlier in this report, such as the indicator study and efforts to improve the SPOT compliance teams, among others. For additional information about the metrics that will result from these initiatives, see appendix V. These data could help TSA assess the performance and security effectiveness of BDA and the SPOT program and find ways to become more efficient with fewer resources to meet the federal government's long-term fiscal challenges, as recommended by federal government efficiency initiatives.

In lieu of these data, TSA uses arrest and LEO referral statistics to help track the program's activities. Of the approximately 61,000 SPOT referrals made over the 2-year period at the 49 airports we analyzed, approximately 8,700 (14 percent) resulted in a referral to a LEO. Of these LEO referrals, 365 (4 percent) resulted in an arrest. The proportion of LEO referrals that resulted in an arrest (the arrest ratio) could be an indicator of the potential relationship between the SPOT behavioral indicators and an arrest. As shown in figure 7, 99.4 percent of the passengers who were selected for referral screening, that is, further questioning and inspection by a BDO, were not arrested. About 4 percent of passengers referred to LEOs were arrested; the other 96 percent were not. The SPOT database identifies six reasons for arrest: (1) fraudulent documents, (2) illegal alien, (3) other, (4) outstanding warrants, (5) suspected drugs, and (6) undeclared currency.
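The percentages above follow directly from the reported counts. A minimal sketch of the arithmetic, using the rounded figures from the text, is shown below.

```python
spot_referrals = 61_000  # approximate SPOT referrals, fiscal years 2011-2012
leo_referrals = 8_700    # SPOT referrals escalated to a law enforcement officer
arrests = 365            # LEO referrals that resulted in an arrest

print(f"LEO referral rate: {leo_referrals / spot_referrals:.0%}")  # ~14%
print(f"Arrest ratio:      {arrests / leo_referrals:.1%}")         # ~4.2%
print(f"Not arrested:      {1 - arrests / spot_referrals:.1%}")    # ~99.4%
```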
In February 2013, BDA officials said that between 50 and 60 SPOT referrals had been forwarded by the Federal Air Marshal Service to other law enforcement agencies for further investigation to identify potential ties to terrorism. For example, TSA provided documentation of three suspicious incident reports from 2011 involving passengers who were referred by BDOs to LEOs based on behavioral indicators and who were later found to be in possession of large sums of U.S. currency. According to a FAMS report on these incidents, the identification of large amounts of currency leaving the United States could be the first step in disrupting funding for terrorist organizations or another form of criminal enterprise that may or may not be related to terrorism. TSA officials said it is difficult to identify a terrorism-related nexus in these referrals because TSA is rarely, if ever, informed of the outcomes of the investigations conducted by other law enforcement agencies, and thus has no way of knowing whether these SPOT referrals were ultimately connected to terrorism-related activities or investigations.

Standards for Internal Control in the Federal Government calls for agencies to report on the performance and effectiveness of their programs. However, according to the performance metrics plan, TSA will require at least 3 more years and additional resources before it can begin to report on the performance and security effectiveness of BDA or the SPOT program. Given the scope of the proposed activities and some of the challenges TSA has faced in its earlier efforts to assess the SPOT program at the national level, completing the activities in the time frames outlined in the plan will be difficult. In particular, the plan itself notes that it is unrealistic for TSA to evaluate BDOs' contribution to security effectiveness at each airport within the 3-year time frame. According to best practices for program management of acquisitions, technologies should be demonstrated to work reliably in their intended environment prior to program deployment. Further, according to OMB guidance accompanying the fiscal year 2014 budget, it is incumbent upon agencies to use resources on programs that have been rigorously evaluated and determined to be effective, and to fix or eliminate programs that have not demonstrated results.

TSA has taken a positive step toward determining the effectiveness of BDA's behavior detection activities by developing the performance metrics plan, as we recommended in May 2010. However, 10 years after the development of the SPOT program, TSA cannot demonstrate the effectiveness of its behavior detection activities. Until TSA can provide scientifically validated evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security, the agency risks funding activities that have not been determined to be effective.

TSA has taken several positive steps to validate the scientific basis and strengthen program management of BDA and the SPOT program, which has been in place for over 6 years at a total cost of approximately $900 million since 2007. Nevertheless, TSA has not demonstrated that BDOs can consistently interpret the SPOT behavioral indicators, a fact that may contribute to varying rates of passenger referrals for additional screening.
The subjectivity of the SPOT behavioral indicators and the variation in BDO referral rates raise questions about the continued use of behavioral indicators for detecting passengers who might pose a risk to aviation security. Furthermore, decades of peer-reviewed, published research on the complexities of detecting deception through human observation also call into question the scientific underpinnings of TSA's behavior detection activities. While DHS commissioned a 2011 study to help demonstrate the validity of its approach, the study's findings cannot be used to demonstrate the effectiveness of SPOT because of methodological limitations in the study's design and data collection. While TSA has several efforts under way to assess the behavioral indicators and expand its collection of data to develop performance metrics for its behavior detection activities, these efforts are not expected to be completed for several years, and TSA has indicated that additional resources are needed to complete them. Consequently, after 10 years of implementing and testing the SPOT program, TSA cannot demonstrate that the agency's behavior detection activities can reliably and effectively identify high-risk passengers who may pose a threat to the U.S. aviation system.

To help ensure that security-related funding is directed to programs that have demonstrated their effectiveness, Congress should consider the absence of scientifically validated evidence for using behavioral indicators to identify aviation security threats, as documented in this report, when weighing the potential benefits of behavior detection activities against their cost in future funding decisions related to aviation security.

To help ensure that security-related funding is directed to programs that have demonstrated their effectiveness, we recommend that the Secretary of Homeland Security direct the TSA Administrator to limit future funding support for the agency's behavior detection activities until TSA can provide scientifically validated evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security.

We provided a draft of this report to DHS and the Department of Justice (DOJ) for review and comment. We also provided excerpts of this report to subject matter experts to ensure that the information in the report was current, correct, and factual. DOJ did not have any comments, and we incorporated technical comments from the subject matter experts as appropriate. DHS provided written comments, which are printed in full in appendix VI, and technical comments, which we incorporated as appropriate.

DHS did not concur with our recommendation that the Secretary of Homeland Security direct the TSA Administrator to limit future funding support for the agency's behavior detection activities until TSA can provide scientifically validated evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security. Citing concerns with the findings and conclusions, DHS identified two main areas where it disagreed with information presented in the report: (1) the findings related to the SPOT validation study and (2) the findings related to the research literature. Further, DHS provided information on its investigation of profiling allegations. We disagree with the statements DHS made in its letter, as discussed in more detail below.
With regard to the findings related to the SPOT validation study, DHS stated in its letter that we used different statistical techniques when we replicated the analysis of SPOT indicators presented in the April 2011 validation study, a course of action that, in DHS's view, introduced error into our analysis and resulted in "misleading" conclusions. We disagree. As described in this report, we obtained the validation study dataset from the DHS contractor and replicated the analyses using the same techniques that the contractor used in its analyses of SPOT indicators. In addition to replicating the contractor's split-sample approach, as described in appendixes II and III of this report, we extended those analyses using the full sample of referral data to increase our ability to detect significant associations.

In both the replication of the study's analyses and our extended analyses, we found essentially the same result as the validation study in one respect: some SPOT behavioral indicators were positively and significantly related to one or more of the outcome measures. Specifically, the validation study reported that 14 of the 41 SPOT behavioral indicators were positively and significantly related, and we found that 18 of the 41 were. However, our findings regarding negatively and significantly related SPOT indicators were not consistent with the validation study. Specifically, we found that 20 of the 41 behavioral indicators were negatively and significantly related to one or more of the study outcomes (see app. II). That is, we identified 20 SPOT behavioral indicators that were more commonly associated with passengers who were not identified as high-risk passengers than with passengers who were. In other words, some of the SPOT indicators that behavior detection officers are trained to detect are associated with passengers whom DHS defined as low risk. Our results were not consistent with the validation study because the study did not report any indicators that were negatively and significantly correlated with one or more of the outcome measures. Further, because of limitations with the SPOT referral data that we reported in May 2010 and again in this report, the data the validation study used to examine behavioral indicators were not sufficiently reliable for a statistical analysis of the association between the indicators and high-risk passenger outcomes; we used these data only to replicate the validation study's findings.

Further, DHS stated in its letter that the TAC agreed with the study's conclusion that SPOT was substantially better at identifying high-risk passengers than a random screening protocol. We disagree with this statement as well. While the TAC report stated that TAC members had few methodological concerns with the way the contractor carried out its research, the members did not receive detailed information on the study, including the validation study data and the final report containing the SPOT validation study results.
Specifically, as discussed in our report and cited in the TAC report, multiple TAC members had concerns about some of the conclusions in the validation study and suggested that the contractor responsible for completing the study consider not reporting some of its results, or moving them to an appendix rather than including them as a featured portion of the report. Moreover, because the TAC did not receive detailed information about the contents of the SPOT referral report, the individual indicators used in the SPOT program, the validation study data, or the final report containing complete details of the SPOT validation study results, the TAC did not have access to all of the information that we used in our analysis. As discussed in our report, the TAC report noted that several TAC members felt that this lack of information hampered their ability to perform their assigned tasks. Thus, we continue to believe that our conclusion related to the validation study results is valid, and contrary to DHS's statement, we do not believe that the study provides useful data for understanding behavior detection.

With regard to the findings related to the research literature, DHS stated in its letter that we did not consider all the research that was available and that S&T had conducted research—while not published in academic circles for peer review because of various security concerns—that supported the use of behavior detection. DHS also stated that research cited in the report "lacked ecological and external validity" because it did not relate to the use of behavior detection in an airport security environment. We disagree. Specifically, as described in the report, we reviewed several documents on behavior detection research that S&T and TSA officials provided to us, including an unclassified and a classified literature review that S&T had commissioned. Further, after meetings in June and July 2013, S&T officials provided additional studies, which we reviewed and included in the report as applicable. We also included research in the report on the use of behavioral indicators that correspond closely to indicators identified in SPOT procedures as indicative of stress, fear, or deception. These studies, many of which were included in the meta-analyses we reviewed, were conducted in a variety of settings, including high-stakes situations where the consequences are great, such as a police interview with an accused murderer, and with different types of individuals, including law enforcement personnel. The meta-analyses we reviewed, which collectively included research from over 400 separate studies related to detecting deception conducted over the past 60 years, found that the ability of human observers to accurately identify deceptive behavior based on behavioral cues or indicators is the same as or slightly better than chance (54 percent).

Further, in its letter, DHS cited a 2013 RAND report, which concluded that there is current value and unrealized potential in using behavioral indicators as part of a system to detect attacks. We acknowledge that behavior detection holds promise for use in certain circumstances and in conjunction with certain other technologies. However, the RAND report DHS cited refers to behavioral indicators that are defined and used significantly more broadly than those in the SPOT program. The indicators reviewed in the RAND report are not used in the SPOT program, nor could they be used in real time in an airport environment.
Further, the RAND report's findings cannot be used to support TSA's behavior detection activities because the study stated that it could not make a determination of SPOT's effectiveness, as information on the program was not in the public domain.

DHS also stated in its letter that it has several efforts under way to improve its behavior detection program and the methodologies used to evaluate it. These include optimizing its behavior detection procedures and, as part of its 3-year performance metrics plan, beginning testing by the third quarter of fiscal year 2014 that uses robust test and evaluation methods similar to the operational testing conducted in support of technology acquisitions. We are encouraged by TSA's plans in this area. However, TSA did not provide supporting documentation describing how it will incorporate the robust data collection and authentication protocols discussed in DHS's letter. Such documentation is to be completed prior to beginning any operational testing and might include a test and evaluation master plan describing, among other things, the tests needed to determine system technical performance, operational effectiveness or suitability, and any limitations.

Additionally, in its letter, DHS stated that the omission of research related to verbal indicators of deception was misleading because a large part of BDOs' work is interacting with passengers and assessing whether passengers' statements match their behaviors, or whether passengers' trip stories agree with their travel documents and accessible property. While BDOs' interactions with passengers may elicit useful information, SPOT procedures indicate that casual conversation—voluntary informal interviews conducted by BDOs with passengers referred for additional screening—occurs after the passengers have been selected for a SPOT referral, not as a basis for selecting them for referral. Further, because these interviews are voluntary, passengers are under no obligation to respond to the BDOs' questions, and thus information on passengers may not be systematically collected. As noted in our report, promising research on behavioral indicators cited in the RAND report and other literature focuses on using indicators in combination with automated technologies and certain interview techniques, such as asking unanticipated questions. However, when interviewing passengers referred for additional screening, BDOs do not currently have access to the automated technologies discussed in the RAND report.

Further, DHS stated that the goal of the SPOT program is to identify individuals exhibiting behavior indicative of simple emotions such as fear or stress and to reroute them to a higher level of screening, and that the program does not attempt to specifically identify persons engaging in lying or terrorist acts. However, DHS also stated in its response that "SPOT uses a broader array of indicators, including stress and fear detection as they relate to high-stakes situations where the consequences are great, for example, suicide attack missions." As noted in the report, TSA's program and budget documents associated with behavior detection activities state that the purpose of these activities is to identify high-risk passengers based on behavioral indicators of mal-intent.
For example, the strategic plan notes that, in concert with other security measures, behavior detection activities "must be dedicated to finding individuals with the intent to do harm, as well as individuals with connections to terrorist networks that may be involved in criminal activity supporting terrorism." Our conclusions, which were confirmed in discussions with subject matter experts and through an independent review of studies, indicate that scientifically validated evidence does not demonstrate that behavioral indicators, as used by unaided human observers, can identify passengers who may pose a threat to aviation security.

DHS also cited the National Research Council's 2008 report to support its use of SPOT. That report, which we reviewed as part of our 2010 review of the SPOT program, noted that behavior and appearance monitoring might be able to play a useful role in counterterrorism efforts but also stated that a scientific consensus does not exist on whether any behavioral surveillance or physiological monitoring techniques are ready for use in the counterterrorist context, given the present state of the science. According to the National Research Council report, an information-based program, such as a behavior detection program, should first determine whether a scientific foundation exists and should use scientifically valid criteria to evaluate its effectiveness before going forward. The report also stated that programs should have a sound experimental basis and that documentation on a program's effectiveness should be reviewed by an independent entity capable of evaluating the supporting scientific evidence.

With regard to the information DHS provided related to profiling, DHS stated that its OIG completed an investigation, at the request of TSA, into allegations that surfaced at Boston Logan Airport and concluded that the allegations could not be substantiated. However, while the OIG's July 2013 report of investigation on behavior detection officers in Boston concluded that "there was no indication that BDOs racially profiled passengers in order to meet production quotas," the report also stated that there was evidence of "appearance profiling."

In stating its nonconcurrence with our recommendation to limit future funding in support of behavior detection activities, DHS stated that TSA's overall security program is composed of interrelated parts and that disrupting one piece of the multilayered approach may have an adverse impact on other pieces. Further, DHS stated that the behavior detection program should continue to be funded at current levels to allow BDOs to screen passengers while the optimization process proceeds. We disagree. As noted in the report, TSA has not developed the performance measures that would allow it to assess the effectiveness of its behavior detection activities compared with other screening methods, such as physical screening. As a result, the impact of behavior detection activities on TSA's overall security program is unknown. Further, not all screening methods are present at every airport, and TSA has modified the screening procedures and equipment used at airports over time. These modifications have included discontinuing screening equipment that was determined to be unneeded or ineffective.
Therefore, we continue to believe that providing scientifically validated evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security is critical to the implementation of TSA's behavior detection activities. Further, OMB guidance highlights the importance of using resources on programs that have been rigorously evaluated and determined to be effective, and best practices for program management of acquisitions state that technologies should be demonstrated to work reliably in their intended environment prior to program deployment. Consequently, we have added a matter for congressional consideration to this report to help ensure that TSA provides information, including scientifically validated evidence, that supports the continued use of its behavior detection activities in identifying threats to aviation security.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 5 days from the report date. We are sending copies of this report to the Secretary of Homeland Security; the TSA Administrator; the Attorney General of the United States; and interested congressional committees as appropriate. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VII.

According to the Screening of Passengers by Observation Techniques (SPOT) program's standard operating procedures, behavior detection officers (BDO) must apply the SPOT behavioral indicators to passengers without regard to race, color, religion, national origin, ethnicity, sexual orientation, or disability. Since 2010, the Transportation Security Administration (TSA) and the Department of Homeland Security's (DHS) Office of Inspector General (OIG) have examined allegations that BDOs profiled passengers based on race, ethnicity, or nationality at three airports—Newark Liberty International Airport (Newark), Honolulu International Airport (Honolulu), and Boston Logan International Airport (Boston)—and TSA has taken action to address these allegations. Specifically, in January 2010, TSA concluded an internal investigation at Newark of allegations that BDOs used specific criteria related to the race, ethnicity, or nationality of passengers in order to select and search those passengers more extensively than would have occurred without the use of these criteria. The investigation was conducted by a team of two BDO managers from Boston to determine whether two BDO managers at Newark had established quotas for SPOT referrals to evaluate the performance of their subordinate BDOs. The investigation also sought to determine whether these managers encouraged profiling of passengers in order to meet the quotas they had established.
The investigating team concluded that no evidence existed to support the allegation of a quota system, but it noted a widespread perception among BDOs that higher referral rates led to promotion, and the "overwhelming majority of BDOs" expressed concern that the BDO managers' "focus was solely on increasing the number of referrals and LEO calls." The investigating team stated that the information collected regarding the allegation of profiling supported a reasonable conclusion that such activity was both directed and carried out on a limited basis at Newark, based on one manager's inappropriate direction to BDOs regarding profiling of passengers, racial comments, and the misuse of information intended for situational awareness purposes only. According to TSA officials, disciplinary action taken against this manager resulted in the manager's removal.

Additionally, in 2011, TSA's Office of Inspection (OOI) conducted an investigation of racial profiling allegations against BDOs at Honolulu. The investigation consisted of a review of Equal Employment Opportunity (EEO) complaints, and OOI did not find evidence to support the profiling allegations in the SPOT program. In July 2012, OOI conducted a compliance inspection at Boston, during which allegations of profiling by BDOs surfaced. Specifically, during interviews with inspectors, allegations surfaced that BDOs were profiling passengers for the purpose of raising the number of law enforcement referrals. These accusations, reported in a September 2012 OOI memorandum, included written complaints from BDOs who claimed that other BDOs were selecting passengers for referral screening based on their ethnic or racial appearance rather than on the basis of the SPOT behavioral indicators. The allegations were referred to the OIG, which opened an investigation into them in August 2012. According to OIG officials, the investigation was completed and its final report was provided to TSA in August 2013.

In August 2012, the Secretary of Homeland Security issued a memorandum directing TSA to take a number of actions in response to allegations of racial profiling by BDOs. These actions include (1) revising the SPOT standard operating procedures to, among other things, clarify that passengers who are unwilling or uncomfortable with participating in an interactive discussion and responding to questions will not be pressured by BDOs to do so; (2) providing refresher training for all BDOs that reinforces antidiscrimination requirements; and (3) communicating to BDO supervisors that performance appraisals should not depend on achieving either a high number of referrals or a high arrest rate from those referrals, but rather on demonstrated vigilance and skill in applying the SPOT procedures. As of June 2013, TSA, together with the DHS Acting Officer for Civil Rights and Civil Liberties and the Counsel to the Secretary of Homeland Security, had completed several of these action items, and others were under way. For example, in April 2013, the Secretary of Homeland Security sent a memo to all DHS component heads stating that it is DHS's policy to prohibit the consideration of race or ethnicity in DHS's investigation, screening, and enforcement activities in all but the most exceptional instances.

During our visits to four airports, we asked a random sample of 25 BDOs to what extent they had seen BDOs in their airport referring passengers based on race, national origin, or appearance rather than behaviors.
These responses are not generalizable to the entire BDO population at SPOT airports. Of the 25 randomly selected BDOs we interviewed, 20 said they had not witnessed profiling, and 5 (including at least 1 from each of the four airports we visited) said that, according to their personal observations, profiling was occurring at their airports. In addition, 7 other BDOs contacted us over the course of our review to express concern about profiling of passengers that they had witnessed. We did not substantiate these specific claims.

In an effort to further assess the race, sex, and national origin of passengers referred by BDOs for additional screening, we analyzed the available information in the SPOT referral database and the Federal Air Marshal Service's (FAMS) Transportation Information Sharing System (TISS) database. However, we found that the SPOT referral database does not allow for the recording of information such as race or gender, and without recording these data for every referral, it is difficult to disprove or substantiate such accusations. Since program-wide data on race were not available in the SPOT database, we analyzed a subset of available arrest data entered into the TISS database, which does allow race to be recorded. However, because there is no unique identifier linking referrals in the SPOT database to information entered into TISS, we encountered obstacles when we attempted to match the two databases. For the SPOT referrals we were able to match, we found that data on race were inconsistently recorded in TISS. The limitations associated with matching the two databases and the incompleteness of the race data in TISS made analyzing trends or anomalies in the data impractical.

In March 2013, BDA officials stated that they had initiated a feasibility study to determine the efficacy of collecting data on the race and national origin of passengers referred by BDOs. A pilot is to be conducted at approximately five airports, which have not yet been selected, to collect data and examine whether this type of data collection is feasible and whether the data can be used to identify airport-specific or system-wide trends in referrals. According to BDA officials, the purpose of the study is to examine whether disparities exist in the referral trends and, if so, whether these differences suggest discrimination or bias in the referral process. The pilot is also to include an analysis of the broader demographics of the flying public, not just those referred by BDOs for additional screening, which is information that TSA had not previously collected. Additional information on the characteristics of the flying public that can be compared with the characteristics of passengers referred under the SPOT program—if TSA determines these data can feasibly be collected—could help TSA reach reasonable conclusions about whether allegations of passenger profiling can be substantiated.
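Because the SPOT and TISS databases share no unique identifier, any attempt to match them, as described above, must rely on approximate keys such as airport and incident time, an approach that is inherently error-prone. The sketch below illustrates that kind of approximate linkage; the field names and the matching window are hypothetical, and the actual schemas of the two databases are not assumed.

```python
from datetime import datetime, timedelta

def link_records(spot_rows, tiss_rows, window_minutes=60):
    """Approximate linkage on airport code and incident time within a
    window. Field names and the 60-minute window are hypothetical;
    without a shared unique ID, matches remain uncertain."""
    window = timedelta(minutes=window_minutes)
    return [
        (s, t)
        for s in spot_rows
        for t in tiss_rows
        if s["airport"] == t["airport"] and abs(s["time"] - t["time"]) <= window
    ]

spot = [{"airport": "BOS", "time": datetime(2012, 5, 1, 9, 30)}]
tiss = [{"airport": "BOS", "time": datetime(2012, 5, 1, 9, 55), "race": None}]
print(link_records(spot, tiss))  # one approximate (not guaranteed) match
```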
The validation study reported that 14 of the 41 SPOT behavioral indicators were positively and significantly related to one or more of the study outcomes, but it did not report that any of the indicators were negatively and significantly related to the outcome measures. That is, passengers exhibiting the SPOT behaviors that were positively and significantly related were more likely to be arrested, to possess fraudulent documents, or to possess prohibited or illegal items. Conversely, passengers exhibiting the behaviors that were negatively and significantly related were less likely to be arrested, to possess fraudulent documents, or to possess serious prohibited or illegal items than those who did not exhibit the behavior. While recognizing that the SPOT referral data used in this analysis were potentially unreliable, we replicated the SPOT indicator analysis with the full set of SPOT referral cases from January 1, 2006, to October 31, 2010, and found, consistent with the validation study, that 18 of the 41 behavioral indicators were positively and significantly related to one or more of the outcome measures. We also found, however, that 20 of the 41 behavioral indicators were negatively and significantly related to one or more of the study outcomes. That is, we identified 20 SPOT behavioral indicators that were more commonly associated with passengers who were not identified as high-risk passengers than with passengers who were. Of the 41 behavioral indicators in the analysis, almost half of the passengers referred by BDOs for referral screening exhibited a single indicator.

This report addresses the following questions:

1. To what extent does available evidence support the use of behavioral indicators to identify aviation security threats?

2. To what extent does TSA have the data necessary to assess the effectiveness of the SPOT program in identifying threats to aviation security?

In addition, this report provides information on TSA's response to recent allegations of racial profiling in the SPOT program, which can be found in appendix I.

To obtain background information and identify changes in the SPOT program since our May 2010 report, we conducted a literature search to identify relevant reports, studies, and articles on passenger screening and deceptive behavior detection. We reviewed program documents in place during the period from October 2010 through June 2013, including SPOT standard operating procedures, BDO performance standards and guidance, a strategic plan, and a performance metrics plan. We met with headquarters TSA and Behavior Detection and Analysis (BDA) program officials to determine the extent to which TSA had implemented the recommendations in our May 2010 report and to obtain an update on the SPOT program. In addition, we met with officials from U.S. Customs and Border Protection and the Federal Bureau of Investigation (FBI) Behavioral Science Unit to determine the extent to which they use behavior detection techniques. We also interviewed officials in DHS's OIG who were working on a related audit.

We analyzed data for fiscal years 2011 and 2012 from TSA's SPOT referral database, which is to record all incidents in which BDOs refer passengers for additional screening, including the airport, the time and date of the referral, the names of the BDOs involved, the BDOs' observations of the passengers' behaviors, and any actions taken by law enforcement officers, if applicable. We also analyzed data for fiscal years 2011 and 2012 from the FAMS Transportation Information Sharing System (TISS) database, a law enforcement database designed to retrieve, assess, and disseminate intelligence information regarding transportation security to FAMS and other federal, state, and local law enforcement agencies. We reviewed available documentation on these databases, such as user guides, data audit reports, and training materials, and interviewed individuals responsible for maintaining these systems.
In addition, we analyzed data on BDOs working at airports during this 2-year period, such as the date they started at TSA, the date they started as BDOs, race, gender, and performance rating scores, from TSA's Office of Human Capital, as well as data on the number of hours worked by these BDOs, provided by TSA's Office of Security Operations officials and drawn from the U.S. Department of Agriculture's National Finance Center database, which handles payroll and personnel data for TSA and other federal agencies. Further, we analyzed financial data for fiscal years 2007 through 2012 provided by BDA to determine the expenditures associated with the SPOT program. Additional information about the steps we took to assess the reliability of these data is discussed below. We interviewed BDA officials in the Office of Security Capabilities and officials in the Office of Human Capital on the extent to which they collect and analyze these data.

We conducted visits to four airports—Orlando International in Orlando, Florida; Detroit Metropolitan Wayne County in Detroit, Michigan; Logan International in Boston, Massachusetts; and John F. Kennedy International in New York City, New York. We selected this nonprobability sample based on the airports' size and participation in behavior detection programs. As part of our visits, we interviewed a total of 25 BDOs using a semistructured questionnaire; their responses are not generalizable to the entire BDO population at SPOT airports. These BDOs were randomly selected from a list of BDOs on duty at the time of our visit. We also interviewed BDO managers and TSA airport managers, such as federal security directors, who oversee the SPOT program at the airports. In addition, to obtain law enforcement officials' perspectives on the SPOT program and their experiences in responding to SPOT referrals, we interviewed officials from the local law enforcement agencies with jurisdiction at the four airports we visited (the Orlando Police Department, the Wayne County Airport Authority, the Massachusetts State Police, and the Port Authority of New York and New Jersey) and federal law enforcement officials assigned to the airports, including officials from U.S. Customs and Border Protection, the FBI, and U.S. Immigration and Customs Enforcement.

In nonprobability sampling, a sample is selected from knowledge of the population's characteristics or from a subset of a population in which some units have no chance, or an unknown chance, of being selected. A nonprobability sample may be appropriate for providing illustrative examples or information on a specific group within a population, but it cannot be used to make inferences about, or generalize to, the population from which the sample is taken. The results of our visits and interviews provided perspectives on the effectiveness of the SPOT program from local airport officials and opportunities to independently observe TSA's behavior detection activities at airports, among other things.

To assess the soundness of the methodology and conclusions in the DHS April 2011 validation study, we reviewed the validation study and Technical Advisory Committee (TAC) final reports and appendixes, as well as other documents, such as the contractor's proposed study designs, the contracts to conduct the study, data collection training materials, and interim reports on data monitoring visits and study results. We assessed these efforts against established practices in designing evaluations and generally accepted statistical principles.
We obtained the validation study datasets from the contractor and replicated several of the analyses, based on the methodology described in the final report. Generally, we replicated the study's split-sample analyses and, as an extra step, extended those analyses using the full sample of SPOT referral data, as discussed below and in appendix II. In addition, we interviewed headquarters TSA, BDA, and Science and Technology Directorate (S&T) officials responsible for the validation study; representatives of the contractor that conducted the study; and 8 of the 12 members of the TAC, which commented on and evaluated the adequacy of the validation study and issued a separate report in June 2011.

To assess the reliability of the SPOT referral data, we reviewed relevant documentation, including privacy impact assessments and a 2012 audit of the SPOT database, and interviewed TSA and BDA headquarters and field officials about the controls in place to maintain the integrity of the data. To determine the extent to which the SPOT database is accurate and complete, we reviewed the data in accordance with established procedures for assessing data reliability and conducted tests, such as electronic tests for anomalies in the dataset (for example, out-of-range dates and missing data) and comparisons of a sample of certain coded data fields with narrative information in the open text fields. We determined that the data for fiscal years 2011 and 2012 across the 49 airports in our scope were sufficiently reliable for reporting the total number of SPOT referrals and arrests made and for standardizing the referral and arrest data based on the number of hours each BDO spent performing operational SPOT activities.

In October 2012, TSA completed an audit of the data contained in the SPOT referral database in which it identified common errors, such as missing data fields and incorrect point totals. According to the audit, for the period from March 1, 2010, through August 31, 2012, covering more than 108,000 referrals, the SPOT referral database had an overall error rate of 7.96 percent, representing more than 8,600 known errors and more than 14,000 potential errors. According to TSA, the agency has begun taking steps to reduce this error rate, including visits to airports with significant data integrity issues and the development of a new SPOT referral database designed to prevent the most common errors from occurring. BDA officials told us that in May 2013 they began steps toward a nationwide rollout of the new system, including pilots and the development of procedures to mandate airports' use of the system. On the basis of our review of the types of errors identified by the data audit, we determined that the SPOT referral data were sufficiently reliable for analyzing BDO referral rates. However, the audit identified problems with arrest data, one of the three categories of "potential errors," and did not report on the magnitude of this category because identifying these errors requires a manual audit of the data at the airport level. As a result, we determined that the arrest data were not reliable enough for us to report details about the arrests.
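The audit's overall error rate is consistent with its reported counts, and treating the "more than" figures as point values also shows the upper bound if every potential error proved real. A minimal sketch of that arithmetic follows; the upper-bound figure is our illustrative computation, not a number from the audit.

```python
referrals = 108_000        # referrals covered by the audit period
known_errors = 8_600       # e.g., missing data fields, incorrect point totals
potential_errors = 14_000  # errors requiring manual, airport-level review

print(f"Known-error rate: {known_errors / referrals:.2%}")  # ~7.96%
upper_bound = (known_errors + potential_errors) / referrals
print(f"Upper bound if all potential errors are real: {upper_bound:.1%}")  # ~20.9%
```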
To determine the extent to which available evidence supports the use of behavioral indicators to identify security threats, we analyzed research on behavioral indicators, reviewed the validation study findings on behavioral indicators, and analyzed SPOT referral data. Working from a literature review of articles from 2003 to 2013 identified using search terms such as "behavior detection deception," and from discussions with researchers who had published articles in this area, we identified other researchers to interview and additional academic and government research to review. While the results of our interviews cannot be used to generalize about all research on behavioral deception detection, they represent a mix of researchers and views by virtue of the researchers' affiliations with various academic institutions and governments, their authorship of meta-analyses on these issues, and their subject matter expertise in particular research areas. We also reviewed more than 40 articles and books on behavior-based deception detection dating from 1999 to 2013. These articles, books, and reports were identified through our literature search of databases, such as ArticleFirst, ECO, WorldCat, ProQuest, and Academic OneFile, and through recommendations by TSA and the experts we interviewed.

Through our discussions and research, we identified four meta-analyses, which used an approach for statistically cumulating the results of several studies to answer questions about program impacts. These meta-analyses analyzed "effect sizes" across several studies—the measure of the difference in outcome between a treatment group and a comparison group. For example, the meta-analyses measured the accuracy of individuals' deception judgments when assessing another individual's credibility, in terms of the percentage of lies and truths correctly classified, and the impact of various factors on the accuracy of deception judgments, such as the liar's motivation or the expertise of the individual making the judgment. We reviewed the methodologies of the four meta-analyses, which covered over 400 separate studies on deception detection conducted over a 60-year period, including whether an appropriate evaluation approach was selected for each meta-analysis and whether the data were collected and analyzed in ways that allowed valid conclusions to be drawn, in accordance with established practices in evaluation design. In addition, we interviewed two authors of these meta-analyses to ensure that the analyses were sound, and we determined that the meta-analyses, and the other research we identified, were sufficiently reliable for describing the evidence that existed regarding the use of behavioral indicators to identify security threats. Further, we reviewed documents developed by TSA and foreign countries as part of an international study group to assess TSA's efforts to identify best practices on the use of behavior detection in an airport environment.
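As a simplified illustration of how a meta-analysis cumulates study results, the sketch below computes a sample-size-weighted mean classification accuracy across hypothetical deception-detection studies. The study values are invented for illustration; the meta-analyses we reviewed used more formal effect-size models, but the cumulation logic is similar.

```python
def weighted_mean_accuracy(studies):
    """Sample-size-weighted mean of per-study accuracy, where accuracy is
    the fraction of lies and truths correctly classified. Inputs here are
    hypothetical; real meta-analyses model effect sizes more formally."""
    total_n = sum(n for n, _ in studies)
    return sum(n * accuracy for n, accuracy in studies) / total_n

# (participants, accuracy) for three hypothetical deception-detection studies
studies = [(120, 0.52), (300, 0.55), (80, 0.53)]
print(f"{weighted_mean_accuracy(studies):.1%}")  # 54.0% -- near-chance detection
```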
To assess the soundness of the methodology and conclusions in the April 2011 validation study finding that 14 of the 41 SPOT indicators were related to outcomes that indicate a possible threat, we reviewed evidence supporting our May 2010 conclusions that the SPOT referral database lacked controls to help ensure the completeness and accuracy of the data. We interviewed TSA officials and obtained documentation, such as a data audit report and a functional requirements document, to determine the extent to which problems in the SPOT database were being addressed. We also reviewed the June 2011 TAC final report and interviewed contractor officials regarding analysis limitations because of data sparseness, or the low frequency of occurrences of indicators in the SPOT database. We also obtained the dataset used in the study—SPOT referral data from January 2006 through October 2010—and replicated the SPOT indicator analyses described in the study. Although we found that the data were not sufficiently reliable for use in conducting a statistical analysis of the association between the indicators and high-risk passenger outcomes, we used the data to assess the study's methodology and conclusions. The dataset included a total of 247,630 SPOT referrals from 175 airports. Following the approach described in the validation study, we calculated whether the odds of each of the four study outcome measures—LEO arrest, possession of fraudulent documents, possession of a serious prohibited or illegal item, or the combination of all three measures—were associated with the 41 SPOT indicators. These odds ratios were derived from four sets of 41 separate cross-tabulations—2 x 2 tables—in which each of the four outcomes is cross-classified by each of the 41 individual indicators. Odds ratios greater than 1.0 indicate positive associations; that is, passengers exhibiting the behavior were more likely to be arrested, to possess fraudulent documents, or to possess serious prohibited or illegal items. Odds ratios of less than 1.0 indicate negative associations; that is, passengers exhibiting the behavior were less likely to be arrested, to possess fraudulent documents, or to possess serious prohibited or illegal items than those who did not exhibit the behavior. The number of positive and significant associations we detected was slightly larger than the number reported in the validation study, mainly because we reported results from an analysis of the full sample of SPOT referrals—a total of 247,630 SPOT passenger referrals. In contrast, the validation study stated that a split-sample approach was used, in which each year's dataset was split into two stratified random subsets, and analyses were conducted independently on each subset aggregated across the years. The validation study stated that this approach allowed an examination of the extent to which results might vary across each subset and addressed possible random associations in the data. The validation study further stated that this was important because changes in the SPOT program, such as fewer airports and BDOs involved in the earlier years and small changes to the SPOT instrument in March 2009, could have affected the analyses. However, after replicating the split-sample approach, we determined that it was not the most appropriate one to use because it substantially diminished the power to detect significant associations in light of how infrequently referrals occurred. In the behavioral indicator section of this report and in appendix II, we report the results of our analyses of the full sample of SPOT referrals, identifying the behavioral indicators that were positively and significantly related to the outcomes as well as those that were negatively and significantly related.
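To make the odds-ratio computation described above concrete, the following sketch works through one of the 2 x 2 cross-tabulations; the cell counts are invented, and the real analysis repeated this calculation for all 41 indicators and four outcomes:

```python
import math

# 2 x 2 cross-tabulation for one indicator and one outcome (counts invented):
#                       outcome yes   outcome no
#   indicator present        a             b
#   indicator absent         c             d
a, b, c, d = 30, 970, 400, 246230

odds_ratio = (a * d) / (b * c)  # equivalently (a / b) / (c / d)

# Significance can be judged from the standard error of log(OR).
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# OR > 1.0 with a CI excluding 1.0 indicates a positive, significant
# association; OR < 1.0 indicates a negative association.
print(f"odds ratio = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```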
To determine the extent to which SPOT referrals varied by BDOs across airports for fiscal years 2011 and 2012, we initially selected the 50 airports identified by TSA's May 2012 Current Airports Threat Assessment report as having the highest probability of threat from terrorist attacks. We chose to limit the scope of our review to the top 50 airports because the majority of BDOs are deployed to these airports, which account for 68 percent of passenger throughput and 75 percent of SPOT referrals. To standardize referral rates across airports, we calculated the number of SPOT referrals made by individual BDOs and matched these referrals to the number of hours each BDO spent performing SPOT activities. San Francisco International Airport was in the initial selection of 50 airports; however, we excluded it because the hourly data provided to us for San Francisco BDOs, who are managed by a screening contractor, were not comparable with the hourly data provided to us for TSA-managed BDOs. Our analysis therefore covered 49 SPOT airports. To calculate BDO hours spent performing SPOT activities, we analyzed BDO time and attendance data provided by TSA for fiscal years 2011 and 2012 from the U.S. Department of Agriculture's National Finance Center. We limited our analysis to the hours BDOs spent performing SPOT activities because it is primarily during these times that BDOs make SPOT referrals. Thus, BDO hours charged to activities such as leave, baggage screening, or cargo inspection were excluded. For example, we found that BDOs had charged time to cargo inspection activities that were unrelated to the SPOT program; these inspections are carried out under TSA's Compliance Division in the Office of Security Operations and are designed to ensure compliance with transportation security regulations. We also limited our analysis to nonmanager BDOs, as managers are not regularly engaged in making referrals. Finally, about 55 BDOs, or about 2 percent of the approximately 2,400 BDOs (including both managers and nonmanagers), were not included in our analysis because we could not reconcile their names with time and attendance data after several attempts with TSA officials. We calculated average referral rates per 160 hours worked, or about four 40-hour weeks, across 2,199 BDOs working at 49 airports, and a referral rate for each airport. To better understand the variation in referral rates, we conducted a multivariate analysis to determine whether certain variables affected SPOT referral rates and LEO referral rates, including the airports at which BDOs worked during fiscal years 2011 and 2012; BDO annual performance scores for 2011 and 2012; years of experience with TSA and as a BDO; and demographic information on BDOs, such as age, gender, race, and highest educational level attained at the time of employment. Although multivariate methods do not allow us to establish that referral rates are causally related to the BDO characteristics we had information about, they allowed us to examine the associations between referral rates and specific BDO characteristics while controlling for other characteristics, including the airports at which the BDOs worked. Moreover, the methods we employed allowed us to determine whether the observed differences in the sample data were greater than would be expected from chance fluctuations alone.
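The sketch below illustrates the standardization and regression approach described above, assuming a hypothetical one-row-per-BDO dataset; the column names are placeholders, not the actual TSA data layout, and this is not the code used for our analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_referral_models(bdo: pd.DataFrame):
    """Fit OLS models of standardized referral rates on BDO characteristics.

    bdo is a hypothetical DataFrame with one row per BDO; all column names
    (spot_referrals, spot_hours, airport, etc.) are assumptions.
    """
    # Standardize: SPOT referrals per 160 hours of operational SPOT time
    # (about four 40-hour weeks).
    bdo = bdo.assign(rate=bdo["spot_referrals"] / bdo["spot_hours"] * 160)

    # Characteristics-only model, then a model adding airport indicators.
    base = smf.ols("rate ~ age + gender + education + years_as_bdo",
                   data=bdo).fit()
    full = smf.ols("rate ~ age + gender + education + years_as_bdo"
                   " + C(airport)", data=bdo).fit()

    # Comparing R-squared across the nested models shows how much of the
    # variation airports account for beyond BDO characteristics.
    print(f"R^2, characteristics only: {base.rsquared:.3f}")
    print(f"R^2, adding airports:      {full.rsquared:.3f}")
    return base, full
```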
Our statistical models and estimates are sensitive to our choice of variables; thus, researchers testing different variables may find different results. See appendix IV for additional information on the results of our analyses. To determine the extent to which TSA has the data necessary to assess the effectiveness of the SPOT program in identifying threats to aviation security, we reviewed the validation study's findings comparing passengers selected by SPOT with randomly selected passengers, analyzed TSA plans and analyses designed to measure SPOT's effectiveness, and analyzed data on SPOT referrals and LEO arrests. To assess the soundness of the methodology and conclusions in the April 2011 validation study finding that SPOT was more likely to identify high-risk passengers than a random selection of passengers, we assessed the study design and implementation against established practices for designing evaluations and generally accepted statistical principles. These practices include, for example, probability sampling methods, data collection and monitoring procedures, and quasi-experimental design. We obtained the validation study datasets and replicated the study findings, based on the methodology described in the final report. Further, we analyzed the validation study data from December 1, 2009, to October 31, 2010, on passengers who were referred to a LEO and who were ultimately arrested. To the extent possible, we reviewed SPOT data to determine the reasons for the arrests and whether there were differences between arrested passengers who were referred by SPOT and arrested passengers who were randomly selected. To determine the extent to which TSA has plans to collect and analyze performance data to assess SPOT's overall effectiveness, we reviewed TSA's efforts to inform the future direction of BDA and the SPOT program, such as return-on-investment and risk-based allocation analyses. We evaluated TSA's efforts against DHS, GAO, and other guidance regarding these analyses. For example, we reviewed TSA's return-on-investment analysis against the analytical standards in the Office of Management and Budget's Circular A-94, which provides guidance on conducting benefit-cost and cost-effectiveness analyses. We also reviewed documentation associated with program oversight, including a 2012 performance metrics plan, and evaluated TSA's efforts to collect and analyze data to provide oversight of BDA and the SPOT program against criteria in Office of Management and Budget guidance and Standards for Internal Control in the Federal Government. Further, we reviewed performance work statements in TSA contracts to determine the extent to which the contractor's work was intended to fulfill the tasks in TSA's performance metrics plan. Also, we reviewed FAMS law enforcement reports, TISS incident reports, and the SPOT referral database to determine the extent to which information from BDO referrals was used for further investigation to identify potential ties to terrorist investigations. We also analyzed SPOT referral data that TSA uses to track SPOT program activities, including the number of passengers who were referred to a LEO and ultimately arrested in fiscal years 2011 and 2012. To provide information about how TSA and DHS's OIG have examined allegations of racial and other types of profiling of passengers by BDOs, we reviewed documentation from 2010 to 2013, such as investigation reports, privacy impact assessments, BDO training materials, and TSA memos.
To explore the extent to which we could determine the race, gender, and national origin of passengers who were referred by BDOs for additional screening, we analyzed information in the SPOT referral database and the TISS database for fiscal years 2011 and 2012. We reviewed a September 2012 TSA contract under which a contractor is to, among other things, study whether any evidence exists of racial or ethnic profiling in the SPOT program. We also reviewed interim reports produced by the contractor as of June 2013. Because racial profiling allegations in Boston were made during the course of our review, we asked our random sample of 25 BDOs at the four airports we visited to what extent they had seen BDOs in their airport referring passengers based on race, national origin, or appearance rather than behaviors. These responses are not generalizable to the entire BDO population at SPOT airports. Further, 7 additional BDOs contacted us over the course of our review to express concern about the profiling of passengers that they had witnessed. We did not substantiate these specific claims. We also interviewed TSA headquarters and field officials, such as federal security directors and BDO managers, as well as DHS OIG officials. We conducted this performance audit from April 2012 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To better understand the variation in referral rates, we analyzed whether certain variables affected SPOT referral rates and LEO referral rates, including BDO characteristics, such as average performance scores for fiscal years 2011 and 2012, years employed at TSA and as a BDO, age, gender, educational level, and race, as well as the airport at which the BDOs worked. As described earlier, these analyses standardized SPOT referral data for 2,199 BDOs across 49 airports for fiscal years 2011 and 2012. The characteristics of the 2,199 BDOs in our analyses varied across different categories, as shown in table 3. About 51 percent of the BDOs were under 40 years of age, and slightly more than 25 percent were 50 years or older. Nearly 64 percent of the BDOs joined TSA before the end of 2005, but the majority, or more than 85 percent, became BDOs after the beginning of 2008. Nearly 65 percent of the BDOs were male. Fifty percent were white, about 26 percent were African-American, and about 18 percent were Hispanic. About 65 percent of the BDOs had a high school education or less. The BDOs were distributed unevenly across airports, with the largest numbers at Logan International (Boston), Dallas-Fort Worth International, John F. Kennedy International (New York), Los Angeles International, and O'Hare International (Chicago). Each BDO worked primarily in one airport during the 2-year period. Specifically, 80 of the 2,199 BDOs, or about 4 percent, worked in multiple airports, and the remaining 2,119 BDOs, or about 96 percent, worked at one airport during the 2-year time period. Overall, BDOs averaged about 1.57 SPOT referrals and 0.22 LEO referrals per 160 hours worked. These rates varied across the different BDO categories.
However, these differences should be considered cautiously, as differences that appear to exist across categories for one characteristic may be confounded with differences across others. For example, the apparent difference in referral rates between younger and older BDOs may be the result of younger BDOs working disproportionately in airports with higher referral rates. To better understand the effects of BDO characteristics, including the airports at which they worked, on SPOT referral and LEO referral rates, we conducted regression analyses. Overall, the greatest amount of the variation in BDO SPOT referral rates was explained by the airport at which the referral occurred. That is, the BDO's referral rate was associated substantially with the airport at which he or she was conducting SPOT activities. These analyses show the size and significance of regression coefficients, from ordinary least-squares regression models, which reflect the estimated differences in the average number of SPOT referrals and LEO referrals across categories of BDO and across airports. BDOs in a few airports averaged significantly higher rates of referrals than BDOs in the referent category, and BDOs in most of the other airports averaged significantly lower LEO referral rates. Because they were less common, LEO referrals may have been more difficult to predict than SPOT referrals. Differences in the other BDO characteristics—multivariate model 1—collectively accounted for a small percentage of the variation in average LEO referral rates, while differences across airports accounted for a larger percentage. Separate analyses we conducted revealed that the sizeable and highly significant differences in SPOT referral rates and LEO referral rates across airports were not fully accounted for by differences in the number of passengers who pass through airport checkpoints. Table 4 shows TSA's proposed performance metrics as detailed in appendix G of its Behavior Detection and Analysis performance metrics plan, dated November 2012. Table 5 shows the validity, reliability, and frequency score TSA determined for each metric and the overall score for each metric subcategory, as detailed in appendix C of its performance metrics plan, dated November 2012. TSA's performance metrics plan defines validity as the ability of the metric to measure BDO performance, reliability as the level of certainty that data are collected precisely with minimal possibility for subjectivity or gaming the system, and frequency as the level of difficulty in collecting the metric and whether the metric is collected at the ideal number of scheduled recurrences. In addition to the contact named above, David M. Bruno (Assistant Director); Charles W. Bausell, Jr.; Andrew M. Curry; Nancy K. Kawahara; Elizabeth B. Kowalewski; Susanna R. Kuebler; Thomas F. Lombardi; Grant M. Mallie; Amanda K. Miller; Linda S. Miller; Lara R. Miklozek; Douglas M. Sloane; and Jeff M. Tessin made key contributions to this report.

TSA began deploying the SPOT program in fiscal year 2007--and has since spent about $900 million--to identify persons who may pose a risk to aviation security through the observation of behavioral indicators. In May 2010, GAO concluded, among other things, that TSA deployed SPOT without validating its scientific basis and that SPOT lacked performance measures. GAO was asked to update its assessment.
This report addresses the extent to which (1) available evidence supports the use of behavioral indicators to identify aviation security threats and (2) TSA has the data necessary to assess the SPOT program's effectiveness. GAO analyzed fiscal year 2011 and 2012 SPOT program data. GAO visited four SPOT airports, chosen on the basis of size, among other things, and interviewed TSA officials and a nonprobability sample of 25 randomly selected BDOs. These results are not generalizable, but provided insights. Available evidence does not support the conclusion that behavioral indicators, which are used in the Transportation Security Administration's (TSA) Screening of Passengers by Observation Techniques (SPOT) program, can be used to identify persons who may pose a risk to aviation security. GAO reviewed four meta-analyses (reviews that analyze other studies and synthesize their findings) that included over 400 studies from the past 60 years and found that the human ability to accurately identify deceptive behavior based on behavioral indicators is the same as or slightly better than chance. Further, the Department of Homeland Security's (DHS) April 2011 study conducted to validate SPOT's behavioral indicators did not demonstrate their effectiveness because of study limitations, including the use of unreliable data. Twenty-one of the 25 behavior detection officers (BDO) GAO interviewed at four airports said that some behavioral indicators are subjective. TSA officials agree, and said they are working to better define them. GAO analyzed data from fiscal years 2011 and 2012 on the rates at which BDOs referred passengers for additional screening based on behavioral indicators and found that BDOs' referral rates varied significantly across airports, raising questions about the use of behavioral indicators by BDOs. To help ensure consistency, TSA officials said they deployed teams nationally to verify compliance with SPOT procedures in August 2013. However, these teams are not designed to help ensure BDOs consistently interpret SPOT indicators. TSA has limited information to evaluate SPOT's effectiveness, but plans to collect additional performance data. The April 2011 study found that SPOT was more likely to correctly identify outcomes representing a high-risk passenger--such as possession of a fraudulent document--than through a random selection process. However, the study results are inconclusive because of limitations in the design and data collection and cannot be used to demonstrate the effectiveness of SPOT. For example, TSA collected the study data unevenly. In December 2009, TSA began collecting data from 24 airports, added 1 airport after 3 months, and added an additional 18 airports more than 7 months later when it determined that the airports were not collecting enough data to reach the study's required sample size. Since aviation activity and passenger demographics are not constant throughout the year, this uneven data collection may have conflated the effect of random versus SPOT selection methods. Further, BDOs knew if passengers they screened were selected using the random selection protocol or SPOT procedures, a fact that may have introduced bias into the study. TSA completed a performance metrics plan in November 2012 that details the performance measures required for TSA to determine whether its behavior detection activities are effective, as GAO recommended in May 2010.
However, the plan notes that it will be 3 years before TSA can begin to report on the effectiveness of its behavior detection activities. Until TSA can provide scientifically validated evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security, the agency risks funding activities that have not been determined to be effective. This is a public version of a sensitive report that GAO issued in November 2013. Information that TSA deemed sensitive has been redacted. Congress should consider the absence of scientifically validated evidence for using behavioral indicators to identify threats to aviation security when assessing the potential benefits and costs in making future funding decisions for aviation security. GAO included this matter because DHS did not concur with GAO's recommendation that TSA limit future funding for these activities until it can provide such evidence, in part because DHS disagreed with GAO's analysis of indicators. GAO continues to believe the report findings and recommendation are valid.
Recognizing the critical need to address the issue of nuclear waste disposal, the Congress enacted the Nuclear Waste Policy Act of 1982 to establish a comprehensive policy and program for the safe, permanent disposal of commercial spent fuel and other highly radioactive wastes in one or more mined geologic repositories. The act created the Office of Civilian Radioactive Waste Management within DOE to manage its nuclear waste program. Amendments to the act in 1987 directed DOE to investigate only the Yucca Mountain site. The Nuclear Waste Policy Act also set out important and complementary roles for other federal agencies: The Environmental Protection Agency (EPA) was required to establish health and safety standards for the disposal of wastes in repositories. EPA issued standards for the Yucca Mountain site in June 2001 that require a high probability of safety for at least 10,000 years. NRC is responsible for licensing and regulating repositories to ensure their compliance with EPA's standards. One prerequisite to the secretary's recommendation was obtaining NRC's preliminary comments on the sufficiency of DOE's site investigation for the purpose of a license application. NRC provided these comments on November 13, 2001. If the site is approved, then NRC, upon accepting a license application from DOE, has 3 to 4 years to review the application and decide whether to issue a license to construct, and then to operate, a repository at the site. The Nuclear Waste Technical Review Board (the board) reviews the technical and scientific validity of DOE's activities associated with investigating the site and packaging and transporting wastes. The board must report its findings and recommendations to the Congress and the secretary of energy at least twice each year, but DOE is not required to implement these recommendations. DOE has designated the nuclear waste program, including the site investigation, as a "major" program that is subject to senior management's attention and to its agencywide guidelines for managing such programs and projects. The guidelines require the development of a cost and schedule baseline, a system for managing changes to the baseline, and independent cost and schedule reviews. DOE is using a management contractor to carry out the work on the program. The contractor develops and maintains the baseline, but senior DOE managers must approve significant changes to cost or schedule estimates. In February 2001, DOE hired Bechtel SAIC Company, LLC (Bechtel), to manage the program and required the contractor to reassess the remaining technical work and the estimated schedule and cost to complete this work. DOE is not prepared to submit an acceptable license application to NRC within the statutory limits that would take effect if the site is approved. Specifically, DOE has entered into 293 agreements with NRC to gather and/or analyze additional technical information in preparation for a license application that NRC would accept. DOE is also continuing to address technical issues raised by the board. In September 2001, Bechtel concluded, after reassessing the remaining technical work, that DOE would not be ready to submit an acceptable license application to NRC until January 2006. Moreover, while a site recommendation and a license application are separate processes, DOE will need to use essentially the same data for both. Also, under the act, the president is to recommend the site to the Congress only if he considers it qualified for an application to NRC for a license.
The president's recommendation also triggers an express statutory time frame that requires DOE to submit a license application to NRC within about 5 to 8 months. The 293 agreements that DOE and NRC have negotiated address areas of study within the program where NRC's staff has determined that DOE needs to collect more scientific data and/or improve its technical assessment of the data. According to NRC, as of March 4, 2002, DOE had satisfactorily completed work on 38 of these agreements and could resolve another 22 agreements by September 30, 2002. These 293 agreements generally relate to uncertainties about three aspects of the long-term performance of the proposed repository: (1) the expected lifetime of engineered barriers, particularly the waste containers; (2) the physical properties of the Yucca Mountain site; and (3) the supporting information for the mathematical models used to evaluate the performance of the planned repository at the site. The uncertainties related to engineered barriers revolve around the longevity of the waste containers that would be used to isolate the wastes. DOE currently expects that these containers would isolate the wastes from the environment for more than 10,000 years. Minimizing uncertainties about the container materials and the predicted performance of the waste containers over this long time period is especially critical because DOE's estimates of the repository system's performance depend heavily on the waste containers, in addition to the natural features of the site, to meet NRC's licensing regulations and EPA's health and safety standards. The uncertainties related to the physical characteristics of the site center on how the combination of heat, water, and chemical processes caused by the presence of nuclear waste in the repository would affect the flow of water through the repository. The NRC staff's concerns about DOE's mathematical models for assessing the performance of the repository primarily relate to validating the models; that is, presenting information to provide confidence that the models are valid for their intended use and verifying the information used in the models. Performance assessment is an analytical method that relies on computers to operate mathematical models to assess the performance of the repository against EPA's health and safety standards, NRC's licensing regulations, and DOE's guidelines for determining if the Yucca Mountain site is suitable for a repository. DOE uses the data collected during site characterization activities to model how a repository's natural and engineered features would perform at the site. According to DOE, the additional technical work surrounding the 293 agreements with NRC's staff is an insignificant addition to the extensive amount of technical work already completed—including some 600 papers cited in one of its recently published reports and a substantial body of published analytic literature. DOE does not expect the results of the additional work to change its current performance assessment of a repository at Yucca Mountain.
In its November 2001 comments on the sufficiency of DOE's site investigation, NRC stated that "[a]lthough significant additional work is needed prior to the submission of a possible license application, we believe that agreements reached between DOE and NRC staff regarding the collection of additional information provide the basis for concluding that development of an acceptable license application is achievable." The board has also consistently raised issues and concerns over DOE's understanding of the expected lifetime of the waste containers, the significance of the uncertainties involved in the modeling of the scientific data, and the need for an evaluation and comparison of a repository design having a higher temperature with a design having a lower temperature. The board continues to reiterate these concerns in its reports. For example, in its most recent report to the Congress and the secretary of energy, issued on January 24, 2002, the board concluded that, when DOE's technical and scientific work is taken as a whole, the technical basis for DOE's repository performance estimates is "weak to moderate" at this time. The board added that gaps in data and basic understanding cause important uncertainties in the concepts and assumptions on which DOE's performance estimates are now based, providing the board with limited confidence in the current performance estimates generated by DOE's performance assessment model. As recently as May 2001, DOE projected that it could submit a license application to NRC in 2003. It now appears, however, that DOE may not complete all of the additional technical work that it has agreed to do to prepare an acceptable license application until January 2006. In September 2001, Bechtel completed, at DOE's direction, a detailed reassessment in an effort to reestablish a cost and schedule baseline. Bechtel estimated that DOE could complete the outstanding technical work agreed to with NRC and submit a license application in January 2006. This date, according to the contractor, was due to the cumulative effect of funding reductions in recent years that had produced a "…growing bow wave of incomplete work that is being pushed into the future." Moreover, the contractor's report said, the proposed schedule did not include any cost and schedule contingencies. The contractor's estimate was based on guidance from DOE that, in part, directed the contractor to assume annual funding for the nuclear waste program of $410 million in fiscal year 2002, $455 million in fiscal year 2003, and $465 million in fiscal year 2004 and thereafter. DOE has not accepted this estimate because, according to program officials, the estimate would extend the date for submitting a license application too far into the future. Instead, DOE accepted only the fiscal year 2002 portion of Bechtel's detailed work plan and directed the contractor to prepare a new plan for submitting a license application to NRC by December 2004. Under the Nuclear Waste Policy Act, DOE's site characterization activities are to provide information necessary to evaluate the Yucca Mountain site's suitability for submitting a license application to NRC for placing a repository at the site. In implementing the act, DOE's guidelines provide that the site will be suitable as a waste repository if the site is likely to meet the radiation protection standards that NRC would use to reach a licensing decision on the proposed repository. Thus, as stated in the preamble (introduction) to DOE's guidelines, DOE expects to use essentially the same data for the site recommendation and the license application.
In addition, the act specifies that, having received a site recommendation from the secretary, the president shall submit a recommendation of the site to the Congress if the president considers the site qualified for a license application. Under the process laid out in the Nuclear Waste Policy Act, once the secretary makes a site recommendation, there is no time limit under which the president must act on the secretary's recommendation. However, when the president recommended, on February 15, that the Congress approve the site, specific statutory time frames were triggered for the next steps in the process. Figure 1 shows the approximate statutory time needed between a site recommendation and submission of a license application and the additional time needed for DOE to meet the conditions for an acceptable license application. The figure assumes that Nevada disapproves the site but that the Congress overrides the state's disapproval. As shown in the figure, Nevada has 60 days—until April 16—to disapprove the site, and if it does, the Congress has 90 days (of continuous session) in which to enact legislation overriding the state's disapproval. If the Congress overrides the state's disapproval and the site designation takes effect, the next step is for the secretary to submit a license application to NRC within 90 days after the site designation is effective. In total, these statutory time frames provide about 150 to 240 days, or about 5 to 8 months, from the time the president makes a recommendation to DOE's submittal of a license application. On the basis of Bechtel's September 2001 program reassessment, however, DOE would not be ready to submit a license application to NRC until January 2006.
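The 150-to-240-day window can be reproduced from the statutory steps with simple arithmetic; in this sketch the 90 days of continuous congressional session are treated as calendar days purely for illustration:

```python
# Statutory windows, in days, as described above.
nevada_disapproval = 60    # state's window to disapprove the site
congress_override = 90     # Congress's window (of continuous session)
license_submission = 90    # DOE's window to submit a license application

# If Nevada does not disapprove, the override step drops out.
shortest = nevada_disapproval + license_submission                     # 150
longest = nevada_disapproval + congress_override + license_submission  # 240
print(f"about {shortest} to {longest} days, or roughly 5 to 8 months")
```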
DOE states that it may be able to open a repository at Yucca Mountain in 2010. The department has based this expectation on submitting an acceptable license application to NRC in 2003, receiving NRC's authorization to construct a repository in 2006, and constructing essential surface and underground facilities by 2010. However, Bechtel, in its September 2001 proposal for reestablishing technical, schedule, and cost baselines for the program, concluded that January 2006 is a more realistic date for submitting a license application. Because of uncertainty over when DOE may be able to open the repository, the department is exploring alternatives that might still permit it to begin accepting commercial spent fuel in 2010. An extension of the license application date to 2006 would almost certainly preclude DOE from achieving its long-standing goal of opening a repository in 2010. According to DOE's May 2001 report on the program's estimated cost, after submitting a license application in 2003, DOE estimates that it could receive an authorization to construct the repository in 2006 and complete the construction of enough surface and underground facilities to open the repository in 2010, or 7 years after submitting the license application. This 7-year estimate from submittal of the license application to the initial construction and operation of the repository assumes that NRC would grant an authorization to construct the facility in 3 years, followed by 4 years of construction. Assuming these same estimates of time, submitting a license application in January 2006 would extend the opening date for the repository until about 2013. Furthermore, opening the repository in 2013 may be questionable for several reasons. First, a repository at Yucca Mountain would be a first-of-a-kind facility, meaning that any schedule projections may be optimistic; DOE has already deferred its target date for opening a repository twice, from 1998 to 2003 and then to 2010. Second, although the Nuclear Waste Policy Act states that NRC has 3 years to decide on a construction license, a fourth year may be added if NRC certifies that it is necessary. Third, the 4-year construction time period that DOE's current schedule allows may be too short. For example, a contractor hired by DOE to independently review the estimated costs and schedule for the nuclear waste program reported that the 4-year construction period was too optimistic and recommended that the construction phase be extended by a year and a half. Bechtel anticipates a 5-year period of construction between the receipt of a construction authorization from NRC and the opening of the repository. A 4-year licensing period followed by 5 years of initial construction could extend the repository opening until about 2015. Finally, these simple projections do not account for any other factors that could adversely affect this 7- to 9-year schedule for licensing, constructing, and opening the repository.
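The alternative opening dates discussed above follow from simple additions to the January 2006 license application date; this sketch uses the licensing and construction durations cited above:

```python
license_application_year = 2006  # Bechtel's reassessed submission date

# NRC review: 3 years by statute, extendable to 4 with certification.
# Construction: DOE assumed 4 years; independent reviewers suggested about 5.
for review_years, construction_years in [(3, 4), (4, 5)]:
    opening = license_application_year + review_years + construction_years
    print(f"{review_years}-year review + {construction_years}-year "
          f"construction -> opening about {opening}")
# Prints openings of about 2013 and about 2015, matching the text above.
```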
Annual appropriations for the program in recent years have been less than $400 million. In contrast, according to DOE, it needs between $750 million and $1.5 billion in annual appropriations during most of the 7- to 9-year licensing and construction period in order to open the repository on that schedule. In its August 2001 report on alternative means for financing and managing the program, DOE stated that unless the program's funding is increased, the budget might become the "determining factor" in whether DOE will be able to accept wastes in 2010. In part, DOE's desire to meet the 2010 goal is linked to court decisions holding that DOE—under the Nuclear Waste Policy Act and as implemented by DOE's contracts with owners of commercial spent fuel—is obligated to begin accepting spent fuel from contract holders not later than January 31, 1998, or be held liable for damages. Courts are currently assessing the amount of damages that DOE must pay to holders of spent fuel disposal contracts. Estimates of potential damages for the estimated 12-year delay from 1998 to 2010 range widely, from the department's estimate of about $2 billion to $3 billion to the nuclear industry's estimate of at least $50 billion. The damage estimates are based, in part, on the expectation that DOE would begin accepting spent fuel from contract holders in 2010. The actual damages could be higher or lower, depending on when DOE begins accepting spent fuel. Because of the uncertainty of achieving the 2010 goal for opening the Yucca Mountain repository, DOE is examining alternative approaches that would permit it to meet the goal. For example, in a May 2001 report, DOE examined approaches that might permit it to begin accepting wastes at the repository site in 2010 while spreading out the construction of repository facilities over a longer time period. The report recommended storing wastes on the surface until the capacity to move wastes into the repository has been increased. Relatively modest-sized initial surface facilities to handle wastes could be expanded later to handle larger volumes of waste. Such an approach, according to the report, would permit partial construction and limited waste emplacement in the repository, at lower than earlier estimated annual costs, in advance of the more costly construction of the facility as originally planned. Also, by implementing a modular approach, DOE would be capable of accepting wastes at the repository earlier than if it constructed the repository described in the documents that the secretary used to support a site recommendation. DOE has also contracted with the National Research Council to provide recommendations on design and operating strategies for developing a geologic repository in stages, which is to include reviewing DOE's modular approach. The council is addressing such issues as the (1) technical, policy, and societal objectives and risks of developing a staged repository; (2) effects of developing a staged repository on the safety and security of the facility, as well as on the cost and public acceptance of such a facility; and (3) strategies for developing a staged system, including the design, construction, operation, and closing of such a facility. The council expects to publish interim and final reports on the study in late March 2002 and in December 2002, respectively. As of December 2001, DOE expected to submit the license application to NRC in 2003. This date reflects a delay from the license application milestone last approved by DOE in March 1997, which targeted March 2002 for submitting a license application. The 2003 date was not formally approved by DOE's senior managers or incorporated into the program's cost and schedule baseline, as required by the management procedures that were in effect for the program. At least three extensions of the license application date have been proposed and used by DOE in program documents, but none of these proposals has been approved as required. As a result, DOE does not have a baseline estimate of the program's schedule and cost—including the late 2004 date in its fiscal year 2003 budget request—that is based on all the work that it expects to complete through the submission of a license application. DOE's guidance for managing major programs and projects requires, among other things, that senior managers establish a baseline for managing the program or project. The baseline describes the program's mission—in this case, the safe disposal of highly radioactive waste in a geologic repository—and the expected technical requirements, schedule, and cost to complete the program. Procedures for controlling changes to an approved baseline are designed to ensure that program managers consider the expected effects of adding, deleting, or modifying technical work, as well as the effects of unanticipated events, such as funding shortfalls, on the project's mission and baseline. In this way, alternative courses of action can be assessed on the basis of each action's potential effect on the baseline. DOE's procedures for managing the nuclear waste program require that program managers revise the baseline, as appropriate, to reflect any significant changes to the program. After March 1997, according to DOE officials, they did not always follow these control procedures to account for proposed changes to the program's baseline, including the changes proposed to extend the date for license application. According to these same officials, they stopped following the control procedures because the secretary of energy did not approve proposed extensions to the license application milestone.
As a result, the official baseline did not accurately reflect the program's cost and schedule to complete the remaining work necessary to submit a license application. In November 1999, the Yucca Mountain site investigation office proposed extending the license application milestone date by 10 months, from March to December 2002, to compensate for a $57.8 million drop in funding for fiscal year 2000. A proposed extension of the license application milestone required the approval of both the director of the nuclear waste program and the secretary of energy. Neither of these officials approved the proposed change, nor was the baseline revised to reflect it, even though the director subsequently began reporting the December 2002 date in quarterly performance reports to the deputy secretary of energy. The site investigation office subsequently proposed two other extensions of the license application milestone, neither of which was approved by the program's director or the secretary of energy or incorporated into the baseline for the program. Nevertheless, DOE began to use the proposed, but unapproved, milestone dates in both internal and external reports and communications, such as in congressional testimony delivered in May 2001. Because senior managers did not approve these proposed changes for incorporation into the baseline for the program, program managers did not adjust the program's cost and schedule baseline. By not accounting for these and other changes to the program's technical work, milestone dates, and estimated costs in the program's baseline since March 1997, DOE has not had baseline estimates of all of the technical work that it expected to complete through submission of a license application and the estimated schedule and cost to complete this work. This condition includes the cost and schedule information contained in DOE's budget request for fiscal year 2003.

As required by law, the Department of Energy (DOE) has been investigating a site at Yucca Mountain, Nevada, to determine its suitability for disposing of highly radioactive wastes in a mined geologic repository. If the site is approved, DOE must apply to the Nuclear Regulatory Commission (NRC) for authorization to construct a repository. If the site is not approved for a license application, or if NRC denies a license to construct a repository, the administration and Congress will have to consider other options for the long-term management of existing and future nuclear wastes. DOE is not prepared to submit an acceptable license application to the NRC within the statutory limits that would take effect if the site is approved. DOE is unlikely to achieve its goal of opening a repository at Yucca Mountain by 2010. Sufficient time would not be available for DOE to obtain a license from NRC and construct enough of the repository to open it in 2010. Another key factor is whether DOE will be able to obtain the increases in annual funding that would be required to open the repository by 2010. DOE currently does not have a reliable estimate of when, and at what cost, a license application can be submitted or a repository can be opened because DOE stopped using its cost and schedule baselines to manage the site investigation in 1997.
Investments in IT can enrich people’s lives and improve organizational performance. For example, during the last two decades the Internet has matured from being a means for academics and scientists to communicate with each other to a national resource where citizens can interact with their government in many ways, such as by receiving services, supplying and obtaining information, asking questions, and providing comments on proposed rules. While these investments have the potential to improve lives and organizations, federally funded IT projects can—and have—become risky, costly, unproductive mistakes. As we have described in numerous reports and testimonies, although a variety of best practice documentation exists to guide their successful acquisition, federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. IT acquisition best practices have been developed by both industry and the federal government. For example, the Software Engineering Institute (SEI) has developed highly regarded and widely used guidance on best practices such as requirements development and management, risk management, configuration management, validation and verification, and project monitoring and control. In the federal government, GAO’s own research in IT management best practices led to the development of the Information Technology Investment Management (ITIM) Framework, which describes essential and complementary IT investment management disciplines, such as oversight of system development and acquisition management, and organizes them into a set of critical processes for successful investments. Congress has also enacted legislation that reflects IT management best practices. For example, the Clinger-Cohen Act of 1996, which was informed by GAO best practice recommendations, requires federal agencies to focus more on the results they have achieved through IT investments, while concurrently improving their IT acquisition processes. Specifically, the act requires agency heads to implement a process to maximize the value of the agency’s IT investments and assess, manage, and evaluate the risks of its IT acquisitions. Further, the act establishes chief information officers (CIO) to advise and assist agency heads in carrying out these responsibilities. The act also requires OMB to encourage agencies to develop and use best practices in IT acquisition. Additionally, the E-Government Act of 2002 established a CIO Council, which is led by the Federal CIO, to be the principal interagency forum for improving agency practices related to the development, acquisition, and management of information resources, including sharing best practices. Consistent with this mandate, the CIO Council established a Management Best Practices Committee in order to serve as a focal point for promoting IT best practices within the federal government. We have often reported on a range of acquisition management weaknesses facing federal IT investments—including problems relating to senior leadership, requirements management, and testing. For example, for the investments described below, we have identified acquisition weaknesses, and have reported on significant cost increases and schedule delays. Additionally, each of these investments was ultimately cancelled or significantly restructured as a result of agency reviews conducted in response to acquisition weaknesses, cost increases, and schedule delays. 
In June 2009, we reported that an executive committee for the National Polar-orbiting Operational Environmental Satellite System (NPOESS)—a program jointly managed by the Department of Commerce's National Oceanic and Atmospheric Administration, the Department of Defense, and the National Aeronautics and Space Administration—lacked the membership and leadership needed to effectively and efficiently oversee and direct the program. Specifically, the Defense committee member with acquisition authority did not attend committee meetings and sometimes contradicted the committee's decisions. Further, the committee did not track its action items to closure, and many of the committee's decisions did not achieve desired outcomes. To address these issues, we recommended that the Secretary of Defense direct the key committee member to attend and participate in committee meetings. Additionally, we recommended that the heads of the agencies that participate in the committee direct the committee members to track action items to closure, and identify the desired outcomes associated with each of the committee's actions. Further, we reported that the launch date for an NPOESS demonstration satellite had been delayed by over 5 years and the cost estimate for the program had more than doubled—from $6.5 billion to about $15 billion. In February 2010, a presidential task force decided to disband NPOESS and, instead, have the agencies undertake separate acquisitions. Since 2007, we have reported on a range of acquisition weaknesses facing the Department of Homeland Security's (DHS) Secure Border Initiative Network—also known as SBInet. For example, in January 2010, we reported that DHS had not effectively managed key aspects of the SBInet testing program such as defining test plans and procedures in accordance with important elements of relevant guidance. In light of these weaknesses, we made recommendations to DHS related to the content, review, and approval of test planning documentation. In May 2010, we reported that the final acceptance of the first two SBInet deployments had slipped from November 2009 and March 2010 to September 2010 and November 2010, respectively, and that the cost-effectiveness of the system had not been justified. We concluded that DHS had not yet demonstrated that the considerable time and money being invested to acquire and develop SBInet was a wise and prudent use of limited resources. The Secretary of Homeland Security ordered a departmentwide assessment of the SBInet program; the Secretary's decision was motivated in part by continuing delays in the development and deployment of SBInet capabilities and concerns that the SBInet system had not been adequately justified by a quantitative assessment of cost and benefits. Based on the results of the assessment, in January 2011, the DHS Secretary decided to end SBInet as originally conceived. In May 2010, we reported that after spending $127 million over 9 years on an outpatient scheduling system, the Department of Veterans Affairs (VA) had not implemented any of the planned system's capabilities and was essentially starting over. After determining that the system could not be deployed, the department terminated the contract and ended the program in September 2009.
We concluded that the department's efforts to successfully complete the system had been hindered by weaknesses in several key project management disciplines and a lack of effective oversight that, if not addressed, could undermine the department's second effort to replace the scheduling system. We recommended that the department take action to improve key processes, including acquisition management, requirements management, system testing, implementation of earned value management, risk management, and program oversight. In June 2011, we reported that end users were not sufficiently involved in defining requirements for the Federal Emergency Management Agency's (FEMA) National Flood Insurance Program's insurance policy and claims management system. After conducting an assessment of the program prompted by problems identified in end user testing, FEMA leadership cancelled the system because it failed to meet end user expectations. This decision forced the agency to continue to rely on an outdated system that is neither effective nor efficient. In order to avoid the root causes of this program's failure, we recommended that for future related modernization attempts, DHS should ensure that key stakeholders are adequately involved in requirements development and management. Additionally, we have previously reported on investments in need of management attention across the federal government. For example, in April 2011, we reported on the visibility into federal IT investments provided by the IT Dashboard—a publicly available website that displays detailed information on federal agencies' major IT investments, including assessments of actual performance against cost and schedule targets (referred to as ratings) for approximately 800 major federal IT investments. Specifically, we reported that, as of March 2011, the Dashboard provided visibility into over 300 IT investments, totaling almost $20 billion, in need of management attention. We noted that 272 investments with costs totaling $17.7 billion had ratings that indicated the need for attention, and 39 investments with costs totaling $2.0 billion had ratings that indicated significant concerns. OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. In June 2009, OMB established the IT Dashboard to improve the transparency into and oversight of agencies' IT investments. According to OMB officials, agency CIOs are required to update each major investment in the IT Dashboard with a rating based on the CIO's evaluation of certain aspects of the investment, such as risk management, requirements management, contractor oversight, and human capital. According to OMB, these data are intended to provide a near real-time perspective of the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB, congressional and other oversight bodies, and the general public to hold government agencies accountable for results and progress. In January 2010, the Federal CIO began leading TechStat sessions—reviews of selected IT investments between OMB and agency leadership—to increase accountability and transparency and improve performance.
OMB has identified factors that may result in an investment being selected for a TechStat session, such as—but not limited to—evidence of (1) poor performance; (2) duplication with other systems or projects; (3) unmitigated risks; and (4) misalignment with policies and best practices. OMB officials stated that as of June 30, 2011, 63 TechStat sessions had been held with federal agencies. According to OMB, these sessions enabled the government to improve or terminate IT investments that were experiencing performance problems. For example, in June 2010, the Federal CIO led a TechStat on the National Archives and Records Administration's (NARA) Electronic Records Archives investment that resulted in six corrective actions, including halting fiscal year 2012 development funding pending the completion of a strategic plan. Similarly, in January 2011, we reported that NARA had not been positioned to identify potential cost and schedule problems early, and had not been able to take timely actions to correct problems, delays, and cost increases on this system acquisition program. Moreover, we estimated that the program would likely overrun costs by between $205 and $405 million if the agency completed the program as originally designed. We made multiple recommendations to the Archivist of the United States, including establishing a comprehensive plan for all remaining work, improving the accuracy of key performance reports, and engaging executive leadership in correcting negative performance trends. Drawing on the visibility into federal IT investments provided by the IT Dashboard and TechStat sessions, in December 2010, OMB issued a plan to reform IT management throughout the federal government over an 18-month time frame. The plan contains two high-level objectives: achieving operational efficiency, and effectively managing large-scale IT programs. To achieve these high-level objectives, the plan outlines 25 action items. According to OMB officials, they have taken several actions pursuant to this plan. For example, pursuant to Action Item Number 10—development of an IT best practices collaboration platform—in April 2011 the CIO Council launched an IT best practices collaboration website. According to OMB, this portal provides federal program managers with access to a searchable database of program management best practices in order to promote interagency collaboration and real-time problem solving related to IT programs. The portal contains links to case studies by federal agencies demonstrating the use of best practices in managing large-scale IT systems. For example, a recent case study posted by the Social Security Administration outlined efforts to develop a cadre of highly skilled, trained, and qualified program managers to promote the success of its investments. According to federal department officials, the following seven investments best achieved their respective cost, schedule, scope, and performance goals. The estimated total life-cycle cost of the seven investments is about $5 billion. Six of the seven investments are currently operational. The following provides descriptions of each of the seven investments. The U.S. Census Bureau is the primary source of basic statistics about the population and economy of the nation and is best known for the decennial census of population and housing. The most recent decennial census was conducted in 2010.
Between March and August 2010, the Census Bureau provided assistance to respondents and captured their response data via paper forms and telephone agents, allowing sufficient time for post-capture processing, review, and tabulation. The Decennial Response Integration System (DRIS) provided a system for collecting and integrating census responses from forms and telephone interviews. Specifically, DRIS integrated the following three primary functions:
Paper data capture: Processed paper census questionnaires sent by mail from respondents. The system sorted the questionnaires and captured data from them, which were turned into electronic data.
Telephone questionnaire assistance: Provided respondents with assistance in understanding their questionnaire, and captured responses for forms completed over the phone. This function utilized interactive voice response as the initial contact mechanism, with an option to speak with call center representatives if needed.
Coverage follow up: Contacted a sample of respondents by telephone to determine if changes should be made to their household roster as reported on their initial census return, with the goal of ensuring that every person in the United States is counted once and in the right place.
To help carry out the 2010 Decennial Census, the government engaged a contractor to design, build, test, deploy, implement, operate, and maintain the systems, infrastructure, staffing, procedures, and facilities needed for DRIS. The DRIS contract was divided into three primary phases. Phase 1 included the development, testing, deployment, implementation, and support of the DRIS components needed for a 2008 Census Dress Rehearsal. Phase 2 included the nationwide deployment of the DRIS components and full-scale production operations of the paper data capture, telephone questionnaire assistance, and coverage follow up functions for the 2010 Census. Phase 3 is to address post-2010 Census DRIS component disposition and data archiving and was to be completed in September 2011. For purposes of our report, we focused only on the first two phases of DRIS because the DRIS system was being acquired during these phases. In October 2009, we reported that DRIS fully implemented the key practices necessary for a sound implementation of earned value management—a project management approach that, if implemented appropriately, provides objective reports of project status, produces early warning signs of impending schedule delays and cost overruns, and provides unbiased estimates of anticipated costs at completion. Additionally, we reported that, as of May 2009, the DRIS contractor was experiencing a cumulative cost underrun and was ahead of schedule; however, the life-cycle cost estimate for DRIS had increased from $574 million to $946 million. This cost increase was mostly due to increases in both paper and telephone workloads. For example, the paper workload increased due to an April 2008 redesign of the 2010 Census that reverted planned automated operations to paper-based processes and required DRIS to process an additional estimated 40 million paper forms.
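For reference, the relationships that underlie such earned value figures are standard textbook definitions, not figures drawn from our DRIS review: with planned value (PV), earned value (EV), actual cost (AC), and budget at completion (BAC),

\begin{align*}
CV &= EV - AC, \qquad SV = EV - PV, \\
CPI &= \frac{EV}{AC}, \qquad SPI = \frac{EV}{PV}, \qquad EAC \approx \frac{BAC}{CPI}.
\end{align*}

A cumulative cost underrun, as reported for the DRIS contractor, corresponds to a positive cost variance (CV > 0, or equivalently CPI > 1), and being ahead of schedule corresponds to a positive schedule variance (SV > 0).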
Number of users: 20-30 joint warfighter logistician users; 13,000 single-sign-on users
Operations start date: March 2009 (deployment of initial operational capability for Increment 7)
The Global Combat Support System-Joint (GCSS-J) Increment 7 is a system that supports military logistics operations that provide military personnel with the supplies and information they need to accomplish their missions. GCSS-J combines data, such as the location and quantity of a particular resource, from multiple authoritative data sources (e.g., Asset Visibility, Joint Operation Planning and Execution System, and Global Decision Support System) and analyzes the data to provide information needed by logistics decision makers. The end users of the system are the logisticians at the various Combatant Commands, which are made up of representatives from multiple branches, each having a geographical or functional responsibility. According to Defense Information Systems Agency (DISA) officials, the analyses generated by the system enable the commanders of the Combatant Commands to rapidly make critical decisions, and to plan, execute, and control logistics operations. Additionally, the system provides other end users with single sign-on access to the individual data sources. The diverse end user group, combined with a wide spectrum of data, provides a unified supply chain for the Army, Navy, Air Force, and Marine forces in a given area, which is to help eliminate inefficiencies and provide a more useful view into the supply chain. DISA started GCSS-J in 1997 as a prototype. The system is being developed incrementally using Agile software development—specifically, the Scrum methodology. DISA is currently developing and deploying major releases for Increment 7. A total of five major releases were planned within Increment 7; Releases 7.0 and 7.1, which were implemented in March 2009 and December 2009, respectively, were the subject of our review. To date, according to DISA, Increment 7 releases have improved performance and provided new capabilities and enhancements to existing capabilities. For example, the system provides real-time information about road conditions, construction, incidents, and weather to facilitate rapid deployment of military assets. The Manufacturing Operations Management (MOMentum) Project aims to replace a suite of aging, mission-essential shop floor manufacturing control systems at the Y-12 National Security Complex that support the National Nuclear Security Administration's (NNSA) Stockpile Stewardship and Management Program. The shop floor at the Y-12 complex is responsible for the construction, restoration, and dismantling of nuclear weapon components. The core software currently used in the shop floor manufacturing control systems was deployed in the mid-1980s and will no longer be supported by the vendor on its current hardware platform beginning in 2012. The MOMentum Project has two phases. Phase 1, which was the subject of our review, was implemented in September 2010, and is a deployment of the Production Planning module of SAP for manufacturing planning and scheduling. Phase 2 is to include the deployment of the Manufacturing Execution module of SAP software and support the execution of production schedules on the shop floor. Phase 2 is scheduled to be completed in September 2013. The implementation of the system is expected to save $6 million annually, reduce cycle times for manufacturing, remove dependencies on obsolete technology and unsupported software, and reduce administrative errors and product deviations, among other things.
Operations start date: September 2008 (initial operating capability), June 2009 (full operating capability)
To facilitate inspections at the nation's 330 air, sea, and land ports of entry, the Western Hemisphere Travel Initiative (WHTI) requires all citizens of the United States and citizens of Canada, Mexico, and Bermuda traveling to the United States as nonimmigrant visitors to have a passport or other accepted document that establishes the bearer's identity and citizenship to enter the country from within the Western Hemisphere. In order to implement WHTI at the land border while limiting its impact on the public, U.S. Customs and Border Protection (CBP) engaged a contractor to procure and deploy technology—including Radio Frequency Identification, License Plate Reader, and Vehicle Primary Client technologies. These technologies help to provide CBP officers with law enforcement and border crossing history information for each traveler and vehicle. Initial operating capability was achieved in September 2008 when these technologies were deployed to two ports of entry. Full operating capability was achieved in June 2009 when the WHTI technology was deployed to 37 additional ports of entry. The 39 total ports of entry are high-volume land ports with 354 traffic lanes supporting 95 percent of land border traffic. After reaching full operating capability, the program's scope was expanded to include deployment of technology and processes to outbound operations, inbound pedestrian processing, and border patrol checkpoint processing. For purposes of our report, we focused on the program's efforts to achieve full operating capability at 39 land ports of entry. In October 2009, we reported that WHTI fully met 6 of the 11 key practices for implementing earned value management and partially met the remaining 5 practices. Weaknesses in the partially met practices included, for example, a master schedule with activities that were out of sequence or lacked dependencies. Nevertheless, we reported that, according to program officials, the WHTI contract was completed on time and on budget. We recommended that the department modify its earned value management policies to be consistent with best practices, implement earned value management practices that address identified weaknesses, and manage negative earned value trends. Additionally, in June 2010, we reported that program officials anticipated total funding shortfalls for the second phase of the program (which is outside of the scope of our review) for fiscal years 2011 through 2015. Further, we reported that schedule delays for a CBP effort to upgrade local and wide area network bandwidth capacity at ports of entry could jeopardize program performance, particularly in terms of response times. Nonetheless, we noted that actual response times exceeded the expected performance levels from June 2009 to June 2010. We did not make any new recommendations at that time.
Operations start date: April 2003 (initial operating capability), August 2010 (full operating capability)
Initially operational since April 2003, the Federal Aviation Administration's (FAA) Integrated Terminal Weather System (ITWS) provides weather information to air traffic controllers and flight support personnel. ITWS receives observation and forecast data from the National Weather Service and combines them with data from FAA terminal sensors and sensors on nearby aircraft to integrate weather hazard information for air traffic controllers, air traffic managers, and airlines.
This information is presented to end users in one integrated display. According to FAA, a prototype ITWS solution was deployed to four airports beginning in 1994. Based on those successful prototypes, FAA engaged a contractor in 1997 to design, develop, test, and deploy the ITWS system. The system was deployed to its first site in 2003; deployments to other sites continued through August 2010. According to FAA officials, one main advantage of ITWS is that it can provide a 60-minute forecast that can anticipate short-term weather changes (such as tornadoes, thunderstorms, hail, and severe icing) that could result in airplane delays or diversions to other airports, which affect the capacities at the airports. The pre-ITWS system did not have the capability to do this. According to FAA, the implementation of ITWS increases terminal airspace capacity by 25 percent in certain weather conditions and serves to maintain capacity when it would otherwise be lost. The Internal Revenue Service's (IRS) Business Systems Modernization program, which began in 1999, is a multibillion-dollar, high-risk, highly complex effort that involves the development and delivery of a number of modernized tax administration and internal management systems, as well as core infrastructure projects. These systems are intended to replace the agency's aging business and tax processing systems, and provide improved and expanded service to taxpayers and internal business efficiencies for IRS. One of the cornerstone projects since the inception of the Business Systems Modernization program has been the Customer Account Data Engine (CADE), which was slated to modernize taxpayer account processing through replacement of the legacy Individual Master File, a 40-year-old sequential, flat-file master file processing system for individual taxpayers. In August 2008, IRS began defining a new strategy—referred to as CADE 2—which would build on the progress that the current CADE processing platform had created and leverage lessons learned to date. IRS plans to deliver CADE 2 functionality incrementally through three phases: (1) Transition State 1, (2) Transition State 2, and (3) Target State.
Operations start date: January 2012 (estimated completion date for Transition State 1)
Transition State 1 consists of the following two projects:
Daily processing: This project is to enable IRS to process and post all eligible individual taxpayer returns filed and other transactions by updating and settling individual taxpayer accounts in 24 to 48 hours with current, complete, and authoritative data, and provide employees with timely access.
Database implementation: This project is to establish the CADE 2 database, a relational database that will house data on individual taxpayers and their accounts; develop a capability to transfer data from the Individual Master File to the database; and provide for the access of data from the database to downstream IRS financial, customer service, and compliance systems.
In April 2011, IRS completed the Transition State 1 detailed design phase, which includes activities such as documenting the physical design of the solution. For purposes of this report, we focused only on the IRS's efforts on Transition State 1 through the completion of the detailed design phase. In March 2011, we reported that although IRS had taken some positive steps on defining benefits, estimating costs, and managing risks for CADE 2, it did not fully identify and disclose the CADE 2 costs and benefits.
Specifically, we reported that although IRS had identified benefits for the first phase of CADE 2, it had yet to set quantitative targets for 5 of the 20 identified benefits, and had yet to finalize the benefits expected in Transition State 2 or define related quantitative targets; although IRS's process for developing preliminary life-cycle cost estimates was generally consistent with best practices, the agency did not perform all practices associated with credible cost estimates; the schedule for delivering the initial phase of CADE 2 was ambitious; and IRS's process for managing the risks associated with CADE 2 was generally consistent with best practices. Our recommendations included (1) identifying all of the benefits associated with CADE 2, setting the related targets, and identifying how systems and business processes might be affected, and (2) improving the credibility of revised cost estimates. During the development of the National Flu Plan, which was released in 2006, the White House Homeland Security Council directed VA to develop an employee health tracking and management system. According to VA officials, the need for this system became urgent due to the threat of pandemic influenza in 2007. As a result, the Veterans Health Administration (VHA), working with VA's Office of Information and Technology, developed the Occupational Health Record-keeping System (OHRS). According to VA officials, OHRS was divided into two increments. The first increment consisted of a minimum feature set, which represented the functionality that would provide the agency with the largest return on investment. The first increment became operational in September 2009. The second increment was intended to add functionality to the minimum feature set and to address any remaining requirements. For purposes of our report, we focused on the first increment—VA's efforts to acquire the minimum feature set. OHRS was developed using Agile software development—specifically, the Scrum methodology. OHRS serves as the electronic health record system specifically for VA employees. OHRS provides the end users (i.e., VHA employees who work in occupational health offices at VHA healthcare facilities) the ability to collect and monitor clinical data on VA employees (e.g., specific immunizations and medical training) and generate reports. Additionally, a VA official stated that OHRS allows physicians to document a number of health issues related to the workforce, including training and infectious disease management. Among other things, the information in this system is used to allocate staff to appropriate patient care assignments. For example, the system can identify whether a provider has received a vaccine for a certain illness and is therefore able to treat a patient with that illness. Nine factors were identified as critical to the success of three or more of the seven IT investments. The factors most commonly identified include active engagement of stakeholders, program staff with the necessary knowledge and skills, and senior department and agency executive support for the program. These nine critical success factors are consistent with leading industry practices for IT acquisitions. Table 2 shows the nine factors, and examples of how agencies implemented them are discussed below.
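As a concrete illustration of the three-or-more selection rule described above, the tally can be sketched as follows; the factor lists assigned to each investment here are hypothetical placeholders, not the actual interview data from our review:

```python
from collections import Counter

# Hypothetical interview results: each investment maps to the success
# factors its officials cited (illustrative factor names only).
factors_by_investment = {
    "DRIS": ["stakeholder engagement", "skilled staff", "contractor communication"],
    "GCSS-J": ["stakeholder engagement", "skilled staff", "prioritized requirements"],
    "WHTI": ["stakeholder engagement", "executive support", "sufficient funding"],
    "CADE 2": ["stakeholder engagement", "skilled staff", "executive support"],
}

# Count how many investments cited each factor, then keep only the
# factors identified by three or more investments.
tally = Counter(f for factors in factors_by_investment.values() for f in factors)
common_factors = [factor for factor, count in tally.items() if count >= 3]
print(common_factors)  # ['stakeholder engagement', 'skilled staff']
```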
Officials from all seven selected investments cited active engagement with program stakeholders—individuals or groups (including, in some cases, end users) with an interest in the success of the acquisition—as a critical factor to the success of those investments. Agency officials stated that stakeholders, among other things, reviewed contractor proposals during the procurement process, regularly attended program management office sponsored meetings, were working members of integrated project teams, and were notified of problems and concerns as soon as possible. For example: Census officials stated that the DRIS stakeholders were members of the integrated project team. Their responsibilities as members of the team included involvement in requirements development, participation in peer reviews of contractual deliverables, and review of contractor proposals. IRS officials told us that consistent and open communication with internal and external stakeholders has been critical to the success of CADE 2. For example, IRS officials told us that they regularly report progress made on CADE 2, as well as risk information on the program to oversight bodies, IRS executives, and IRS internal stakeholders. In addition, officials from two investments noted that actively engaging with stakeholders created transparency and trust, and increased the support from the stakeholders. For example, NNSA officials noted that notifying MOMentum stakeholders of potential issues as soon as they were identified helped to foster transparency and trust; this included getting stakeholders’ approval to use a cost- and schedule-tracking approach that was not the agency’s policy, but which ultimately saved the program money and time. Additionally, CBP officials noted that communication with the WHTI stakeholders was greatly enhanced by the use of a consistent message that described, for example, the goals of the program, deployment plans, privacy implications of the Radio Frequency Identification infrastructure, and impact of the program on select groups crossing the border, including U.S. and Canadian children and Native Americans. CBP officials stated that this standardization created a consistent, unified vision and ensured that the message stayed on course. Consistent with this factor, relevant guidance calls for programs to coordinate and collaborate with stakeholders in order to address their concerns and ensure that they fulfill their commitments. Active engagement with stakeholders increases the likelihood that the program will not encounter problems resulting from unresolved stakeholder issues. Officials from six of the seven selected investments indicated that the knowledge and skills of the program staff were critical to the success of the program. This included knowledge of acquisitions and procurement processes, monitoring of contracts, large-scale organizational transformation, Agile software development concepts, and areas of program management such as earned value management and technical monitoring. For example: IRS officials stated that the Treasury Secretary utilized his critical position pay authority to hire executives for CADE 2 who had demonstrated success in managing large-scale transformation efforts in accordance with best practices. Specifically, IRS officials stated that the CADE 2 program manager was previously responsible for the design, development, and implementation of several major global information technology solutions for a major corporation. 
CBP officials explained that a factor critical to the success of the acquisition was that almost every member of the team working on WHTI had a good understanding of acquisitions—some even held acquisition certifications—in addition to their understanding of program management. According to those officials, these skills contributed to effective program oversight of the WHTI contractors through all phases of the acquisition, not just during contract award. Additionally, officials from three of the seven investments also cited the use of subject matter experts’ knowledge in their cognizant areas as a contributing factor to their programs’ successes. For example, VA officials stated that the OHRS program relied extensively on the subject matter experts’ occupational health experience—treating them as part of the development team and including them in decision making. Two investments in our sample even went one step further—by selecting the program manager from the end user organization as opposed to an individual with an IT background. For example, NNSA officials stated that they used a project manager from the end user organization as opposed to an individual from the department’s information technology office. This individual had decades of experience managing shop floor control systems. As a result, he was well aware of how the work on the shop floor is done and focused on safely delivering the necessary functional requirements to the end user. Leading guidance also recognizes that programs should ensure that program staffs acquire the knowledge and skills needed to perform the project. Individuals who have developed the knowledge and skills needed for the programs are more likely to perform their roles effectively and efficiently. Officials from six of the seven selected investments identified support from senior department and agency executives as critical to the success of their programs. According to those officials, these senior leaders supported the success of these programs in various ways, such as by procuring funding, providing necessary information at critical times, intervening when there were difficulties working with another department, defining a vision for the program, and ensuring that end users participated in the development of the system. For example: The WHTI program manager told us that the former DHS Deputy Secretary reached out to another department in order to finalize a memorandum of understanding that would be used to share information on passports and passcards needed for WHTI. According to the WHTI program manager, prior to the Deputy Secretary’s involvement, the other department’s efforts to collaborate on this issue were not meeting the schedule requirements of the WHTI program. That official told us that after receiving the necessary support from the other department, CBP was able to more rapidly query that department’s data. IRS officials explained that endorsement for CADE 2 has come from the highest levels of the organization. In particular, those officials told us that the IRS Commissioner has made CADE 2 one of his top priorities. IRS officials told us that the Commissioner, through, for example, his keynote speech at a CADE 2 town hall meeting for IRS employees, has provided a clear and unwavering message about CADE 2. This speech and other activities have unified IRS employees, driven change, and removed barriers that can often impede programs of this magnitude. 
In our experience, strong leadership support can result in benefits to a program, including providing the program manager with the resources necessary to make knowledge-based, disciplined decisions that increase the likelihood of the program's success. Officials from five of the seven selected investments identified the involvement of stakeholders—including end users—in the requirements development process as a factor that was critical to the success of their programs. For example: Census officials told us that the DRIS program management office collaborated extensively with the stakeholders and the contractor to develop requirements. For example, program management office personnel, contractor staff, and the stakeholders all worked together to analyze the requirements in order to ensure they were understood, unique, and verifiable. VA officials told us that an OHRS end user identified a set of requirements for an occupational health system 3 years prior to the initiation of OHRS development efforts. Those officials told us that the developers worked closely with the OHRS end user representative to ensure that those requirements were still valid once the program was initiated, given the length of time since the requirements were initially identified. Relevant industry guidance recognizes the importance of eliciting end user needs and involving stakeholders in requirements development. When stakeholders and end users communicate their requirements throughout the project life cycle, the resulting system is more likely to perform as intended in the end user's environment. Officials from five of the seven selected investments identified having the end users test and validate the system components prior to formal end user acceptance testing for deployment as critical to the success of their program. For example: DISA officials told us they used a virtual site to connect developers and end users for repeated online testing of evolving software during the development of GCSS-J. Using the tool, the developers were able to record the sessions, which was helpful in addressing defects identified during testing. CBP created a fully functional test lab for the WHTI program at a mock port of entry constructed at an old private airport in Virginia. Using this facility, they were able to test the software that was being developed and the hardware that was being proposed. Additionally, a core end user group was established and brought to the facility multiple times a year during the acquisition to test the forthcoming technology. Similar to this factor, leading guidance recommends testing selected products and product components throughout the program life cycle. Testing of functionality by end users prior to acceptance demonstrates, earlier rather than later in the program life cycle, that the functionality will fulfill its intended use. If problems are found during this testing, programs are typically positioned to make changes that are less costly and disruptive than ones made later in the life cycle would be. Officials from four of the seven selected investments stated that government and contractor organizations' personnel were consistent and stable. For example: DISA officials indicated that the longevity of the program management office and contractor staffs has been a contributing factor to GCSS-J's success. For example, the longevity of the staff contributed to them becoming subject matter experts in their areas of responsibility.
CBP officials explained that key program management office staff remained consistent throughout the WHTI program. In addition, according to a CBP official, the staffs genuinely liked to work with one another and were able to collaborate effectively. This factor is consistent with relevant guidance that espouses the importance of having adequate and skilled resources. In particular, having consistent and stable staff can allow teams to keep pace with their workload, make decisions, and have the necessary accountability. Officials from four of the seven selected investments cited the prioritization of requirements as enabling the efficient and effective development of system functionality. For example: FAA officials told us that ITWS end users presented the development team with a "wish list" of requirements that would help them significantly. Those officials told us that end users and developers prioritized those requirements by balancing importance to the end users with the maturity of the technology. FAA officials stated that prototypes of these new requirements were developed and evaluated by end users in the field and were ultimately implemented in the initial operating capability for ITWS. DISA officials explained that during development, GCSS-J end user representatives met with the GCSS-J program office and the GCSS-J developer twice a week for between a half day and a full day in order to identify and prioritize requirements. Those officials explained that this frequent interaction was necessary because of the short development iterations (4 to 5 weeks), at the end of which usable functionality was presented to the end users for review. The frequent prioritization ensured that the functionality most critical to the end user representative was developed, and could be deployed sooner than functionality of less importance. Consistent with leading guidance, having prioritized requirements guides the programs in determining the system's scope and ensures that the functionality and quality requirements most critical to the end users are deployed before less-desired requirements. Officials from four of the seven selected investments indicated that regular communication between the program management office and the prime contractor was critical to the success of the program. This communication was proactive in that there were regularly scheduled meetings between the program management office and the prime contractor, with an expectation of full and honest disclosure of problems. For example: Census officials stated that the DRIS program management office took a proactive, "no surprises" approach to communicating with the contractor. For example, on a monthly basis, the program management office formally documented the technical performance of the contractor based on the relevant elements of the work breakdown structure and the award fee plan. These reports were provided to the contractor, who in turn used the feedback to improve its technical performance. In addition, DRIS program managers and their contractor counterparts met weekly to discuss significant issues. DRIS officials emphasized that the expectation of open communication and trust from senior leadership fostered an environment where issues could be freely discussed with the contractor. CBP officials stated that during the deployment of the WHTI technology to the ports of entry, the program management office held daily conference calls with the contractor to ensure proper coordination and the rapid resolution of problems.
For example, during deployment to one port of entry it was determined that the electric system that provided power to the lanes was not adequate. This problem was quickly identified, responsibility for resolving it was assigned, and the issue was quickly resolved. Additionally, Census and VA officials stated that ensuring a positive, non-adversarial relationship between the prime contractor and the program management office was critical to the success of the investment. Census officials noted that both the government and the contractor staff recognized that the only way for the program to succeed was for both parties to succeed. Consistent with this factor, leading guidance recognizes the importance of communication between program officials and the contractor organizations. Implementation of this critical success factor enables programs to ensure that requirements are understood and risks and issues are identified and addressed earlier rather than later in the process, thereby increasing the likelihood that the delivered system will meet its intended purpose and resulting in less costly and less disruptive changes and work efforts. Officials from three of the seven selected investments explained that sufficient funding for the programs contributed to the success of those investments. Officials from two of the investments attributed funding to strong congressional support; in a third case, officials cited strong leadership from senior agency and program officials as being a factor. For example: The WHTI program manager stated that the WHTI program received the requested funding from Congress for the 2 years leading up to the June 1, 2009, mandated implementation date. Additionally, that official told us that Congress provided 2-year money, that is, money that could be obligated over a period of 2 years. Officials told us that the 2-year money gave the program great flexibility to accommodate the inherent complexities and expenditures incurred in a multiyear deployment, and to adapt to inevitable modifications in deployment requirements (that is, additional sites, lanes, and functionality). IRS officials told us that the IRS Commissioner helped the CADE 2 program obtain funding. For example, those officials told us that the IRS Commissioner spoke with congressional representatives frequently in order to sustain interest and support for CADE 2. Relevant guidance recognizes the importance of sufficiently funding IT investments. Investments that receive funding commensurate with their requirements are better positioned to ensure the availability of needed resources, and therefore, deliver the investment within established goals. The nine commonly identified critical success factors are consistent with OMB's 25-point plan to improve IT management and oversight. In particular, one high-level objective of the plan—effectively managing large-scale IT programs—aims to improve areas that impact the success rates of large IT programs across the federal government. As part of this high-level objective, the plan addresses the importance of ensuring that program management professionals have extensive experience and training, defining requirements by engaging with stakeholders, and providing senior executives with visibility into the health of their IT programs. These principles of effective IT management are reflected in the commonly identified critical success factors.
For example, as previously mentioned, six of the seven agencies identified the knowledge and skills of program staff and five of the seven agencies cited the involvement of end users and stakeholders in the development of requirements as critical to the success of their IT investments. While our analysis of critical success factors identified by agencies resulted in nine commonly identified factors, agencies also identified additional factors as contributing to the success of their investments. For example:
Agile software development: DISA officials stated that the use of Agile software development was critical to the success of the program. Among other things, Agile enhanced the participation of the end users in the development process and provided for capabilities to be deployed in shorter periods of time.
Streamlined and targeted governance: IRS officials told us that in comparison to other IRS business systems modernization projects, the governance model for CADE 2 has been streamlined. For example, those officials stated that the CADE 2 governance structure includes an executive steering committee that, in contrast to other programs at IRS that utilize an executive steering committee, is dedicated solely to the CADE 2 program. IRS officials told us that this gives an added measure of accountability and responsibility for the successful outcome of the program.
Continuous risk management: VA officials stated that the risk management strategy that the program used was critical to its success. According to the VA officials, risks were identified at daily team meetings and mitigation strategies were developed. Furthermore, an official explained that risk management is built into the Agile software development process by, for example, involving the end user early and often to ensure that the requirements were as thoroughly vetted as possible.
Several of these factors are also consistent with best practices, such as the critical factors relating to risk management and governance. The full list of critical success factors and how agencies implemented them is presented in appendix II. Although the critical success factors identified by the seven agencies were cited as practices that contributed to the success of their acquisitions, implementation of these factors will not necessarily ensure that federal agencies will successfully acquire IT systems because many different factors contribute to successful acquisitions. Nevertheless, the examples of how agencies implemented the critical success factors may help federal agencies address the well-documented acquisition challenges they face. Moreover, the critical success factors in this report also support OMB's objective of improving the management of large-scale IT acquisitions across the federal government, and wide dissemination of these factors and how agencies implemented them could complement these efforts. We received written, e-mail, or verbal responses on a draft of this report from all seven departments in our review as well as OMB. These responses are summarized below. The Acting Secretary for the Department of Commerce provided written comments. The department stated that the report provides a good overview and assessment of governmentwide critical factors and elements that led to the successful acquisition of IT investments. The department also provided technical comments, which we incorporated as appropriate.
An acquisition analyst from the Department of Defense CIO Acquisition Directorate, writing on behalf of the department, provided an e-mail, which stated that the department had no comments on the draft report. The Director of the NNSA’s Office of Internal Controls, responding on behalf of the Department of Energy, provided an e-mail stating that they agreed with the report and had no further comments. They also noted that the department is committed to supporting OMB’s objective of improving the management of large-scale IT acquisitions, and that wide dissemination of the factors in our report could complement OMB’s efforts. The Director of DHS’s Departmental GAO/Office of Inspector General Liaison Office provided written comments. In its comments, the department noted that it remains committed to continuing its work with OMB to improve the oversight and management of IT investments to help ensure that systems are acquired on time and within budget, and that they deliver the expected benefits and functionality. The department further stated that it will use this report to enhance and improve the factors critical to the successful acquisition of the department’s investments, such as creating a structured training program to assist in obtaining certification in the program management career field, and conducting reviews to provide insight into the cost, schedule, and performance of IT investments. The department also provided technical comments, which we incorporated as appropriate. The Deputy Director of Audit Relations within the Department of Transportation’s Office of the Secretary provided an e-mail with technical comments, which we incorporated as appropriate. A program analyst within the Office of the Chief Information Officer for the Department of the Treasury, writing on behalf of the department, provided an e-mail, which stated that the department had no comments on the draft report. The Department of Veterans Affairs Chief of Staff provided written technical comments, which we incorporated as appropriate. A policy analyst from OMB’s Office of E-Government and Information Technology, speaking on behalf of OMB, provided verbal technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees; the Director of OMB; the secretaries and agency heads of the departments and agencies addressed in this report; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to (1) identify federal information technology (IT) investments that were or are being successfully acquired and (2) identify the critical factors that led to the successful acquisition of these investments. To address our first objective, we selected 10 departments with the largest planned IT budgets as reported in the Office of Management and Budget’s (OMB) fiscal year 2011 Exhibit 53. 
Collectively, these departments accounted for 88 percent of the federal government's requested total IT budget for fiscal year 2011. We then asked the chief information officers (CIO) and other acquisition and procurement officials from the departments to select one major, mission-critical IT investment that was, preferably, operational and that best achieved its cost, schedule, scope, and performance goals. Seven departments—the Departments of Defense, Commerce, Energy, Homeland Security, Transportation, the Treasury, and Veterans Affairs—identified successful IT investments. Collectively, these departments accounted for 73 percent of the planned IT spending for fiscal year 2011. To address our second objective, we interviewed officials responsible for each investment, asking them to identify and describe the critical factors that led to their success, and to provide examples where possible. We validated our understanding of the factors and examples collected during the interviews by providing written summaries to agency officials to ensure that their information was accurately portrayed. Because of the open-ended nature of our discussions with officials, we conducted a content analysis of the information we received in order to identify common critical success factors. We then totaled the number of times each factor was mentioned by department and agency officials, choosing to report on the critical success factors that were identified by three or more investments. This resulted in our list of nine commonly identified critical success factors. We then compared these nine critical success factors to leading industry practices on IT acquisitions, such as the Software Engineering Institute's (SEI) Capability Maturity Model® Integration (CMMI®) for Acquisition, the Project Management Institute's A Guide to the Project Management Body of Knowledge, and GAO's Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. Finally, we compared the nine commonly identified critical success factors to OMB's 25 Point Implementation Plan to Reform Federal Information Technology Management in order to determine whether those critical success factors are related to the high-level objectives found in the plan. The following seven tables provide a description of critical success factors identified by officials with each of the investments in our sample. In addition to the contact named above, Deborah A. Davis (Assistant Director), Kaelin P. Kuhn, Lee McCracken, Thomas E. Murphy, Jamelyn, and Andrew Stavisky made key contributions to this report.
MDA's mission is to develop and field an integrated and layered Ballistic Missile Defense System (BMDS) to defend the United States, its deployed forces, allies, and friends against all ranges of enemy ballistic missiles in all phases of flight. This is challenging, requiring a complex combination of defensive components—space-based sensors, surveillance and tracking radars, advanced interceptors, command and control, and reliable communications—that work together as an integrated system. A typical hit-to-kill engagement scenario for an intercontinental ballistic missile (ICBM) would unfold as follows: Infrared sensors aboard early-warning satellites detect the hot plume of a missile launch and alert the command authority of a possible attack. Upon receiving the alert, land- or sea-based radars are directed to track the various objects released from the missile and, if so designed, to identify the warhead from among spent rocket motors, decoys, and debris. When the trajectory of the missile's warhead has been adequately established, an interceptor—consisting of a "kill vehicle" mounted atop a booster—is launched to engage the threat. The interceptor boosts itself toward a predicted intercept point and releases the kill vehicle. The kill vehicle uses its onboard sensors and divert thrusters to detect, identify, and steer itself into the warhead. With a combined closing speed on the order of 10 kilometers per second (22,000 miles per hour), the warhead is destroyed through a "hit-to-kill" collision with the kill vehicle above the atmosphere.
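As a quick arithmetic check on the closing speed quoted in the engagement scenario above (our computation, not a figure from the report):

\[
10\ \text{km/s} \times 3{,}600\ \text{s/h} = 36{,}000\ \text{km/h} \approx \frac{36{,}000}{1.609}\ \text{mph} \approx 22{,}400\ \text{mph},
\]

which is consistent with the roughly 22,000 miles per hour cited for a hit-to-kill intercept.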
To develop a system capable of carrying out such an engagement, MDA is executing an evolutionary acquisition strategy in which the fielding of missile defense capabilities is organized in 2-year increments known as blocks. Each block is intended to provide the BMDS with capabilities that will enhance the development and overall performance of the system. The first block—Block 2004—ended on December 31, 2005, and fielded a limited initial capability that included early versions of the Ground-based Midcourse Defense (GMD); Aegis Ballistic Missile Defense (Aegis BMD); Patriot Advanced Capability-3; and the Command, Control, Battle Management, and Communications (C2BMC) element. During calendar years 2006 and 2007, MDA is focusing its program of work on the enhancement of four fielded BMDS elements—GMD, Aegis BMD, Sensors, and C2BMC. The primary contribution of Block 2006 is that it fields additional assets and continues the evolution of Block 2004 by providing improved GMD interceptors, enhanced Aegis BMD missiles, upgraded Aegis BMD ships, a Forward-Based X-Band Transportable radar, and enhancements to the C2BMC software. MDA divides each year's budget request into a request for the current block and requests for future blocks that have not yet formally begun. For example, in fiscal year 2006, MDA requested funds for Block 2006 and for blocks that begin in 2008, 2010, 2012, and 2014. When MDA submitted its Block 2006 budget to Congress in February 2005, the agency requested funding for not only the four elements fielding assets during Block 2006, but also for the continued development of three elements—ABL, STSS, and THAAD—that will not field assets for operational use until future blocks. According to MDA officials, these elements—which are primarily developmental elements—were included in the block because the agency believed that during the block time frame the elements offered some emergency capability. MDA also requested fiscal year 2006 funds for two other developmental elements, MKV and KEI. However, MDA did not include funding for these elements in its Block 2006 budget request because they provided no capability during the block. Instead, MDA requested funding for MKV in its fiscal year 2006 request for Advanced Component Development and Prototypes—a program element that is not tied to any block—and for KEI in the agency's fiscal year 2006 request for Block 2014. Table 1 provides a brief description of all elements being developed by MDA. As part of MDA's planning process, the agency defines overarching goals for the development and fielding of the current BMDS block. These goals identify the composition of the block (the elements in development and those planned for fielding), the type and quantity of assets to be fielded, the cost associated with element development and fielding (including operation and sustainment activities), and the performance expected of fielded assets. For example, in March 2005, MDA told Congress that its Block 2006 program of work would include seven elements—ABL, Aegis BMD, C2BMC, GMD, Sensors, STSS, and THAAD. Further, MDA identified the cumulative number of assets that Aegis BMD, C2BMC, GMD, and the Sensors elements would field by the end of the block, and the performance that those assets would deliver in terms of probability of engagement success, the land area from which a ballistic missile launch could be denied, and the land area that could be protected from a ballistic missile launch. Finally, MDA told Congress that it would try to complete all Block 2006 work for $20.458 billion. To enable MDA to meet its overarching goals, each element's program office establishes its own plan for fielding and/or developmental activities. For example, each program office develops a delivery plan and a test schedule that contributes to MDA's performance and fielding goals. The programs also work with their prime contractor to plan the block of work so that it can be completed within the program's share of MDA's budget. Since 2002, missile defense has been seen as a national priority and has been funded at nearly requested levels. However, DOD's Program Budget Decision in December 2004 called for MDA to plan for a $5 billion reduction in funding over fiscal years 2006-2011. Future MDA budgets could be affected by cost growth in federal entitlement programs that are likely to decrease discretionary spending and by increased DOD expenditures, such as expenses created by the Iraq conflict. Last year we reported that MDA strayed from the knowledge-based acquisition strategy that allows successful developers to deliver, within budget, a product whose performance has been demonstrated. In doing so, MDA fielded assets before their capability was known, and the full cost of the capability was not transparent to decision makers. We noted that it was possible for MDA to return to a knowledge-based approach to development while still fielding capability in blocks, but that corrective action was needed to put all BMDS elements on a knowledge-based approach. That is, instead of concurrently developing, testing, and fielding the BMDS, MDA would need to adopt knowledge points at which the program would determine if it was ready to begin new acquisition activities. These knowledge points would be consistent with those called out in DOD's acquisition system policy.
To provide a basis for holding MDA accountable for delivering within estimated resources and to ensure the success of future MDA development efforts, we recommended that the Secretary of Defense implement a knowledge-based acquisition strategy for all the BMDS elements, assess whether the current 2-year block strategy was compatible with the knowledge-based development strategy, and adopt more transparent criteria for reporting each element's quantities, cost, and performance. DOD has not taken any action on the first two recommendations because it considers MDA's acquisition strategy to be knowledge-based and MDA's block strategy to be compatible with that strategy. Neither did DOD agree to take action on our third recommendation, to adopt more transparent criteria for identifying and reporting program changes. In its comments, DOD responded that MDA is required by statute to report significant variances in each block's baseline and that these reports, along with quarterly DOD reviews, provide an adequate level of program oversight. MDA made progress during fiscal year 2006 in carrying out planned accomplishments for the block elements, but it will not deliver the value originally planned for Block 2006. Costs have increased, while the scope of work has decreased. It is also likely that, in addition to fielding fewer assets, other Block 2006 work will be deferred to offset growing contractor costs. Actual costs cannot be reconciled with original goals because the goals have been changed, work travels to and from other blocks, and individual program elements do not account for costs consistently. In addition, although element program offices achieved most of their 2006 test objectives, the performance of the BMDS cannot yet be fully assessed because there have been too few flight tests conducted to anchor the models and simulations that predict overall system performance. Several elements continue to experience technical problems, which raise questions about the performance of the fielded system and could delay the enhancement of future blocks. Block 2006 costs have increased because of technical problems and greater-than-expected GMD operations and sustainment costs. In March 2006, shortly after the formal initiation of Block 2006, increasing costs and other events prompted MDA to reduce the quantity of assets it intended to field during the block. Although the agency reduced the scope of Block 2006, most of the elements' prime contractors reported that work completed during fiscal year 2006 cost more than planned. Consequently, MDA officials told us it is likely that other work planned for Block 2006 will be deferred until Block 2008 to cover fiscal year 2006 overruns. Furthermore, changing goals, inconsistent reporting of costs by the individual elements, and MDA's past practice of accounting for the cost of deferred work prevent a determination of the actual cost of Block 2006. MDA's cost goal for Block 2006 has increased by approximately $1 billion. In March 2005, MDA established a goal of $20.458 billion for the development, fielding, and sustainment of all Block 2006 components. However, by March 2006, it had grown by about $1 billion.
Cost increases were caused by the:
- addition of previously unknown operations and sustainment requirements,
- realignment of the GMD program to support a successful return to flight,
- realignment of the Aegis BMD program to address technical challenges and invest in upgrades to keep pace with the near-term threat, and
- preparations for round-the-clock operation of the BMDS when the system was put on alert.

In an effort to keep costs within the goal, MDA shifted THAAD's future development costs of $1.13 billion to another block. That is, the agency moved the cost associated with THAAD's development in fiscal years 2006 through 2011—which in March 2005 was considered a Block 2006 cost—to Block 2008. This accounting change accommodated the cost increase. According to MDA's November 2006 Report to Congress, THAAD costs will be reported as part of Block 2008 costs to better align the agency's resources with the planned delivery of THAAD fire units in 2008.

Tables 2 and 3 compare the Block 2006 cost goal established for the BMDS in March 2005 and March 2006. For the purposes of this report, we have adjusted the March 2005 cost goal to reflect the deletion of future THAAD cost from Block 2006. This enables the revised cost goal that excludes THAAD to be compared with the original cost goal. Had THAAD's cost been removed from MDA's March 2005 cost goal, Block 2006 would have actually totaled about $19.3 billion. Comparing this with the March 2006 revised goal of approximately $20.3 billion reveals the $1 billion increase in estimated Block 2006 costs.

The 2-year block structure established by MDA has proven to be a complicated concept for its BMDS elements to implement. According to officials, MDA defines its block structure in terms of two types of capabilities:
- Early Capability–A capability that has completed sufficient testing to provide confidence that the capability will perform as designed. In addition, operator training is complete and logistical support is ready. So far, Aegis BMD, C2BMC, and GMD are the only elements that have met these criteria.
- Full Capability–These capabilities have completed all system-level testing and have shown that they meet expectations. At this stage, all doctrine, organization, training, material, leadership, personnel, and facilities are in place.

According to MDA officials, the early capability is typically fielded during one block and the full capability is usually attained during the next or a subsequent block. However, not all elements account for Block 2006 costs in the same manner. For example, table 4 below shows that some elements included costs that will be incurred to reach full capability—costs that will be recognized in fiscal years 2009 through 2011—while other elements have not. According to agency officials, the cost of all activities needed to validate the performance of Block 2006 Fielded Configuration elements should be included as part of the BMDS Block 2006 costs even though these activities may occur during future blocks. According to officials from MDA's Systems Engineering and Integration Directorate, the C2BMC and Aegis BMD programs' cost accounting for Block 2006 is the most accurate because the programs included the costs to conduct follow-on testing in subsequent years. Additionally, the officials said that other elements of the BMDS will conduct similar tests in the years following the actual delivery of their Block 2006 capabilities; however, the costs were not included as Block 2006 costs.
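The adjustment we made to compare the two cost goals is simple arithmetic. The sketch below restates it using the rounded dollar figures cited above, treating the $20.34 billion currently reported for Block 2006 as the revised goal; these are approximations for illustration, not MDA's official accounting.

```python
# Reconciling MDA's Block 2006 cost goals (dollars in billions), using the
# rounded figures cited in this report; these are approximations, not MDA's
# official accounting.
goal_mar_2005 = 20.458            # original goal, including THAAD FY 2006-2011 costs
thaad_moved_to_block_2008 = 1.13  # THAAD development cost shifted out of Block 2006

adjusted_mar_2005_goal = goal_mar_2005 - thaad_moved_to_block_2008
print(f"Adjusted March 2005 goal: ${adjusted_mar_2005_goal:.3f} billion")  # ~19.3

revised_mar_2006_goal = 20.34     # reported Block 2006 costs, excluding THAAD
growth = revised_mar_2006_goal - adjusted_mar_2005_goal
print(f"Estimated Block 2006 cost growth: ${growth:.2f} billion")          # ~1.0
```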
If each BMDS element were to consistently report block costs, the planned costs for Block 2006 would be higher than MDA's currently reported costs of $20.34 billion.

MDA is making some progress toward achieving its revised Block 2006 goals, but the number of fielded assets and their overall performance will be less than planned when MDA submitted its Block 2006 goals to Congress in March 2005. MDA notified Congress that it was revising its Block 2006 Fielded Configuration Baseline in March 2006, shortly after submitting its fiscal year 2007 budget. When MDA provided Congress with its quantity goals in March 2005, it stated those goals cumulatively. That is, MDA added the number of Block 2004 assets that it planned to field by December 31, 2005, to the number of assets planned for Block 2006. However, in the case of GMD interceptors, MDA was unable to meet its Block 2004 quantity goals, which, in effect, caused MDA's Block 2006 goal for interceptors to increase. For example, MDA planned to field 18 GMD interceptors by December 31, 2005, and to field an additional 7 interceptors during Block 2006, for a total of 25 interceptors by the end of Block 2006. But, because it did not meet its Block 2004 fielding goal—fielding only 10 of the 18 planned interceptors—MDA could not meet its Block 2006 cumulative goal of 25 without increasing its Block 2006 deliveries. For purposes of this report, we determined the number of assets that MDA would have to produce to meet its Block 2006 cumulative quantity goal. Table 5 depicts only those quantities and shows how they have changed over time.

According to MDA, it reduced the number of GMD interceptors in March 2006 for four primary reasons:
- delays in interceptor deliveries caused by an explosion at a supplier's facility,
- a halt in production after several flight test failures and pending Mission Readiness Task Force (MRTF) reviews,
- an MRTF review that redirected some interceptors from fielding to testing, and
- the temporary suspension of fielding interceptors due to manufacturing and quality issues associated with the exoatmospheric kill vehicle (EKV).

MDA also delayed a partial upgrade to the Thule early warning radar until a full upgrade can be accomplished. According to a July 11, 2005, DOD memorandum, the full upgrade of Thule is the most economical option and it meets DOD's desire to retain a single configuration of upgraded early warning radars. Additionally, deliveries of the Aegis BMD Standard Missile-3 (SM-3) were reduced as technical challenges associated with the Divert and Attitude Control System were addressed and as investments in upgrades were made to keep pace with emerging ballistic missile threats. According to Aegis BMD officials, the program also revised the upgrade schedule for engagement destroyers because other priorities prevented the Navy from making one ship available before the end of the block. Budget cuts to the C2BMC program also caused MDA to defer the installation of C2BMC suites at three sites. MDA had planned to install the suites at U.S. Central Command, European Command, and another site that was to be determined before the end of the block. However, MDA now plans to place less expensive Web browsers at these sites.
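Returning to the quantity goals, the conversion from MDA's cumulative goals to the block-specific deliveries shown in table 5 is likewise straightforward arithmetic. A minimal sketch, using the GMD interceptor figures cited above:

```python
# Converting MDA's cumulative fielding goal into required Block 2006
# deliveries, using the GMD interceptor figures cited in this report.
planned_by_end_of_block_2004 = 18  # interceptors planned by December 31, 2005
planned_block_2006_additions = 7
cumulative_goal = planned_by_end_of_block_2004 + planned_block_2006_additions  # 25

fielded_in_block_2004 = 10         # only 10 of the planned 18 were fielded
required_block_2006_deliveries = cumulative_goal - fielded_in_block_2004
print(required_block_2006_deliveries)  # 15 -- more than double the 7 originally planned
```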
MDA’s delivery schedules and System Element Review reports show that MDA planned to accomplish these goals by making the following progress by December 31, 2006: adding 4 Aegis BMD missiles to inventory, adding 2 new Aegis BMD destroyers for long-range surveillance and tracking, upgrading 2 Aegis BMD destroyers and 2 Aegis BMD cruisers to perform both engagement and long-range surveillance and tracking, adding 1 new Aegis BMD destroyer and 1 new cruiser with both engagement and long-range surveillance and tracking capability, completing a number of activities prior to delivering the FBX-T radar, delivering the hardware for the 3 Web browsers, and emplacing 8 GMD interceptors. With the exception of the GMD interceptors, MDA completed all work as planned. The GMD program was only able to emplace four interceptors by December 2006, rather than the eight planned. However, program officials told us that the contractor has increased the number of shifts that it is working and the program believes that with this change the contractor can accelerate deliveries and emplace as many as 24 interceptors by the end of the block. However, to do so, the GMD program will have to more than double its 2007 interceptor emplacement rate. Even though MDA reduced the quantity of assets it planned to deliver during Block 2006 to free up funds, most of the MDA’s prime contractors overran their fiscal year 2006 budgets. Collectively, the prime contractors developing elements included in Block 2006 exceeded their budgets by approximately $478 million, with GMD accounting for about 72 percent of the overrun. Table 6 contains our analysis of prime contractors’ cost and schedule performance in fiscal year 2006 and the potential overrun or underrun of each contract at completion. All estimates of the contracts’ cost at completion are based on the contractors’ performance through fiscal year 2006. Appendix II provides further details regarding the cost and schedule performance of the prime contractors for the seven elements shown in table 6. As shown in table 6, the Sensors element is the only Block 2006 element that according to our analysis performed within its fiscal year 2006 budget. The ABL, Aegis BMD, GMD, and STSS programs overran their fiscal year budgets as a result of technical problems and integration issues encountered during the year. We could not assess the C2BMC contractor’s cost and schedule performance because MDA suspended Earned Value reporting during the year as the contractor replanned its Block 2006 program of work. In addition to analyzing the fiscal year 2006 cost and schedule performance of elements included in Block 2006, we also analyzed the performance of elements included in other blocks. Of the elements reporting Earned Value data, only KEI performed within its budget. THAAD’s integration problems once again caused it to exceed its budget. We were unable to determine whether the work accomplished by the MKV contractor cost more than originally planned because Contract Performance Reports were suspended in February 2006 as the program transitioned from an advanced technology development program to a system development program. This transition prompted MKV to establish a new baseline for the program, which the contractor will not report against until early in fiscal year 2007. MDA officials told us that MDA is likely to defer some Block 2006 work activities (other than the delivery of assets) into future blocks in an effort to operate within the funds programmed for the block. 
MDA officials told us that MDA is likely to defer some Block 2006 work activities (other than the delivery of assets) into future blocks in an effort to operate within the funds programmed for the block. If the agency reports the cost of this deferred work as it has in the past, the cost of Block 2006 will not include all work that benefits the block, and the cost of the future block will be overstated. The deferral of work, while necessary to offset increased costs, complicates making a comparison of a block's actual costs with its original estimate. According to the Statement of Federal Financial Accounting Standards Number 4, a federal program should report the full cost of its outputs, which is defined as the total amount of resources used to produce the output. In March 2006, we reported that the cost of MDA's Block 2004 program of work was understated because the reported costs for the block did not include the cost of Block 2004 activities that were deferred until Block 2006. Conversely, the cost of Block 2006 is overstated because the deferred activities from Block 2004 do not directly contribute to the output of Block 2006. Similarly, if MDA decides to defer Block 2006 activities until Block 2008, as officials in MDA's Office of Agency Operations told us is likely, the cost of those activities will likely be captured as part of Block 2008 costs.

Most BMDS elements achieved their primary calendar year 2006 test objectives and conducted test activities on schedule. By December 2006, the midpoint of Block 2006, three of the six Block 2006 elements, as well as all elements considered part of future blocks, had met their 2006 primary test objectives. Only the ABL, Aegis BMD, and STSS elements were unable to achieve these objectives. Although the elements encountered test delays, some were able to achieve noteworthy accomplishments. For example, in its third flight test, the GMD program exceeded its test objectives by intercepting a target. This intercept was particularly noteworthy because it was the first successful intercept attempt for the program since 2002. Also, although the test was for only one engagement scenario, it was notable because it was GMD's first end-to-end test.

The GMD program originally planned to conduct four major flight tests during fiscal year 2006, two using operational interceptors. However, the program was only able to conduct three flight tests during the fiscal year. In one, an operational interceptor was launched against a simulated target; in a second test, a simulated target was launched to demonstrate the ability of the Beale radar to provide a weapon system task plan; and in the other, an interceptor was launched against an actual target. It was in the third test that—for one end-to-end scenario—the program exceeded test objectives by destroying a target representative of a real-world threat. The objectives of the fourth test were to be similar to those of the third test—an interceptor flying by a target with no expectation of a hit. However, program officials told us that the success of the earlier tests caused them to accelerate the objectives of the fourth test by making it an intercept attempt. The fourth test has not yet taken place because a delay in the third test caused a similar delay in the fourth test and because components of the test interceptor are being changed to ensure that they will function reliably. This test is currently scheduled for no earlier than the third quarter of fiscal year 2007.

Both the C2BMC and Sensors elements conducted all planned test activities on schedule and were able to meet their 2006 objectives.
The C2BMC software, which enables the system to display real-time target information collected by BMDS sensors, was tested in several flight tests with the Aegis BMD and GMD programs and was generally successful. The Sensors element was also able to complete all tests planned to ensure that the Forward-Based X-Band Transportable (FBX-T) radar will be ready for operations. The warfighter will determine when the FBX-T will become operational, but MDA officials told us that this may not occur until the United States is able to provide the radar's data to Japan.

MDA was unable to achieve the 2006 test objectives for the STSS program. Thermal vacuum testing that was to be conducted after the first payload was integrated with space vehicle 1 was delayed as a result of integration problems. According to program officials, testing began in January 2007 and was expected to be completed in late February 2007.

Although the Aegis BMD program conducted its planned test activities on schedule, it was unable to achieve all of its test objectives for 2006. Since the beginning of Block 2006, the program has conducted one successful intercept, which tested the new Standard Missile-3 design that is being fielded for the first time during Block 2006. This new missile design provides a capability against more difficult threats and has a longer service life than the missile produced in Block 2004. In December 2006, a second intercept attempt failed because a weapon system component was incorrectly configured and did not classify the target as a threat, which prevented the interceptor from launching. Had this test been successful, it would have been the first partial flight test of the pulse mode of the missile's Solid Divert and Attitude Control System.

A sixth BMDS element–ABL–experienced delays in its testing schedule and was also unable to achieve its fiscal year 2006 test objectives. ABL is an important element because, if it works as desired, it will defeat enemy missiles soon after launch, before decoys are released to confuse other BMDS elements. Development of the element began in 1996, but MDA has not yet demonstrated that all of ABL's leading-edge technologies will work. The ABL program plans to prove critical technologies during a lethality demonstration. This demonstration is a key knowledge point for ABL because it is the point at which MDA will decide the program's future. However, technical problems encountered with the element's Beam Control/Fire Control component caused a delay of more than 3 months in the program's ground testing, which has pushed the planned lethality demonstration to 2009. In addition, not all software problems have been resolved; according to ABL's Program Manager, they will have to be corrected before flight testing can begin, which could further delay the lethality demonstration.

The KEI element also has a key decision point—a booster flight test—within the next few years. In preparation for this test, the program successfully conducted static fire tests and wind tunnel tests in fiscal year 2006 to better assess booster performance. Upon completion of KEI's 2008 flight test and ABL's 2009 lethality demonstration, MDA will compare the progress of the two programs and decide their futures. In January 2005, MDA established ABL as the primary boost phase defense element.
At the same time, MDA restructured the KEI program to develop an upgraded long-range midcourse interceptor and reduced KEI's role in the boost phase to that of risk mitigation. A KEI official told us that a proposal under development suggests that MDA treat the 2009 decision as a down-select, or source selection, that would determine whether ABL or KEI becomes the BMDS boost phase capability.

The MKV program accomplished all of its planned activities as scheduled during fiscal year 2006, which included several successful propulsion tests. In November 2005, the program tested a preliminary design of MKV's liquid propellant divert and attitude control system–the steering mechanism for the carrier and kill vehicles. This test was a precursor to a successful July 2006 test of the liquid divert and attitude control system's divert thruster, which was conducted under more realistic conditions. The program also executed a solid propellant divert and attitude control system test in December 2005. Results of the December test, combined with a technology assessment, led program officials to pursue a low-risk, high-performance liquid-fueled divert and attitude control system. The MKV program will continue to explore other divert and attitude control system technologies for future use.

The THAAD program achieved its primary fiscal year 2006 test objectives, although it did experience test delays. The program planned to conduct five flight tests during fiscal year 2006, but was only able to execute four. During the program's first two flight tests, program officials demonstrated the missile's performance, including the operation of the missile's divert and attitude control system and the control of its kill vehicle. The third flight test, conducted in July 2006, demonstrated THAAD's ability to successfully locate and intercept a target, a primary 2006 test objective. The fourth THAAD flight test was declared a "no-test" after the target malfunctioned shortly after its launch, forcing program officials to terminate the test. THAAD officials told us that the aborted test will be deleted from the test schedule and that any test objectives not yet satisfied will be rolled up into future tests. The program planned to conduct its fifth (missile only) flight test–to demonstrate the missile's performance in the low atmosphere–in December 2006. However, due to a reprioritization of test flights, the fifth flight test is now scheduled for the second quarter of fiscal year 2007. Flight test 6, the next scheduled flight test, was successfully conducted at the end of January 2007. It was the first flight test performed at the Pacific Missile Range.

In March 2005, MDA set performance goals for Block 2006 that included a numerical goal for the probability of a successful BMDS engagement, a defined area from which the BMDS would prevent an enemy from launching a ballistic missile, and a defined area that the BMDS would protect from ballistic missile attacks. In March 2006, MDA altered its Block 2006 performance goals commensurate with reductions in Block 2006 fielded assets. Although MDA revised its goal downward, insufficient data exist to assess whether MDA is on track to meet its new goal. MDA uses the WILMA model to predict overall BMDS performance, even though this model has not been validated or verified by DOD's Operational Test Agency. According to Operational Test Agency officials, WILMA is a legacy model that does not have sufficient fidelity for BMDS performance analysis.
MDA officials told us the agency is working to develop an improved model that can be matured as the system matures. In addition, the GMD program has not completed sufficient flight testing to provide a high level of confidence that the BMDS can reliably intercept ICBMs. In September 2006, the GMD program completed an end-to-end test of one engagement sequence that the GMD element might carry out. While this test provided some assurance that the element will work as intended, the program must test other engagement sequences, which would include other GMD assets that have not yet participated in an end-to-end flight test. Moreover, independent test agencies told us that additional flight tests are needed to have a high level of confidence that GMD can repeatedly intercept incoming ICBMs.

Additional tests are also needed to demonstrate that the GMD element can use long-range surveillance and tracking data developed by the Aegis BMD element. In March 2006, we reported that Aegis BMD was unable to participate in a GMD flight test, which prevented MDA from exercising Aegis BMD's long-range surveillance and tracking capability in a manner consistent with an actual defensive mission. The program office told us that Aegis BMD is capable of performing this function and has demonstrated its ability to surveil and track ICBMs in several exercises. Additionally, Aegis BMD has shown that it can communicate this data to GMD in real time. However, because of other testing priorities, GMD has not used this data to prepare a weapon system task plan in real time. Rather, GMD developed the plan in post-test activities. Officials in the Office of the Director for Operational Test and Evaluation told us that having GMD prepare the task plan in real time would provide the data needed to more accurately gauge BMDS performance.

Delayed testing and technical problems may also affect the performance of the system and the timeliness of future enhancements to the fielded system. For example, the performance of the new configuration of the Aegis BMD SM-3 missile is unproven because, according to program officials, design changes in the missile's solid divert and attitude control system and in one burn pattern of the third-stage rocket motor were not flight tested before they were cut into the production line. MDA is considering a full flight test of the pulsed solid divert and attitude control system during the third quarter of fiscal year 2007. The solid divert and attitude control system is needed to increase the missile's ability to divert into its designated target and counter more complex threats. The zero-pulse mode of the missile's third-stage rocket motor, which is expected to provide a capability against a limited set of threat scenarios, will not be fully tested until fiscal year 2009.

Confidence in the performance of the BMDS is also reduced because the GMD element continues to struggle with technical issues affecting the reliability of some GMD interceptors. For example, GMD officials told us that the element has experienced a recurring anomaly during each of its flight tests since its first flight test, conducted in 1999. This anomaly has not yet prevented the program from achieving any of its primary test objectives, but, to date, neither its source nor its solution has been clearly identified or defined. Program officials plan to continue their assessment of current and future test data to identify the root cause of the problem.
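Although models such as WILMA are far more sophisticated, the basic sensitivity of engagement success to interceptor reliability and salvo size can be shown with a textbook calculation. The sketch below is our illustration; the single-shot probabilities are hypothetical and do not reflect BMDS performance data.

```python
# Illustrative only: probability that a salvo of n independent interceptors,
# each with single-shot kill probability p, defeats one incoming missile.
# The values of p and n are hypothetical, not BMDS performance data.
def salvo_success(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for p in (0.6, 0.7, 0.8):
    print(p, [round(salvo_success(p, n), 3) for n in (1, 2, 3)])
# Fewer fielded interceptors mean smaller salvos (lower n) per threat, or some
# threats left unengaged, which is one reason reduced quantities and unresolved
# reliability issues both lower expected system performance.
```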
The reliability of emplaced GMD interceptors also remains uncertain because inadequate mission assurance/quality control procedures may have allowed less reliable or inappropriate parts to be incorporated into the manufacturing process. Program officials plan to replace these parts in the manufacturing process, but not until interceptor 18. The program plans to begin retrofitting the previous 17 interceptors in fiscal year 2009. According to GMD officials, the cost of retrofitting the interceptors will be at least $65.5 million and could be more if replacement of some parts proves more difficult than initially expected.

The ABL program also experienced a number of technical problems during fiscal year 2006 that delayed future decisions for the BMDS program. As previously noted, the program's 2008 lethality demonstration will be delayed until 2009. The delay was caused by Beam Control/Fire Control (BC/FC) software, integration, and testing difficulties, as well as unexpected hardware failures. According to contractor reports, additional software tests were needed because changes were made to the tested versions, the software included basic logic errors, and unanticipated problems were caused by differences between the software development laboratory and ABL aircraft environments.

MDA enjoys a significant amount of flexibility in developing the BMDS, but this flexibility comes at the cost of transparency and accountability. Because the BMDS program has not formally entered the system development and demonstration phase of the acquisition cycle, it is not yet required to apply several important oversight mechanisms contained in certain acquisition laws that, among other things, provide transparency into program progress and decisions. This has enabled MDA to be agile in decision making and to field an initial BMDS capability quickly. On the other hand, MDA operates with considerable autonomy to change goals and plans, making it difficult to reconcile outcomes with original expectations and to determine the actual cost of each block and of individual operational assets.

Past Congresses have established a framework of laws that make major defense acquisition programs accountable for their planned outcomes and cost, give decision makers a means to conduct oversight, and ensure some level of independent program review. The application of these acquisition laws is typically triggered by a program's entry into system development and demonstration—a phase during which the weapon system is designed and then demonstrated in tests. The BMDS has not entered system development and demonstration because it is being developed outside DOD's normal acquisition cycle.

To provide accountability, major defense acquisition programs are required by statute to document program goals in an acquisition program baseline that, as implemented by DOD, has been approved by a higher-level DOD official prior to the program's initiation. The baseline, derived from the users' best estimates of cost, schedule, and performance requirements, provides decision makers with the program's total cost for an increment of work, average unit costs for assets to be delivered, the date that an initial operational capability will be fielded, and the weapon's intended performance parameters. The baseline is considered the program's initial business case–evidence that the concept of the program can be developed and produced within existing resources.
Once approved, major acquisition programs are required to measure their progress against the baseline and to obtain approval from a higher-level acquisition executive before making significant changes. Programs are also required to regularly provide detailed program status information to Congress, including information on program cost, in Selected Acquisition Reports. In addition, Congress has established a cost monitoring mechanism that requires programs to report significant increases in unit cost measured from the program baseline.

Other statutes ensure that DOD provides some independent verification external to the program. Title 10, United States Code (U.S.C.), section 2434 prohibits the Secretary of Defense from approving system development and demonstration, or production and deployment, of a major defense acquisition program unless an independent estimate of the program's full life-cycle cost has been considered by the Secretary. The independent verification of a program's cost estimate allows decision makers to gauge whether the program is executable given other budget demands, and it increases the likelihood that a program can execute its plan within estimated costs. In addition, 10 U.S.C. § 2399 requires completion of initial operational test and evaluation of a weapon system before a program can begin full-rate production. The Director of Operational Test and Evaluation, a DOD office independent of the acquisition program, not only approves the adequacy of the test plan and its subsequent evaluation, but also reports to the Secretary of Defense whether the test and evaluation were adequate and whether the test's results confirm that the items are effective and suitable for combat.

By law, appropriations are to be applied only to the objects for which the appropriations were made, except as otherwise provided by law. Research and development appropriations are typically specified by Congress to be used to pay the expenses of basic and applied scientific research, development, test, and evaluation. On the other hand, procurement appropriations are, in general, specified by Congress to be used for the purchase of weapon systems and equipment, that is, production or manufacturing. In the 1950s, Congress established a policy that items being purchased with procurement funds be fully funded in the year that the item is procured. This policy is meant to prevent a program from incrementally funding the purchase of operational systems. According to the Congressional Research Service, "incremental funding fell out of favor because opponents believed it could make the total procurement costs of weapons and equipment more difficult for Congress to understand and track, create a potential for DOD to start procurement of an item without necessarily stating its total cost to Congress, permit one Congress to 'tie the hands' of future Congresses, and increase weapon procurement costs by exposing weapons under construction to uneconomic start-up and stop costs."

Congress continues to enact legislation that improves program transparency. In 2006, Congress added 10 U.S.C. § 2366a, which prohibits programs from entering system development and demonstration until certain certifications are made.
For example, the decision authority for the program must certify that the program has a high likelihood of accomplishing its intended mission and that the program is affordable considering unit cost, total acquisition cost, and the resources available during the years covered by DOD's future years defense program.

As with other government programs, one of the laws affecting MDA decisions is the Antideficiency Act. The fundamental concept of the Antideficiency Act is to ensure that spending does not exceed appropriated funds. The act is one of the major laws through which Congress exercises its constitutional control of the public purse. The fiscal principles underlying the Antideficiency Act are quite simple. Government officials may not make payments, or commit the United States to make payments at some future time, for goods or services unless the available appropriation is sufficient to cover the cost in full. To ensure that it is always in compliance with this law, MDA adjusts its goals and defers work as needed to execute the BMDS within its available budget.

In 2001, DOD conducted extensive missile defense reviews to decide how best to defend the United States, deployed troops, friends, and allies from ballistic missile attacks. The studies determined that DOD needed to find new approaches to acquire and deploy missile defenses. Flexibility was one of the hallmarks of the new approach that DOD chose to implement. One flexibility accorded MDA was the authority to develop the BMDS outside of DOD's normal acquisition cycle by not formally entering the system development and demonstration phase. This effectively enabled MDA to defer application of certain acquisition laws until the agency transfers a fully developed capability to a military service for production, operation, and sustainment—the point at which DOD directed that the BMDS program reenter the acquisition cycle. At that point, basic development and initial fielding would generally be complete.

Because MDA currently does not have to apply many of the oversight requirements for major defense acquisition programs directed by acquisition laws, the BMDS program operates with unusual autonomy. In 2002, the Under Secretary of Defense for Acquisition, Technology, and Logistics delegated to MDA the authority to establish its own baseline and make changes to that baseline without approval outside of MDA. Because it has not formally entered system development and demonstration, MDA can also initiate a block of capability and move forward with its fielding without an independent cost estimate or an independent test of the effectiveness and suitability of assets intended for operational use. The ability to make decisions on its own and proceed without independent verifications reduces decision timelines, making the BMDS program more agile than other DOD programs.

MDA's ability to quickly field a missile defense capability is also enhanced by its authority to field the BMDS before all testing is complete. MDA considers the assets it has fielded to be developmental assets and not the result of the production phase of the acquisition cycle. Because MDA has not advanced the BMDS or its elements into the acquisition cycle, it is continuing to produce and field assets without completing the operational test and evaluation normally required by 10 U.S.C. § 2399 before full-rate production. For example, MDA has acquired and emplaced 14 ground-based interceptors for operational use before both developmental and operational testing is completed.
The agency’s strategy is to continue developmental testing while fielding assets and to also incorporate operational realism into these tests so that the Director of Operational Test and Evaluation can make an operational assessment of the fielded assets’ capability. Because all of MDA’s funding comes from the Research, Development, Test, and Evaluation appropriation account, MDA enjoys greater flexibility in how it can use funds compared to a traditional DOD acquisition program where funding is typically divided into research, development, and evaluation, procurement, and operations and maintenance. This is particularly true of an element. For example, a Block 2006 element like GMD covers a wide range of activities, from research and development on future enhancements to the fabrication of interceptors for operations. If the GMD program runs into problems with one activity, it can defer work on another to cover the cost of the problems. MDA’s flexibility to change goals for each element complements the flexibility in how it uses its funds. After a new block of the BMDS has been presented in the budget, MDA can change the outcomes–in terms of planned delivery of assets and other work activities–that are expected of the block. While this freedom enables MDA to operate within its budget, it decouples the activities actually completed from the activities that were budgeted, making it difficult to assess the value of what is actually accomplished. For example, between 2003 and mid-2005, MDA changed its Block 2004 delivery goals three times, progressively decreasing the number of assets planned for the block when it was initially approved for funding. This trend has continued into Block 2006, with the agency changing its delivery plans once since it presented its initial Block 2006 goals to Congress. MDA is required to report such changes only if MDA’s Director considers the changes significant. In addition to deferring the delivery of assets from one block to another, MDA also has the flexibility to defer other work activities from a current to a future block. This creates a rolling scope, making it difficult to keep track of what an individual block is responsible for delivering. For example, during Block 2004, MDA deferred some planned development, deployment, characterization, and verification activities until Block 2006 so that it could cover contractor budget overruns. MDA is unable to determine exactly how much work was deferred. However, according to a November 2006 report to Congress, MDA found it necessary to defer the work until Block 2006 to make Block 2004 funding available to implement a new GMD test strategy following two GMD flight test failures, resolve quality issues associated with GMD interceptors and its exoatmospheric kill vehicle, and add an FBX-T radar to the initial deployed capability. Agency officials are already anticipating the deferral of work from Block 2006 into Block 2008. In fiscal year 2006, the work of five of the six contractors responsible for elements included in Block 2006 cost more than expected. Given program funding limits, MDA officials told us that they will either have to defer work or request additional funds from Congress during the remaining years of the block. MDA did not increase its fiscal year 2007 budget request; therefore, it is likely that the agency will once again have to defer some planned work into the next block. 
Not only do changes in a block's work plan make it difficult to know what outcomes the program expects to achieve, but the changes also have the potential to affect the BMDS' performance. For example, by decreasing the number of fielded interceptors, MDA reduces the likelihood that it can defeat enemy missiles when multiple threats must be engaged, because fewer interceptors are available. In addition, if activities such as testing and validation are not complete when assets are fielded, the assets may not perform as expected and changes may be needed. This effect of early fielding was seen in Block 2004, when GMD interceptors were fielded before testing was complete. Later tests showed that the interceptors may contain unreliable parts, some of which MDA now plans to replace.

Although acquisition laws governing major defense acquisition programs, as well as DOD acquisition policy, recognize the need for independent program reviews, few such reviews are part of the BMDS program. This has contributed to the difficulty in assessing MDA's progress toward expected outcomes. As described above, major programs are required by law to have an independent cost estimate (performed by the DOD Cost Analysis Improvement Group) for entry into system development and demonstration, as well as production and deployment. According to MDA officials, MDA has so far obtained an independent assessment of only one BMDS element's life-cycle cost estimate—Aegis BMD's estimate for Block 2004. In our opinion, without a full independent cost estimate, MDA has established optimistic block goals that could not be met. This is supported by an MDA spokesman's statement that the agency's optimism in establishing Block 2004 cost and quantity goals contributed to several goal changes. According to MDA officials, the agency did not request a similar independent assessment of its Block 2006 cost goal.

Further, DOD policy calls for a milestone decision authority that has overall responsibility for, but is independent of, the program. Although the Director reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics and keeps the Under Secretary and congressional defense committees informed of MDA decisions, MDA's Director is authorized to make most program decisions without prior approval from a higher-level authority. The Under Secretary of Defense delegated this authority to the Director in a February 2002 memorandum. The Secretary of Defense also appointed MDA's Director as both the BMDS Program Manager and its Acquisition Executive (including the authority to serve as milestone decision authority until an element is transferred out of MDA). As the Acquisition Executive, the Director was given responsibility for establishing programmatic policy and conducting all research and development of the BMDS. This delegation included responsibility for formulating BMDS acquisition strategy, making program commitments and terminations, deciding on affordability trade-offs, and baselining the capability and configuration of blocks and elements.

Because MDA can redefine outcomes, the actual cost of a block cannot be compared with the cost originally estimated. MDA considers the cost of deferred work—which may be the delayed delivery of assets or other work activities—as a cost of the block in which the work is performed, even though the work benefits, and was planned for, a prior block.
Further, MDA does not track the cost of deferred work from one block to the next and, therefore, cannot make adjustments that would match the cost with the block it benefits. For example, in March 2006, we reported that MDA deferred some Block 2004 work until Block 2006 so that it could use the funds appropriated for that work to cover unexpected cost increases caused by poor quality control procedures and by technical problems during development, testing, and production. MDA officials told us that additional funds have been, or will be, requested during Block 2006 to carry out the work. However, the officials could not tell us how much of the Block 2006 budget is attributable to the deferred work. These actions caused Block 2004 cost to be understated and Block 2006 cost to be overstated. In addition, if MDA delays some Block 2006 work until Block 2008, as expected, Block 2006 cost will become more difficult to compare with its original estimate because the cost of the deferred work will no longer count against the block. The Director, MDA, determines whether he reports the cost of work being deferred to future blocks and, so far, has not done so.

The planned and actual unit costs of assets being acquired for operational use are equally hard to determine. Because the BMDS and its elements are a single major defense acquisition program that has not officially entered system development and demonstration, the program is not required to provide the detailed reports to Congress directed by statute. While it is possible to reconstruct planned unit costs from budget documents, the planned unit cost of some assets—for example, GMD interceptors—is not easy to determine because the research and development funds used to buy the interceptors are spread across 3 to 5 budget years. Also, because MDA is not required to report significant increases in unit cost, it is not easy to determine whether an asset's actual cost has increased significantly from its expected cost. For example, we were unable to compare the actual and planned cost of a GMD interceptor. By comparison, the Navy provides more transparency in reporting on the cost of ships, some of which are incrementally funded with procurement funds. When a Navy ship program overruns the cost estimate used to justify the budget, the Navy identifies the additional funding needed to cover the overruns separately from other shipbuilding programs.

Using research and development funds to purchase fielded assets further reduces cost transparency because these dollars are not covered by the full-funding policy for procurement. Therefore, when the program for a 2-year block is first presented in the budget, Congress is not necessarily fully aware of the dimensions and cost of that block. Although a particular block may call for the delivery of a specific number of interceptors, the full cost of those interceptors may not be contained in that block. In addition, incremental funding has the potential to "tie the hands" of future Congresses, which must finish funding assets started in prior years or run the risk of a production stoppage and the increased costs associated with restarting the production line.

During Block 2004, poor quality control procedures that MDA officials attribute to acquisition streamlining and schedule pressures caused the missile defense program to experience test failures and slowed production.
MDA has initiated a number of actions to correct its quality control weaknesses, and those actions have been largely successful. Although MDA continues to identify quality control procedures that need improvement, the number of deficiencies has declined and contractors are responding to MDA's improvement efforts. These efforts include a teaming approach designed to restore the reliability of MDA's suppliers, regular quality inspections to quickly identify and resolve quality problems, and award fees with an increased emphasis on quality assurance. In addition, MDA's attempts to improve quality assurance have attracted the interest of other government agencies and contractors. MDA is leading quality improvement conferences and co-sponsoring a Space Quality Improvement Council.

Officials in MDA's Office of Quality, Safety, and Mission Assurance and in GMD's Program Office attribute the weaknesses in MDA's quality control processes to acquisition streamlining and schedule pressures. According to a former DOD Director of Operational Test and Evaluation, during the early 1990s DOD management shared a common goal of streamlining the acquisition process to reduce the burgeoning costs of new weapons. By streamlining the process, DOD commissions and task forces hoped to drastically cut system development and production time and reduce costs by eliminating management layers, eliminating certain reporting requirements, using more commercial off-the-shelf systems and subsystems, reducing oversight from within as well as from outside DOD, and eliminating perceived duplication of testing. In addition to acquisition streamlining, schedule pressures caused MDA to be less attentive to quality assurance issues. This was particularly true for the GMD element, which was tasked with completing development and producing assets for operational use within 2 years of a Presidential directive to begin fielding an initial missile defense capability. While the GMD program had realized for some time that its quality controls needed to be strengthened, the program's accelerated schedule left little time to address quality problems.

MDA has initiated a number of mechanisms to rectify the quality control weaknesses identified in the BMDS program. For example, as early as 2003, MDA, in concert with industry partners Boeing, Lockheed Martin, Raytheon, and Orbital Sciences, began a teaming approach to restore the reliability of a key supplier. In exchange for allowing the supplier to report to a single customer—MDA—the supplier gave MDA's Office of Quality, Safety, and Mission Assurance authority to make a critical assessment of the supplier's processes. This assessment determined that the supplier's manufacturing processes lacked discipline, its corrective action procedures were ineffective, its technical data package was inadequate, and personnel were not properly trained. The supplier responded by hiring a Quality Assurance Director, five quality assurance professionals, a training manager, and a scheduler. In addition, the supplier installed an electronic problem reporting database, formed new boards—such as a failure review board—established a new configuration management system, and ensured that manufacturing activity was consistent with contract requirements. According to MDA, by 2005, these changes began to produce results. Between March 2004 and September 2005, test failures declined by 43 percent.
In addition, open quality control issues decreased by 64 percent between September 2005 and August 2006, and on-time deliveries increased by 9 percent between March 2005 and August 2006. MDA's teaming approach was expanded in 2006 to another problem supplier, and many systemic solutions are already under way.

MDA also continues to carry out regular contractor quality inspections. For example, during fiscal year 2006, MDA completed quality audits of six contractors and identified a total of 372 deficiencies and observations. As of December 2006, the contractors had closed 157, or 42 percent, of all audit findings. These audits are also producing other signs of quality assurance improvements. For example, after an August 2006 review of Raytheon's production of the last five GMD exoatmospheric kill vehicles, MDA auditors reported less variability in Raytheon's production processes, increasing stability in its statistical process control data, fewer test problem reports and product waivers, compliance with manufacturing "clean room" requirements, and a sustained improvement in product quality. Because of the emphasis placed on the recognition of quality problems, Raytheon is conducting regular inspections independently of MDA to identify problems.

Over the course of 2006, MDA also continued to incorporate MDA Assurance Provisions (MAP) into its prime contracts. The MAP provides MDA methods to measure, verify, and validate mission success through the collection of metrics, risk assessment, technical evaluations, independent assessments, and reviews. Four BMDS elements–BMDS Sensors, C2BMC, KEI, and THAAD–modified their contracts during 2006 to incorporate the MAP. The remaining five BMDS elements have not yet placed the provisions on their contracts, either because a contract is already largely in compliance with the MAP or because of the timing and additional cost of adding the requirements.

MDA also encourages better quality assurance programs and contractors' implementation of best practices through award fee plans. In 2003, three BMDS elements–BMDS Sensors, KEI, and THAAD–revised their contracts to include 25 MAP criteria in their award fee plans. For example, the BMDS Sensors element included system quality, reliability, and configuration control of data products as part of its award fee criteria for its FBX-T contract. Contractors are also bringing their best practices to the table. For example, in an effort to prevent foreign object debris in components under assembly, Raytheon and Orbital Sciences have placed all tools in special tool boxes known as shadow boxes. Raytheon has also incorporated equipment into the production process that handles critical components, removing the possibility that the components will be dropped or mishandled by production personnel.

Because of its quality assurance efforts, contractors and other government agencies have called on MDA to lead quality conferences and sponsor an improvement council. MDA's Office of Quality, Safety, and Mission Assurance was co-sponsor of a conference on quality in the space and defense industry, and the office's Director has also served as panel discussion chair at numerous other conferences. The conferences focus on the safety, reliability, and quality aspects of all industries and agencies involved in defense and space exploration. MDA is also a co-sponsor of the Space Quality Improvement Council, a council established to cooperatively address critical issues in the development, acquisition, and deployment of national security space systems.
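As a simple check, the audit closure rate reported earlier in this section follows directly from the cited counts; the sketch below is illustrative arithmetic only.

```python
# Simple check of the fiscal year 2006 quality audit figures cited above.
findings_identified = 372
findings_closed = 157
print(f"{findings_closed / findings_identified:.0%} closed; "
      f"{findings_identified - findings_closed} findings still open")
# -> 42% closed; 215 findings still open
```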
Contractors are also adopting some MDA methods for improving quality assurance. For example, Raytheon Integrated Defense Systems has adopted the MAP as a performance standard for all of its defense programs.

In a general sense, our assessment of MDA's progress on missile defense is similar to that of previous years: accomplishments have been made and capability has been increased, but costs have grown and the scope of planned work has been reduced. The fielding of additional assets, the ability to put the BMDS on alert status, and the first end-to-end test of GMD were notable accomplishments during fiscal year 2006. On the other hand, it is not easy to answer the question of how well the BMDS is progressing relative to the funds it has received and the goals it has set for those funds. As in previous years, we have found it difficult to reconcile the progress made in Block 2006 with the original cost and scope of the program. The block concept, while a useful construct for harvesting and fielding capability incrementally, is a muddy one for accountability. Although the BMDS is managed within a relatively level budget of about $10 billion a year, the scope of planned work is altered several times each year. Consequently, work travels from one block to another, weakening the connection between the actual cost and scope of work done and the estimated cost and scope of work used to justify budget requests.

Block 2006 is a case in point. Compared with its original budget justification, it now contains unanticipated work from Block 2004 but has deferred some of its own planned work to future blocks. Costs for the THAAD element are no longer being counted in Block 2006, although they were last year. Some developmental elements that will be fielded in later blocks, such as KEI and MKV, are not considered part of Block 2006, while ABL, which is also a developmental element to be fielded in later blocks, is considered part of Block 2006. Establishing planned and actual costs for individual assets is also difficult because MDA's development of the BMDS outside of DOD's acquisition cycle blurs the audit trail. Using research and development funds—funds that are not covered by the full-funding policy—contributes to the difficulty in determining some assets' cost.

None of the foregoing is to suggest that MDA has acted inconsistently with the authorities it has been granted. Indeed, by virtue of its not having formally begun system development and demonstration, coupled with its authority to use research and development funds to manufacture and field assets, MDA has the sanctioned flexibility to manage exactly as it has. It could be argued that without this latitude, the initial capability fielded last year and put on alert this year would not have been possible. Yet the question remains whether this degree of flexibility should be retained for a program that will spend about $10 billion a year for the foreseeable future. It does not seem unreasonable to expect a program of this magnitude to be held to a higher standard of accountability than delivering some capability within budgeted funds. In fact, the program is likely to undergo greater scrutiny as DOD faces increasing pressure to make funding trade-offs among its investment portfolios, ongoing military operations, and recapitalization of its current weapon systems. Within the BMDS, key decisions lie ahead for DOD.
Perhaps the most significant decision in the next 2 years will be to determine what investments should be made in the two boost phase elements—ABL and KEI—under development. This decision would benefit greatly from good data on actual versus expected performance, actual versus expected cost, and independent assessments of both cost and performance.

The recommendations that follow build upon those we made in last year's report on missile defense. In general, those recommendations called for the Secretary of Defense to align individual BMDS elements around a knowledge-based strategy and to determine whether a block approach to fielding was compatible with such a strategy. To increase transparency in the missile defense program, we recommend that the Secretary of Defense:
- Develop a firm cost, schedule, and performance baseline for those elements considered far enough along to be in system development and demonstration, and report against that baseline.
- Propose an approach for those same elements that provides information consistent with the acquisition laws that govern baselines and unit cost reporting, independent cost estimates, and operational test and evaluation for major DOD programs. Such an approach could provide necessary information while preserving the MDA Director's flexibility to make decisions.
- Include in blocks only those elements that will field capabilities during the block period, and develop a firm cost, schedule, and performance baseline for that block capability, including the unit cost of its assets.
- Request and use procurement funds, rather than research, development, test, and evaluation funds, to acquire fielded assets.
- Conduct an independent evaluation of ABL and KEI after key demonstrations, now scheduled for 2008 and 2009, to inform decisions on the future of the two programs.

DOD's comments on our draft report are reprinted in appendix I. DOD partially concurred with our first three recommendations and did not concur with the last two. In partially concurring with the first recommendation, DOD recognized the need for greater program transparency but objected to implementing an element-centric approach to reporting, believing that this would detract from managing the BMDS as a single integrated system. We agree that management of the BMDS as a single, integrated program should be preserved. However, since DOD already requests funding and awards contracts by the individual elements that compose the BMDS, we believe that establishing a baseline for those elements far enough along to be considered in system development and demonstration provides the best basis for transparency of actual performance. This would not change DOD's approach to managing the BMDS, because merely reporting the cost and performance of individual elements would not cause each element to become a major defense acquisition program. DOD stated that MDA intends to modify its current biennial block approach, which is used to define reporting baselines. In making this change, MDA states that it intends to work with both Congress and GAO to ensure that its new approach provides useful information for accountability purposes. At this point, we believe that the information needed to define a reporting baseline for a block would best be derived from individual elements. That said, there is room for discussion on whether elements are the only way to achieve the needed transparency, and we welcome the opportunity to work toward constructive changes.
DOD also partially concurred with our second recommendation that BMDS elements effectively in system development and demonstration provide information consistent with the acquisition laws that govern baselines and unit cost reporting, independent cost estimates, and operational test and evaluation for major programs. DOD did commit to providing additional information to Congress to promote accountability, consistency, and transparency. Nonetheless, DOD remains concerned that having elements, rather than the BMD system, report according to these laws will have a fragmenting effect on the development of an integrated system and put more emphasis on individual programs as though each is a major defense acquisition program. We believe that greater transparency into the BMDS program depends on DOD reporting in the same manner that it requests program funding. This ensures that decision makers can reconcile the expected cost and performance of assets DOD plans to acquire with actual cost and performance. We recognize that MDA does provide Congress with information on cost and testing, but this information is not of the caliber or consistency called for by acquisition laws.

DOD stated that our third recommendation on reporting at the BMDS level appears to be inconsistent with our recommendations on reporting at the element level. The basis for our third recommendation is that a block, which is a construct to describe and manage a defined BMDS-wide capability, must be derived from the capabilities that individual elements can yield. Except for activities like integrated tests that involve multiple elements, the cost, schedule, and performance of the individual assets to be delivered in a block come from the elements. Further, those elements that are not far enough along to deliver assets or capabilities within a particular block should not be considered part of that block. We believe that as MDA works to modify its current biennial block approach, it needs to be clearer and more consistent about what is and is not included in a block and that the cost, schedule, and performance of the specific assets in the block should be derived from the information already generated by the elements.

DOD did not concur with our recommendation that it request and use procurement funds to acquire fielded assets. It noted that the flexibility provided by Research, Development, Test, and Evaluation funding is necessary to quickly develop and acquire new capabilities that can respond to new and unexpected ballistic missile threats. We recognize the need to be able to respond to such threats. However, other DOD programs are also faced with unexpected threats that must be addressed quickly and have found ways to do so while acquiring operational assets with procurement funds. If MDA requires more flexibility than other programs, there should be a reasonable budgetary accommodation available other than funding the entire program with Research, Development, Test, and Evaluation funds. More needs to be done to strike a better balance between flexibility and transparency. Thus, we continue to believe that decision makers should be informed of the full cost of assets at the time DOD is asking for approval to acquire them and that procurement funds are the best way to provide that transparency.

DOD also did not concur with our fifth recommendation to conduct an independent evaluation of ABL and KEI to inform the upcoming decisions on these programs.
It believes that MDA's current integrated development and decision-making approach should continue as planned. We continue to believe that MDA would benefit from an independent evaluation of both ABL and KEI. However, we do believe such an evaluation should be based on the results of the key demonstrations planned for the elements in 2008 and 2009. We have modified our recommendation accordingly.

We are sending copies of this report to the Secretary of Defense and to the Director, MDA. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you, or your staff, have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix IV.

Like other government agencies, MDA acquires the supplies and services needed to fulfill its mission by awarding contracts. Two types of contracts are prevalent at MDA—contracts for support services and contracts for hardware. The contractors that support MDA's mission are commonly known as support contractors, while the contractors that are responsible for developing elements of the Ballistic Missile Defense System (BMDS) are called prime contractors.

According to MDA's manpower database, 8,186 personnel positions—not counting prime contractors—currently support the missile defense program. These positions are filled by government civilian and military employees, contract support employees, employees of federally funded research and development centers (FFRDC), researchers in university and affiliated research centers, and a small number of executives on loan from other organizations. At least 94 percent of the 8,186 positions are paid by MDA through its research and development appropriation. Of this 94 percent, only about 33 percent, or 2,578 positions, are set aside for government civilian personnel. Another 57 percent, or 4,368 positions, are support contractor positions filled by employees of 44 different defense companies. The remaining 10 percent are positions either being filled, or expected to be filled, by employees of FFRDCs and university and affiliated research centers that are on contract or under other types of agreements to perform missile defense tasks. Table 7 illustrates the job functions that contract employees carry out.

MDA officials explained that using support contractors is key to the agency's operation of the BMDS because it allows MDA to obtain necessary personnel and develop weapon systems more quickly. Additionally, the officials told us that this approach is consistent with federal government policy on the use of contractors. MDA officials estimate that while the average cost of one of the agency's government employees is about $140,000 per year, a contract employee costs about $175,000 per year. Table 8 highlights the staffing levels for each BMDS element.

Prime contractors developing elements of the BMDS typically receive most of the funds MDA requests from Congress each fiscal year. The efforts of prime contractors may be obtained through a wide range of contract types. Because MDA requires its prime contractors to perform work with enough uncertainty that its cost cannot be accurately estimated, all of the agency's prime contracts are cost reimbursement arrangements.
Under a cost reimbursement contract, a contractor is paid for reasonable, allowable, and allocable costs incurred in performing the work directed by the government to the extent provided in the contract. The contract includes an estimate of the work's total cost for the purpose of obligating funds and establishes a ceiling cost that the contractor may not exceed without the approval of the contracting officer. Many of the cost reimbursement contracts awarded by MDA include an award fee. Cost-plus-award-fee contracts provide for a fee consisting of a base amount, which may be zero, that is fixed at the inception of the contract and an award amount, based upon a subjective evaluation by the government, that is meant to encourage exceptional performance. The amount of the award fee is determined by the government's assessment of the contractor's performance compared to criteria stated in the contract. This evaluation is conducted at stated intervals during performance, so that the contractor can be periodically informed of the quality of its performance and, if necessary, areas in which improvement is required.

Two of the cost reimbursement contracts shown in table 9—MKV and C2BMC—differ somewhat from other elements' cost reimbursement contracts. The MKV prime contract is an indefinite delivery/indefinite quantity cost-reimbursement arrangement. This type of contract allows the government to direct work through a series of task orders. Such a contract does not procure or specify a firm quantity of services (other than a minimum or maximum quantity). This contracting approach permits MDA to order services as they are needed after requirements materialize and provides the government with flexibility because the tasks can be aligned commensurate with available funding. Since the MKV element is relatively new to the BMDS, its funding is less predictable than other elements', and the ability to decrease or increase funding on the contract each year is important to effectively manage the program.

The C2BMC element operates under an Other Transaction Agreement that is not subject to many procurement laws and regulations. However, even though an Other Transaction Agreement is not required to include all of the standard terms and conditions meant to safeguard the government, the C2BMC agreement was written to include similar clauses and provisions. We found no evidence at this time that the C2BMC agreement does not adequately protect MDA's interests. MDA chose the Other Transaction Agreement to facilitate a collaborative relationship between industry, government, federally funded research and development centers, and university research centers. Contract officials told us that a contract awarded under the Federal Acquisition Regulation is normally regarded as an arms-length transaction in which the government gives the contractor a task that the contractor performs autonomously. While an important purpose of an Other Transaction Agreement is to broaden DOD's technology and industrial base by allowing the development and use of instruments that reduce barriers to participation in defense research by commercial firms that traditionally have not done business with the government, the agreements' value in encouraging more collaborative environments is also recognized. Table 9 outlines the contractual instruments that MDA uses to procure the services of its prime contractors.
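To illustrate the fee arithmetic described above, the following is a minimal sketch in Python. The dollar amounts, fee pool, and evaluation score are hypothetical values chosen for illustration; actual award fee determinations follow contract-specific evaluation plans and criteria.

```python
def cpaf_payment(allowable_costs, base_fee, award_fee_pool, score):
    """Illustrative cost-plus-award-fee computation for one evaluation period.

    allowable_costs: reasonable, allowable, and allocable costs incurred
    base_fee: fixed fee set at contract inception (may be zero)
    award_fee_pool: maximum award fee available for the period
    score: government's evaluation of performance, from 0.0 to 1.0
    """
    earned_award_fee = award_fee_pool * score
    return allowable_costs + base_fee + earned_award_fee

# Hypothetical period: $100 million in allowable costs, no base fee,
# a $5 million award fee pool, and an evaluation score of 85 percent.
total = cpaf_payment(100_000_000, 0, 5_000_000, 0.85)
print(f"Payment for the period: ${total:,.0f}")  # $104,250,000
```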
Excluding the C2BMC and MKV elements, MDA budgeted approximately $3 billion for its prime contractors to execute planned work during fiscal year 2006. To determine if these contractors are executing the work planned within the funds and time budgeted, each BMDS program office requires its prime contractor to provide monthly reports detailing cost and schedule performance. In these reports, which are known as Contract Performance Reports, the prime contractor makes comparisons that inform the program as to whether the contractor is completing work at the cost budgeted and whether the work scheduled is being completed on time. If the contractor does not use all funds budgeted or completes more work than planned, the report shows positive cost and/or schedule variances. Similarly, if the contractor uses more money than planned or cannot complete all of the work scheduled, the report shows negative cost and/or schedule variances. A contractor can also have mixed performance. That is, the contractor may spend more money than planned (a negative cost variance) but complete more work than scheduled (a positive schedule variance). Using data from Contract Performance Reports, a program manager can assess trends in cost and schedule performance, information that is useful because trends tend to persist. Studies have shown that once a contract is 15 percent complete, performance metrics are indicative of the contract's final outcome.

We used contract performance report data to assess the fiscal year 2006 cost and schedule performance of prime contractors for seven of the nine BMDS elements being developed by MDA. When possible, we also predicted the likely cost of each prime contract at completion. Our predictions of final contract cost are based on the assumption that the contractor will continue to perform in the future as it has in the past. An assessment of each element is provided below.

The Aegis BMD program has awarded a prime contract for each of its two major components—the Aegis BMD Weapon System and the Standard Missile-3. During fiscal year 2006, the work of both prime contractors cost a little more than expected, but only the weapon system contractor was slightly behind schedule. Even though the weapon system contractor was unable to perform fiscal year 2006 work at the planned cost, its cumulative cost performance remains positive because of good performance in prior years. At year's end, the weapon system contract had a cumulative favorable cost variance of $0.1 million, but an unfavorable cumulative schedule variance of $0.8 million. As shown in figure 1, the contractor's cost and schedule performance fluctuated significantly throughout the year.

The decline in the Aegis BMD Weapon System contractor's cost performance began shortly after the contractor adjusted its cost and schedule baseline in September 2005. At that time, the contractor corrected its baseline to account for a December 2004 DOD budget cut. However, it did not make adjustments to the baseline to incorporate new work that the government directed. This caused the contractor's cost performance to decline significantly because although the cost of the new effort was being reported, the baseline included no budget for the work. Recognizing that the contract baseline still needed to be replanned, the Director issued approval to restructure the program and rebaseline the contract in December 2005. To accommodate the work added to the contract, MDA and the contractor realigned software deliveries for Block 2006.
The contractor completed the rebaselining effort in April 2006, and since then it has performed within budgeted cost and schedule. Based on the contractor's fiscal year 2006 cost performance, we estimate that at completion the contract may cost from $0.1 to $4.7 million more than anticipated.

For fiscal year 2006, the Standard Missile-3 contractor incurred an unfavorable cost variance of $7.8 million and a favorable schedule variance of $0.7 million. Even though the contractor was unable to complete fiscal year 2006 work within the funds budgeted, it ended the year with a cumulative positive cost variance of $3.1 million. The cumulative positive cost variance was the result of the contractor performing 2005 work at $10.9 million less than budgeted. In addition, although the contractor performed work ahead of schedule in fiscal year 2006, it was unable to overcome a negative schedule variance of $9.6 million created in 2005 by delayed hardware deliveries and delayed test events. The contractor ended fiscal year 2006 with a cumulative $8.9 million negative schedule variance. Figure 2 shows cumulative variances at the beginning of fiscal year 2006, along with a depiction of the contractor's cost and schedule performance throughout the fiscal year.

The unfavorable cost variance for fiscal year 2006 was caused by performance issues associated with the third stage rocket motor, the kinetic warhead, and the missile's guidance system. In addition, production costs associated with the Solid Divert and Attitude Control System were higher than anticipated. If the contractor continues to perform as it did in fiscal year 2006, we estimate that at completion the contract could cost from $1.9 million less than expected to $2.7 million more than expected.

Our analysis of ABL's Contract Performance Reports indicates that the prime contractor's cost and schedule performance continued to decline during fiscal year 2006. The contractor overran its fiscal year 2006 budget by $54.8 million and did not perform $26.4 million of work on schedule. By September 2006, this resulted in an unfavorable cumulative cost variance of $77.9 million and an unfavorable cumulative schedule variance of $50 million. Figure 3 shows the decline in cost and schedule performance for the ABL prime contractor throughout fiscal year 2006.

During the fiscal year, the ABL contractor needed additional time and money to solve technical challenges associated with the element's Beam Control/Fire Control component. Software, integration, and testing difficulties caused significant delays with the component. Software problems were caused by the incorporation of numerous changes, basic logic errors, and differences between the environment of the software development laboratory and the environment aboard the aircraft. Integration and testing of the complex system and hardware failures also contributed to the delays. Together, according to ABL's program manager, these problems caused the contractor to experience about a 3 1/2-month schedule delay that, in turn, delays the program's lethality demonstration from 2008 to 2009. Also, if the contractor's cost performance continues to decline as it did in fiscal year 2006, we estimate that at completion the contract could overrun its budget by about $112.1 million to $248.3 million.
We were unable to fully evaluate the contractor's performance for the C2BMC program because the contractor did not report all data required to conduct earned value analysis for 7 months of the fiscal year. During fiscal year 2006, the C2BMC contractor ended the Block 2004 increment, or Part 3, of its Other Transaction Agreement and began work on its Block 2006 program of work, referred to as Part 4 of the agreement. The contractor completed its Block 2004 program of work (Part 3) in December 2005 and was awarded the Block 2006 increment (Part 4) on December 28, 2005. However, budget cuts prompted the program to reduce the C2BMC enhancements planned for Block 2006 and revise its agreement with the contractor. Shortly afterward, the program received additional funds, which led to a renegotiation of the Part 4 agreement. The new scope of work included enhancements that could not be completed within available funding. In March 2006, the program began to replan its Block 2006 increment of work (Part 4) and suspended earned value management reporting. During the replan, which occurred throughout most of fiscal year 2006, the contractor reported only actual cost data in lieu of comparing actual costs to budgeted costs. The cost of the revised agreement on the Block 2006 increment of work was negotiated in October 2006.

The GMD prime contractor's cost performance continued to decline during fiscal year 2006, but its fiscal year schedule performance improved. By September 2006, the cumulative cost of all work completed was $1.06 billion more than expected, and in fiscal year 2006 alone, work cost about $347 million more than budgeted. The contractor was able to complete $90.2 million of fiscal year 2006 work ahead of schedule, but the cumulative schedule variance continued to be negative at $137.8 million. Figure 4 depicts the cost and schedule performance for the GMD contractor during fiscal year 2006. Based on its fiscal year 2006 performance, the GMD contractor could overrun the total budgeted cost of the contract by about $1.5 to $1.9 billion.

The GMD program recently finished rebaselining its contract to reflect a significant program realignment intended to reduce program risk and to execute the program within available funding. While the new baseline was being implemented, earned value metrics, according to program officials, were significantly distorted because progress was measured against a plan of work that the program was no longer following. The contractor developed a new contract baseline that incorporates the program's new scope, schedule, and budget. By the end of September 2006, phase one of the new baseline, covering fiscal year 2006-2007 efforts, had been implemented and validated through integrated baseline reviews of the prime contractor and its major subcontractors. Implementation of the phase two baseline, covering the remaining contract effort, was completed in October 2006, with the final integrated baseline reviews of the prime and major subcontractors completed by mid-December 2006.

Based on the data provided by the contractor during fiscal year 2006, technical and quality issues with the exoatmospheric kill vehicle (EKV) are the leading contributors to cost overruns and schedule slips for the GMD program. In fiscal year 2006, EKV-related work cost $135.2 million more than budgeted. Quality problems identified after faulty parts had been incorporated into components required rework and forced the subcontractor to increase screening tests to identify defective parts.
Development issues with two boosters being developed to carry the exoatmospheric kill vehicles into space also increased costs during fiscal year 2006. The element's Orbital Boost Vehicle experienced cost growth totaling $15.0 million, while the Boost Vehicle+ booster experienced growth of $74.1 million. The Orbital Boost Vehicle's cost grew as more program management, systems engineering, and production support were required to work an extended delivery schedule. The Boost Vehicle+ contractor incurred additional costs as a result of its efforts to redesign the booster's motors. For example, the contractor spent additional time preparing drawings and providing technical oversight of suppliers. The contractor also experienced cost growth as it readied the Sea-based X-Band radar for deployment. Maintenance, repair, and certification problems cost more than expected. In addition to making changes that an independent review team suggested were needed before the radar was made operational, the contractor had to repair an unexpected ballast leak, requiring the installation of hydraulic valves and other engineering changes.

GMD's cumulative negative schedule variance is primarily caused by a subcontractor needing more time than planned to manufacture exoatmospheric kill vehicles. In addition, the prime contractor delayed planned tests because test interceptors were being produced at a slower rate than planned. According to program officials, variances improved during fiscal year 2006 as the subcontractor delivered components on schedule.

In July 2005, the KEI program modified its prime contract to require that the KEI element be capable of intercepting enemy missiles in the midcourse of their flight. Consequently, the program is rebaselining its prime contract to better align its cost and schedule objectives with the new work content. During fiscal year 2006, the contractor's work cost approximately $0.6 million less than expected and the contractor completed about $0.6 million of work ahead of schedule. Cumulatively, the contractor's cost performance has been positive, with all work to date being performed for $3.6 million less than budgeted. However, by year's end, the cumulative schedule variance was a negative $5.3 million. We cannot estimate whether the total contract can be completed within budgeted cost because the contract is only 6 percent complete, and trends cannot be developed until at least 15 percent of the contract is completed. Figure 5 highlights the contractor's performance during fiscal year 2006.

The KEI prime contractor was able to perform within its budgeted costs during fiscal year 2006 as a result of its efficient use of test resources. Although the contractor improved its negative schedule variance over the course of the year, its cumulative schedule variance remains unfavorable because requirements changes have delayed the development of the element's design and of manufacturing processes. Schedule delays caused the program to postpone its element-level System Design Review, originally scheduled for July 2007. However, the contractor asserts that there is no impact on the booster flight test currently scheduled for fiscal year 2008.

Our analysis of the performance of the contractor developing the MKV element was limited because MDA suspended contract performance reporting in February 2006 as the program transitioned from an advanced technology development program to a system development program. The transition prompted MKV to establish a new contract baseline.
Although the contractor could begin reporting once the new baseline is in place, it is not issuing Contract Performance Reports until an Integrated Baseline Review is completed. Until that time, the contractor is measuring its progress against an integrated master schedule.

As of September 2006, the Sensors element's contractor had underrun its fiscal year 2006 budget by $3.8 million and had completed $5.4 million of scheduled work ahead of schedule. Considering prior years' performance, the contractor is performing under budget, with a favorable cumulative cost variance of $20.2 million, and ahead of schedule, with a favorable cumulative schedule variance of $26.6 million. Judging from the contractor's cost and schedule performance in fiscal year 2006, we estimate that at the contract's completion, the contractor will underrun the budgeted cost of the contract by between $26.3 million and $44.9 million. Figure 6 shows the favorable trend in FBX-T performance during fiscal year 2006. According to program officials, the cumulative favorable cost variance is driven by reduced cost in radar hardware and manufacturing created by machine process improvements and staffing efficiencies. The favorable cumulative schedule variance primarily results from a positive $17 million cumulative schedule variance brought forward from fiscal year 2005 that was created when the contractor began manufacturing radars 2 through 4 ahead of schedule.

The STSS contractor's cost and schedule performance continued to degrade during fiscal year 2006. During the fiscal year, the contractor overran budgeted costs by about $66.8 million and was unable to complete $84.1 million of work as scheduled. Combining the contractor's performance during fiscal year 2006 with its performance in prior years, the contract has a cumulative unfavorable cost variance of approximately $163.7 million and a cumulative negative schedule variance of $104.4 million. If the contractor's performance continues to decline, the contract could exceed its budgeted cost at completion by $567.3 million to $1.4 billion. Figure 7 depicts the cumulative cost and schedule performance of the STSS prime contractor.

Quality issues at the payload subcontractor and technical difficulties encountered by the prime contractor during payload integration and testing contributed to the STSS element's cumulative unfavorable cost and schedule variances. The first satellite's payload experienced hardware failures when tested in a vacuum and at cold temperatures, slowing integration with the first satellite. Integration issues were also discovered as the payload was tested at successively higher levels of integration. According to program officials, the prime contractor tightened its inspection and oversight of the subcontractor responsible for integrating and testing the satellite payloads. Also, a re-education effort was undertaken to ensure that all personnel on the program knew and understood program instructions. Although the prime contractor continued to experience negative variances during the fiscal year, the subcontractor's performance on the second payload improved as a result of these added steps. However, the degradation of the prime contractor's performance offset the improved performance of the subcontractor.

During fiscal year 2006, the THAAD contractor expended more money and time than budgeted to accomplish planned work.
The contractor incurred a negative cost variance of $87.9 million for the fiscal year, which increased the cumulative negative cost variance to $104.2 million. Similarly, the contractor did not complete $37.9 million of work scheduled for fiscal year 2006 on time. However, because the contractor completed prior years' work ahead of schedule, the cumulative negative schedule variance was $28 million. Based on fiscal year 2006 performance, we estimate that at completion the contract could exceed its budgeted cost by between $134.7 million and $320.2 million.

The THAAD prime contractor's negative cost variance for the fiscal year can be attributed to the increased cost of missile manufacturing, redesigns, and rework, as well as launcher hardware design, integration difficulties, and software problems. However, the contractor is performing well in regard to the radar portion of the contract, which is offsetting a portion of the negative cost variance. The program's negative schedule variance is largely driven by the missile, the launcher, and systems tests. The negative missile variance is mainly caused by problems with the Divert Attitude Control System and delays in activation of a test facility.

To examine the progress MDA made in fiscal year 2006 toward its Block 2006 goals, we assessed the efforts of individual programs, such as the GMD program, that are developing BMDS elements under the management of MDA. The elements included in our review collectively accounted for 72 percent of MDA's fiscal year 2006 research and development budget request. We evaluated each element's progress in fiscal year 2006 toward Block 2006 schedule, testing, performance, and cost goals. In making this comparison, we examined System Element Reviews, test and production schedules, test reports, and MDA briefing charts. We developed data collection instruments, which were submitted to MDA and each element program office, to gather detailed information on completed program activities, including tests, prime contracts, and estimates of element performance. In addition, we visited an operational site at Vandenberg Air Force Base, California, and MDA contractor facilities, including Orbital Sciences Corporation in Chandler, Arizona; Raytheon in Tucson, Arizona; and Lockheed Martin in Sunnyvale, California. To understand performance issues, we talked with officials from MDA's Systems Engineering and Integration Directorate. We also discussed fiscal year 2006 progress and performance with officials in MDA's Agency Operations Office and each element program office, as well as the office of DOD's Director, Operational Test and Evaluation; DOD's office of Program Analysis and Evaluation; and DOD's Operational Test Agency.

To assess each element's progress toward its cost goals, we reviewed Contract Performance Reports and, when available, the Defense Contract Management Agency's analyses of these reports. We also interviewed officials from the Defense Contract Management Agency. We applied established earned value management techniques to data captured in Contract Performance Reports to determine trends and used established earned value management formulas to project the likely costs of prime contracts at completion. We reviewed each element's prime contract and also examined fiscal year 2006 award fee plans and award fee letters.
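The earned value techniques referred to above rest on three standard quantities: the budgeted cost of work scheduled (BCWS), the budgeted cost of work performed (BCWP), and the actual cost of work performed (ACWP). The following is a minimal sketch in Python of the standard formulas for variances, performance indices, and a range of estimates at completion; the numbers are hypothetical, and the sketch illustrates the general technique rather than reproducing our analysis of any specific contract.

```python
def evm_metrics(bcws, bcwp, acwp, bac):
    """Standard earned value formulas (all dollar inputs in millions).

    bcws: budgeted cost of work scheduled
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    bac:  budget at completion for the contract
    """
    cv = bcwp - acwp   # cost variance (negative means an overrun)
    sv = bcwp - bcws   # schedule variance (negative means behind schedule)
    cpi = bcwp / acwp  # cost performance index
    spi = bcwp / bcws  # schedule performance index
    # Range of estimates at completion: cost efficiency alone gives the
    # optimistic figure; combined cost and schedule efficiency gives the
    # pessimistic figure.
    eac_low = acwp + (bac - bcwp) / cpi
    eac_high = acwp + (bac - bcwp) / (cpi * spi)
    return cv, sv, cpi, spi, eac_low, eac_high

# Hypothetical contract: $400M of work earned against $420M planned,
# at an actual cost of $450M, on a $1,000M budget at completion.
cv, sv, cpi, spi, lo, hi = evm_metrics(420, 400, 450, 1000)
print(f"CV {cv}M, SV {sv}M, EAC range {lo:.0f}M to {hi:.0f}M")
```

Projections of this kind tend to stabilize once a contract is at least 15 percent complete, which is why we did not project a completion cost for the KEI contract at only 6 percent complete.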
In assessing MDA’s flexibility, transparency, and accountability, we interviewed officials from the Office of the Under Secretary of Defense’s Office for Acquisition, Technology, and Logistics. We also examined Government Auditing Standards, a Congressional Research Service report, U.S. Code Title 10, DOD acquisition system policy, and the Statement of Federal Financial Accounting Standards Number 4. To determine the progress MDA has made in ensuring quality, we talked with officials from MDA’s Office of Safety, Quality, and Mission Assurance. We also held discussions with MDA’s Office of Agency Operations, and discussed quality issues at contractor facilities including Orbital Sciences Corporation in Chandler, Arizona; Raytheon in Tucson, Arizona; and Lockheed Martin in Sunnyvale, California. To ensure that MDA-generated data used in our assessment are reliable, we evaluated the agency’s management control processes. We discussed these processes with MDA senior management. In addition, we confirmed the accuracy of MDA-generated data with multiple sources within MDA and, when possible, with independent experts. To assess the validity and reliability of prime contractors’ earned value management systems and reports, we interviewed officials and analyzed audit reports prepared by the Defense Contract Audit Agency. Finally, we assessed MDA’s internal accounting and administrative management controls by reviewing MDA’s Federal Manager’s Financial Integrity Report for Fiscal Years 2003, 2004, 2005, and 2006. Our work was performed primarily at MDA headquarters in Arlington, Virginia. At this location, we met with officials from the Aegis Ballistic Missile Defense Program Office; Airborne Laser Program Office; Command, Control, Battle Management, and Communications Program Office; Multiple Kill Vehicle Program Office; MDA’s Agency Operations Office; MDA’s Office of Quality, Safety, and Mission Assurance; DOD’s office of the Director, Operational Test and Evaluation; DOD’s office of Program Analysis and Evaluation; and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. We held a teleconference with officials from DOD’s Operational Test Agency, also in Arlington, Virginia. In addition, we met with officials in Huntsville, Alabama, including officials from the Ground-based Midcourse Defense Program Office, the Terminal High Altitude Area Defense Project Office, the Kinetic Energy Interceptors Program Office, and the Defense Contract Management Agency. We conducted our review from June 2006 through March 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Barbara Haynes, Assistant Director; LaTonya Miller; Ivy Hubler; Steven Stern; Meredith Allen; Sigrid McGinty; Tony Beckham; and Adam Vodraska made key contributions to this report. | Over the next 5 years, the Missile Defense Agency (MDA) expects to invest $49 billion in the BMD system's development and fielding. MDA's strategy is to field new capabilities in 2-year blocks. In January 2006, MDA initiated its second block--Block 2006--to protect against attacks from North Korea and the Middle East. Congress requires GAO to assess MDA's progress annually. This year's report addresses MDA's progress during fiscal year 2006 and follows up on program oversight issues and the current status of MDA's quality assurance program. 
GAO assessed the progress of each element being developed by MDA, examined acquisition laws applicable to major acquisition programs, and reviewed the impact of implemented quality initiatives.

During fiscal year 2006, MDA fielded additional assets for the Ballistic Missile Defense System (BMDS), enhanced the capability of some assets, and realized several noteworthy testing achievements. For example, the Ground-based Midcourse Defense (GMD) element successfully conducted its first end-to-end test of one engagement scenario, the element's first successful intercept test since 2002. However, MDA will not meet its original Block 2006 cost, fielding, or performance goals because the agency has revised those goals. In March 2006, MDA: reduced its goal for fielded assets to provide funds for technical problems and new and increased operations and sustainment requirements; increased its cost goal by about $1 billion--from $19.3 to $20.3 billion; and reduced its performance goal commensurate with the reduction of assets. MDA may also reduce the scope of the block further by deferring other work until a future block because four elements incurred about $478 million in fiscal year 2006 budget overruns. With the possible exception of GMD interceptors, MDA is generally on track to meet its revised quantity goals. But the deferral of work both into and out of Block 2006, along with inconsistent reporting of costs by some BMDS elements, makes the actual cost of Block 2006 difficult to determine. In addition, GAO cannot assess whether the block will meet its revised performance goals until MDA's models and simulations are anchored by sufficient flight tests to have confidence that predictions of performance are reliable.

Because MDA has not entered the Department of Defense (DOD) acquisition cycle, it is not yet required to apply certain laws intended to hold major defense acquisition programs accountable for their planned outcomes and cost, give decision makers a means to conduct oversight, and ensure some level of independent program review. MDA is more agile in its decision-making because it does not have to wait for outside reviews or obtain higher-level approvals of its goals or changes to those goals. Because MDA can revise its baseline, it has the ability to field fewer assets than planned, defer work to a future block, and increase planned cost. All of this makes it hard to reconcile cost and outcomes against original goals and to determine the value of the work accomplished. Also, using research and development funds to purchase operational assets allows costs to be spread over 2 or more years, which makes costs harder to track and commits future budgets.

MDA continues to identify quality assurance weaknesses, but the agency's corrective measures are beginning to produce results. Quality deficiencies are declining as MDA implements corrective actions, such as a teaming approach, designed to restore the reliability of key suppliers.
According to AAR, hazardous materials comprise about 8 percent of commodities shipped by rail in North America—about 2.35 million of the 29.4 million annual carloads shipped in 2015. According to the most recent Bureau of Transportation Statistics data available (2012), railroads ship about 4 percent of the hazardous materials in the United States by tonnage, but railroad shipments account for about 28 percent of the distance traveled by hazardous materials.

The freight railroad industry is dominated by the seven Class I railroads, which transport the majority of freight—including hazardous materials—in freight containers, portable tanks, and other types of rail cars, including tank cars, across a network of 140,000 miles of track. In addition, numerous Class II and hundreds of Class III railroads have essential roles in moving freight, typically linking rural communities to the larger railroad network. Often providing "first mile" and "last mile" movements, these smaller railroads, taken together, operate on 50,000 miles of track, or nearly 40 percent of the national railroad network, and originate or terminate one of every four cars moving on the national system.

PHMSA, through its Office of Hazardous Materials Safety, regulates shippers and railroads transporting hazardous materials by rail and other modes. One way PHMSA fulfills this mission is through the promulgation of the HMR for the safe transport of hazardous materials. These regulations pertain to the classifying, handling, and packaging of shipments of hazardous materials, including rail shipments, and set forth seven requirements that emergency response information must meet for each hazardous material being shipped. These are:

1. The basic description and technical name of the hazardous material.
2. Immediate hazards to health.
3. Risks of fire or explosion.
4. Immediate precautions to be taken in the event of an accident.
5. Immediate methods for handling fires.
6. Initial methods for handling spills or leaks in the absence of fire.
7. Preliminary first aid measures.

The HMR also require railroads to have a document, often referred to as the train consist, that identifies basic information about the position in the train of each rail car containing hazardous materials. The consist also typically includes information on the train's contents, including basic descriptions of the hazardous materials transported, and their destinations, and may include supplemental emergency response information, such as details on how to respond to releases of specific hazardous materials.

FRA provides regulatory oversight for passenger and freight rail, issuing and enforcing safety regulations through its Office of Railroad Safety. FRA enforces the HMR and its own regulations through inspections and audits by FRA officials, including about 400 federal safety inspectors, and by state partners in some states. For example, according to DOT officials, FRA conducts inspections to ensure that railroads carry the required emergency response information mentioned above, as well as an emergency response telephone number, in train documentation and conduct and keep records of required general-awareness and function-specific hazardous material training for train crews.

When a rail accident occurs, local emergency responders—police, emergency medical technicians, and firefighters—and railroad train crews are typically first on the scene of, and often provide the initial response to, a rail accident involving hazardous materials.
For example, local and sometimes regional officials may be responsible for advising the public on taking shelter-in-place actions or conducting evacuations of affected populations. In addition, assuming the crews are not affected by an accident, railroad train crews are expected to provide local emergency responders with information about the position, type, and quantity of hazardous materials on the train, as well as written emergency-response and contact information for the specific commodities (see fig. 1). The HMR also require railroads to provide immediate notice of certain hazardous materials accidents to the National Response Center.

The ERG, published every 4 years by PHMSA, is a 400-page document that contains emergency response information for thousands of hazardous materials and for all modes of transportation. It is intended to help first responders identify the hazardous materials involved in an accident through a table of markings, labels, and placards; understand the specific risks associated with those materials; protect themselves; and follow procedures for containing the accident as quickly and safely as possible. The ERG is organized into four color-coded sections to help users navigate the document. For example, the orange section of the ERG divides hazardous materials into 63 categories—such as flammable liquids-toxic, flammable gases, and oxidizers—with an individual guide for each that provides information on the types of potential hazards each category poses, including health, fire, or explosion hazards. The green section provides specific information, such as initial isolation and protective action distances for small or large spills occurring during the day or night, for hazardous materials that are considered to be a toxic inhalation hazard (see fig. 2).

To assist railroads in complying with the HMR, AAR, with input from railroads, develops and makes available to all subscribing railroads the United States Hazardous Materials Instructions for Rail, which provides general guidelines to the train crew on handling hazardous material shipments or incidents safely and efficiently and in accordance with local, state, and federal regulations. For example, this document provides information on required emergency response information, how the train crew and emergency responders are to interact, and what to do when a fire or vapor cloud is visible. This document also recommends that train crews carry the ERG.

Selected railroads typically carry two sources of emergency response information—the train documents and the ERG—to meet the emergency response information requirements in federal regulations. Our review of selected train documents found that they always contained the position and contents of rail cars and the basic descriptions of hazardous materials on board the train. In addition, our analysis showed that the train documents sometimes included supplemental emergency response information. Fifteen of the 18 railroads we spoke with told us that they use AAR's United States Hazardous Materials Instructions for Rail as their guidance for meeting train documentation requirements, including emergency response information. According to the train crews' unions, the ERG and the train documents are kept in the locomotive of the train by the train crew, usually a conductor and an engineer.
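As an illustration of the kinds of data elements described above that a train consist carries for each rail car, the following is a minimal sketch in Python. The field names and sample values are our own hypothetical shorthand, not a format prescribed by the HMR or used by any particular railroad.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsistEntry:
    """One rail car's entry in a train consist (illustrative only)."""
    position_in_train: int               # where the car sits in the train
    car_id: str                          # reporting mark and car number
    destination: str
    un_id_number: Optional[str]          # e.g., "UN1075" for liquefied petroleum gases
    proper_shipping_name: Optional[str]  # basic description of the hazmat, if any
    emergency_phone: Optional[str]       # shipper-provided emergency response number
    supplemental_response_info: Optional[str] = None  # optional extra guidance

# A hypothetical entry for a tank car of liquefied petroleum gas in position 12.
car = ConsistEntry(12, "ABCX 12345", "Anytown, USA", "UN1075",
                   "Petroleum gases, liquefied", "800-555-0199")
```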
According to FRA and PHMSA officials, the ERG's use is not required by regulation, but it is viewed by the rail industry as a national standard for emergency response information requirements. All railroads that we interviewed told us that the ERG is carried aboard their trains as a source of emergency response information.

For shipments of hazardous materials, the HMR require the train crew to carry documents with specific information about the hazardous materials on board. Our review of selected train documents showed that the selected railroads included this information in their train documents. Additionally, most of the selected railroads included the information in their train consist, which identifies the position in the train of each rail car and includes other information about the rail car, such as its contents and destination. This information includes a basic description of each hazardous material being transported on that train, including the identification number and proper shipping name, as well as an emergency response telephone number, which is provided by the shipper of the regulated hazardous material (see fig. 3). This telephone number is required to appear on the shipping documents carried by the train crew for the transportation of the material. This basic description of the hazardous material the train is transporting meets the first of the seven emergency-response information requirements in the HMR.

Our review revealed that some railroads also include supplemental emergency response information for each hazardous material on the train at the end of the train consist or in a separate document. According to AAR, it provides this information to some railroads from its Hazardous Materials Emergency Response Database, which AAR develops and maintains. Six of the 7 Class I railroads and 5 of the 11 selected Class II and III railroads included this supplemental information in their trains' documents. Our analysis showed that the amount and content of the supplemental emergency response information varied depending on the number and type of hazardous materials being transported on a train. For each hazardous material on the train, the information can include 5 to 10 paragraphs, covering 1 to 2 pages. Supplemental emergency response information may include information on how to handle fires; precautions to be taken in the event of an accident; first aid responses; or how to handle air, water, or land spills for that particular hazardous material (rather than groups of hazardous materials, as with the ERG). AAR told us the railroads carry this information because, prior to the development of the ERG, it was the only source of emergency response information carried on trains. However, as discussed later, AAR plans to discontinue the use of this database because, among other reasons, new sources of information that contain this type of information, along with the ERG, have become available to emergency responders.

According to the four emergency response associations we spoke to, when responding to a rail accident involving hazardous materials, emergency responders primarily rely on information from the train documents and the ERG during the first 30 minutes. These associations and two local responders we spoke to told us that responders will want to immediately learn what hazardous materials are on the train and their exact location. There are a couple of ways a responder might begin to identify what is on the train.
According to all selected responders, if the train crew is located quickly, responders would use the train documents to identify and locate the hazardous materials on the train. If the train crew is incapacitated or cannot be found right away, responders could use placards, labels, or markings on the train to identify the hazardous materials, according to one emergency response association. According to four emergency response associations, responders may then consult the ERG to gather more information about the hazardous materials. PHMSA, the four emergency response associations, and one local responder told us that the ERG is the go-to source for first responders during the first 30 minutes, or initial phase, of an accident. In addition, two selected responders told us that the train documents are the best source for the most updated list of the hazardous materials on the train and their locations.

According to officials from two emergency response associations, emergency response should be thought of in terms of an accident timeline. According to these officials, the goal of emergency responders is to obtain more specific and detailed information as time goes on, beyond the first 30 minutes. Over the course of managing an accident, emergency responders work to turn unknowns into knowns. According to four emergency response associations and two local responders we interviewed, as the incident timeline matures, a responder should be seeking more comprehensive sources of information on the hazardous materials involved in the incident (see fig. 4). One emergency response association told us that a responder might consult CHEMTREC, the Wireless Information System for Emergency Responders (WISER) application, the National Institute for Occupational Safety and Health (NIOSH) Pocket Guide to Chemical Hazards, or hotlines for the hazardous materials shippers themselves to obtain more specific information on chemical properties or tactical information.

Below is an example of a sequence of actions a responder might take in the event of a hazardous materials accident, according to our analysis and interviews:

A responder must first identify the hazardous material involved and its location. This could occur using the train documents from the train crew if they are located quickly or, if the train crew is incapacitated or cannot be found right away, using placards, labels, or markings on the train.

Next, the responder determines initial response actions. Using the ERG, the responder might locate the material and determine which of the 63 guides (orange section) applies. These guides provide the basic hazardous material information that a responder might want to know immediately, such as evacuation distances, risks of fire or explosions, potential health hazards, or protective clothing to wear. If the hazardous material is a toxic inhalation hazard, the ERG would direct a responder to its green section to gather additional information on isolation and protective action distances for small and large spills during the day or night.

After locating the train crew and the train documents, a responder might also consult the supplemental emergency-response information in the train's documents for specifics about the hazardous materials involved in the accident.

The new AskRail app, developed by AAR with data from all the Class I railroads, is another tool that provides first responders immediate access to information about the hazardous materials on the train.
It provides access to real-time train consist information and corresponds directly to the emergency response information in the ERG associated with each hazardous material on the train. Later, after the material has been identified and initial response actions have been taken, a responder could consult previously mentioned sources such as WISER or CHEMTREC about the reactivity of the chemical, suggested environmental response measures, or suggested first aid measures for that particular hazardous material rather than a group of hazardous materials.

The interaction between the train crew and emergency responders after a rail accident is important because it is the train crew who must provide responders with the train documents that identify the rail car order, the contents of the rail cars, and emergency response information for any hazardous materials the train is transporting. The United States Hazardous Materials Instructions for Rail provides guidelines for how this interaction between the train crew and responders is to occur. Each railroad may modify parts of the United States Hazardous Materials Instructions for Rail to reflect its individual policies. Our analysis of the instructions provided by selected Class I, II, and III railroads found that they had generally consistent guidance on train crew cooperation with emergency responders. In each of the instructions that we reviewed, the train crew is expected to immediately share any requested information from the train documents with emergency response personnel. In addition, the train crews are instructed to help emergency response personnel identify the rail cars and commodities involved, using train documents or observation from a safe distance. Five of the seven Class I railroads added information to their hazardous materials instructions on the process for sharing information with emergency responders. As an example, the guidance of one Class I railroad says, "If an extra copy is not available, share (DO NOT SURRENDER) the copy you have with the emergency response personnel." One Class II railroad asks its employees to note the time, along with the name and title of the person provided with the (emergency response) information.

The training emergency responders receive informs the actions they take following a rail hazardous materials accident. For example, according to one emergency response association we interviewed, emergency responders with basic training, called awareness level training, receive training on the ERG, how to read train documents, and the other sources of emergency response information that provide more specific information on hazardous materials. On the other hand, according to one emergency response association and one local responder, responders with more advanced training may consult other sources of information even in the first 30 minutes of a response. For example, a hazardous materials technician for a local responder in Montgomery County, MD, told us that he does not use the ERG, but instead relies on the NIOSH pocket guide in the first 30 minutes of an incident because it provides more precise information than the ERG.

Training for emergency responders may be provided by emergency response associations, railroads, or state and local emergency management agencies, among others. According to PHMSA officials, the agency also conducts outreach to emergency responders to train them on the ERG, including on any changes if a new version is forthcoming.
For example, PHMSA officials visited 46 firehouses in fiscal year 2016—including visits in Olympia, WA; Houston, TX; and Greenville, SC—to provide training on the ERG. PHMSA also developed a new online training program in April 2016 that introduces emergency responders to the hazardous materials regulations and that may also be used to meet the requirements for awareness level training or as the basis for developing more advanced training programs.

The content in the ERG and the supplemental emergency response information in the train documents we reviewed was generally similar, largely aligning with the seven categories of required emergency response information for hazardous materials shipments set forth in the HMR (see table 1). We used the seven requirements in the Code of Federal Regulations as a baseline by which to compare the ERG and the supplemental emergency response information. The specific content was the same in certain instances. For example, both sources recommended that, in response to an incident involving propane, responders move victims to fresh air and give them artificial respiration if they are not breathing. Content was identical in part because the ERG was one of the sources for the information in the AAR Hazardous Materials Emergency Response Database, which, as discussed earlier, was the source of the supplemental emergency response information in the train documents we reviewed.

While the general content and certain information in the ERG and the supplemental emergency response information we reviewed were similar, the ERG mostly provided emergency response information for groups of hazardous materials based on their general hazards, while the supplemental emergency response information was specific to each hazardous material onboard the train (see table 1). The supplemental emergency response information that we reviewed was intended to augment the ERG and provided more specificity in certain areas. For example, for an incident involving chlorine, the supplemental emergency response information we reviewed recommended digging a pit, pond, lagoon, or holding area to contain the spill, while the ERG generally recommended preventing entry of the spill into waterways, sewers, basements, or confined areas but did not offer a specific means by which to do so. Additionally, for an incident involving sodium hydroxide solution—which is commonly present in commercial drain and oven cleaners—the supplemental emergency response information we reviewed recommended the use of specific materials for protective clothing, such as butyl rubber and neoprene, while the ERG generally recommended wearing protective clothing recommended by the manufacturer of the hazardous material. The supplemental emergency response information we reviewed also provided information that went beyond what is required by federal regulations and included in the ERG, such as physical characteristics of the hazardous material, uses, water solubility, and environmental hazards.

However, in one area, the ERG often provided more detail than the supplemental emergency response information we reviewed. The ERG provided specific initial isolation and evacuation distances for incidents involving groups of hazardous materials, while the supplemental information offered no distance recommendations or deferred to the ERG in most of the train documents we reviewed.
For example, for a large spill of gasoline, the ERG recommended an initial downwind evacuation of at least 300 meters, while the supplemental emergency response information we reviewed said to consult the ERG for all evacuation distances. The ERG and the supplemental emergency response information we reviewed also differed at times in the type of information given for certain emergency response recommendations. For example, for handling spills or leaks, the ERG often provided recommendations according to the size of the incident, such as small or large spills, while the supplemental information we reviewed often provided recommendations according to the environment, such as air, land, or water spills. In reviewing the ERG and the train documents in our nonprobability sample from our selected Class I, II, and III railroads, we found inconsistent information for 8 of the 72 hazardous materials we selected. Two inconsistencies involved differences in a first aid response recommendation regarding the amount of time to flush skin or eyes with running water in case of contact with the substance; the recommendations differed by 5 minutes in one instance and 10 minutes in another. Six inconsistencies involved discrepancies between the recommended evacuation distances in the two sources:
o For four of the six hazardous materials with evacuation distance inconsistencies, the supplemental emergency response information recommended an evacuation distance of a half mile for an incident involving fire, while the ERG recommended one mile. These four hazardous materials were all labeled as United Nations identification number 1075, which represents multiple liquefied petroleum gases.
o For another of the hazardous materials, sodium chlorate, the ERG recommended an evacuation distance of a half mile, while the supplemental emergency response information recommended the same distance, but only if the resulting fire was uncontrollable.
o For the final hazardous material, ammonium nitrate, the ERG recommended an evacuation distance of a half mile in all directions for an incident involving fire, while the supplemental emergency response information recommended one mile for an uncontrollable fire.
The NTSB report on the Paulsboro, New Jersey, incident highlighted inconsistencies between recommended evacuation distances in the supplemental emergency response information in the train documents and the ERG for two of the hazardous materials on the train, chlorine and vinyl chloride. This finding led to NTSB’s recommendation that AAR update its database to ensure that its guidance is consistent with and at least as protective as the ERG. In response to NTSB’s recommendation, AAR replaced all existing evacuation distance statements in its Hazardous Materials Emergency Response Database, effective August 1, 2014, with a statement to consult the ERG for protective action considerations, including initial isolation or evacuation distances and shelter-in-place recommendations. Additionally, AAR made other changes to the database effective December 1, 2014. NTSB found AAR’s response actions to be unacceptable because AAR did not revise emergency response information that was less conservative than the equivalent precautions contained in the ERG.
As described above, our analysis showed that some railroads did not capture these changes, including the new evacuation distance recommendations, and continued to provide specific evacuation distances for hazardous materials in their train documents that were inconsistent with the ERG. AAR is planning a change that is intended to remove the potential for such discrepancies. Specifically, according to AAR, AAR hazardous materials committee members—which consist of representatives of the seven Class I railroads—unanimously voted in August 2016 to discontinue the support, production, and distribution of the AAR Hazardous Materials Emergency Response Database. According to AAR, although the Class II and III railroads did not vote on the change, the database will no longer be supported, produced, or distributed, making it effective for all railroads. Because emergency responders have access to the ERG and other resources with more specific information than the ERG—such as WISER, the NIOSH pocket guide, safety data sheets, CHEMTREC, and the shipper—the supplemental emergency response information has become obsolete, according to one AAR official. We provided a draft of this report to DOT and NTSB for their review and comment. DOT provided a technical comment about hazardous material training requirements, which we incorporated. NTSB provided a technical comment about AAR’s response to its recommendation to resolve discrepancies between emergency response information found in AAR’s database and the ERG, which we incorporated. We will send copies of this report to the appropriate congressional committees and to the Secretary of Transportation and the Chairman of the National Transportation Safety Board. In addition, the report will be available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions about this report, please contact Susan Fleming at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II. Our objectives were to examine: (1) what emergency response information is carried on trains by selected railroads that transport hazardous materials and how responders use it and (2) how the supplemental emergency response information carried on trains of these railroads compares to the information in the Emergency Response Guidebook (ERG). To inform both of our objectives, we reviewed relevant literature, including the National Transportation Safety Board (NTSB) report on the Paulsboro, New Jersey, incident, the ERG, and a prior GAO report on emergency response to rail incidents. We reviewed the Association of American Railroads’ (AAR) United States Hazardous Materials Instructions for Rail to understand industry guidelines on how to meet federal hazardous material regulations, including the emergency response information to be carried on trains and how railroad personnel are expected to interact with first responders. We also examined relevant sections of the Hazardous Materials Regulations (HMR) to determine requirements for railroads related to emergency response information carried on trains transporting hazardous materials.
Additionally, we interviewed officials from the Pipeline and Hazardous Materials Safety Administration (PHMSA) and the Federal Railroad Administration (FRA) within the Department of Transportation (DOT) and NTSB to understand their roles in developing, regulating, and making recommendations regarding emergency response information on trains. To identify what emergency response information is carried on trains by selected railroads that transport hazardous materials and how responders use it, we interviewed two railroad associations, AAR and the American Short Line and Regional Railroad Association (ASLRRA), and all seven Class I railroads. ASLRRA told us it represents approximately 450 of the about 550 Class II and III railroads, of which 300 to 400 transport hazardous materials. We selected seven Class II and seven Class III railroads that carry hazardous materials and are ASLRRA members, using PHMSA’s Office of Hazardous Materials Safety Incident Reports Database and a member list provided by ASLRRA. We searched PHMSA’s database for railroads that experienced an incident in transit during 2015, which resulted in a list of railroads that carry hazardous materials that we could cross-reference with the ASLRRA member list. The railroads we selected are also geographically distributed across the United States. Of those 14 selected Class II and III railroads, we spoke with representatives of five Class II and six Class III railroads. The other three did not respond to our requests for an interview. The results of the interviews cannot be generalized to the entire population of Class II and Class III railroads. As described below, we also reviewed information on hazardous materials in selected train documents from some of these railroads. Fifteen (7 Class Is, 4 Class IIs, and 4 Class IIIs) of the 18 railroads we interviewed provided us with at least one set of train documents. We selected one set of train documents from each of those railroads to determine how the basic description and technical name of the hazardous materials the train is transporting, as well as the emergency response telephone number, are displayed. The results of the analysis are not generalizable to all of the train documents of the selected railroads or all railroads. We also spoke with CHEMTREC regarding its role in providing information to first responders. Additionally, we interviewed three of the five largest shippers of hazardous materials in the United States—ExxonMobil, Dow Chemical Company, and BASF—to determine what emergency response information they provide to railroads and emergency responders. CHEMTREC identified and provided the contact information for the three shippers based on its criteria and resources. We also interviewed local emergency responders in Montgomery County, MD, Westmoreland County, PA, and Culbertson, MT, to learn their perspectives on the emergency response information carried on trains and how and when emergency responders use the information. We chose the first organization because of its proximity to the audit team in Washington, D.C., and the other two because of their involvement in responding to rail incidents involving hazardous materials in 2014 and 2015, respectively. We determined their involvement by searching PHMSA’s Incident Reports Database for serious rail incidents in transit over the last 5 years that resulted in a hazardous materials release and talking to the first responders associated with the city or county listed in the database.
We also interviewed representatives from four emergency response associations—three national associations representing local emergency responders, including the International Association of Fire Chiefs, International Association of Firefighters, and the National Volunteer Fire Council, as well as the National Fire Protection Association, which develops, among other things, standards for emergency response to hazardous materials incidents. Additionally, we spoke with two train crew unions—the Brotherhood of Locomotive Engineers and Trainmen and the International Association of Sheet Metal, Air, Rail and Transportation Workers—to understand the role of the train crews in emergency response and how they interact with emergency responders following a rail incident. To understand how the supplemental emergency response information carried on trains of selected railroads compares to the information in the ERG, we analyzed information on a nonprobability sample of hazardous materials discussed in the ERG and in train documents. We used the 2012 ERG, as opposed to the recently released 2016 ERG, because not all of the selected railroads had begun using the 2016 version during the time frame in which the sample was taken. We asked each of the 18 railroads that we interviewed to provide us with a nonprobability sample of their train documents, including the “train consist” and any supplemental emergency response information, for 15 trains carrying at least two different hazardous materials and traveling between May 12, 2016, and June 30, 2016. Eleven of the 18 railroads (6 Class Is, 2 Class IIs, and 3 Class IIIs) provided train documents that contained supplemental emergency response information. From those train documents, we selected a sample of 72 unique hazardous materials that were in either AAR’s top 125 hazardous commodities list as measured by loaded tank car originations or top 25 hazardous commodities list as measured by loaded non-tank-car originations (e.g., intermodal trailers or containers on flat cars) in 2014. The sample represented 70 unique sets of train documents from 10 of the 11 railroads that carried the supplemental emergency response information. The sample included 10 hazardous materials in 10 sets of train documents from five of the six Class I railroads, 10 in 8 sets of train documents from the sixth Class I railroad, 5 in 5 sets of train documents from both Class II railroads, and 1 in 1 set of train documents from two of the three Class III railroads. To make the comparisons, we first determined which parts of the ERG and the supplemental emergency response information in the train documents contained information that is associated with the seven requirements for emergency response information outlined in the HMR. We then examined each hazardous material in the sample by comparing the relevant sections in each source and determining where there were similarities and differences, as well as any conflicting information. The results of our analysis are not generalizable to all train documents or all hazardous materials in the ERG. Furthermore, AAR provided us access to its Hazardous Materials Emergency Response Database. We determined that the supplemental emergency response information associated with the sample of hazardous materials in the reviewed train documents from the 10 railroads was reliable for the purposes of our report and objectives because all of the information came from the AAR Hazardous Materials Emergency Response Database.
We determined that the database was the appropriate source of the information in the reviewed train documents through interviews with officials from each of the 10 railroads. We compared the supplemental emergency response information on the sample of hazardous materials in the reviewed train documents to the information on those hazardous materials in the source database for the time period reflected by the dates of the train documents (May 12, 2016, to June 30, 2016). The results of our analysis are not generalizable to all train documents or all hazardous materials in the AAR Hazardous Materials Emergency Response Database. We also interviewed supply chain software providers ShipXpress and GE Transportation to learn how the Class II and Class III railroads receive access to the AAR Hazardous Materials Emergency Response Database. We conducted this performance audit from March 2016 through December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Nancy Lueke (Assistant Director), Kieran McCarthy (Analyst in Charge), Moira Lenox, Garrett Riba, Josh Ormond, William Egar, Dave Hooper, Delwen Jones, Reuben Montes de Oca, and Kelly Rubin made key contributions to this report. | In November 2012, a train derailed in Paulsboro, New Jersey, releasing about 20,000 gallons of vinyl chloride, a hazardous material. The National Transportation Safety Board (NTSB) found, among other issues, that the supplemental information in the train's documents on responding to emergencies involving vinyl chloride was inconsistent with and less protective than emergency response guidance in the ERG. Congress included a provision in statute for GAO to evaluate the differences between the emergency response information carried by trains transporting hazardous materials and the ERG guidance. This report examines (1) what emergency response information is carried on trains by selected railroads transporting hazardous materials and how responders use it, and (2) how selected railroads' supplemental emergency response information compares to information in the ERG. GAO reviewed the ERG and other relevant literature and met with DOT and NTSB officials, among others. GAO interviewed all 7 larger Class I railroads and 11 smaller Class II and III railroads that carried hazardous materials in 2015. GAO compared the supplemental emergency response information with ERG information for 72 frequently shipped hazardous materials from a nonprobability sample of train documents provided by 10 of the 18 selected railroads. To help emergency responders safely handle rail accidents involving hazardous materials, selected railroads transporting hazardous materials typically carry two sources of information: the Department of Transportation's (DOT) Emergency Response Guidebook (ERG) and information in the trains' documents. Federal Hazardous Material Regulations require railroads and other hazardous material transporters to carry emergency response information that describes immediate hazards to health and risks of fire or explosion, among other things.
Representatives from all 18 railroads GAO interviewed told GAO that they carry the ERG on their trains. According to DOT officials, the ERG's use is not required by regulation, but the rail industry views it as a national standard for emergency response information. GAO's review of selected train documents showed that they always have a basic description of each hazardous material being transported, including the identification number and proper shipping name, as well as an emergency response telephone number. Six of the 7 Class I railroads and 5 of the 11 selected Class II and III railroads also included emergency response information in these documents. According to four emergency response associations, in the first 30 minutes after a rail incident, emergency responders primarily use the train documents to locate and identify hazardous materials and use the ERG to identify potential response actions. In some cases, the ERG differed from the supplemental emergency response information, which is provided by the Association of American Railroads' (AAR) Hazardous Materials Emergency Response Database. AAR decided in August 2016 to discontinue the database, removing the potential for discrepancies between the ERG and the supplemental emergency response information from AAR going forward. GAO is not making recommendations. DOT and NTSB provided technical comments, which GAO incorporated. |
The United States is currently undergoing a transition from analog to digital broadcast television, often referred to as the DTV transition. The transition will enable the government to allocate valuable spectrum from analog broadcast to public safety and other purposes. Further, digital transmission of television signals provides several advantages compared to analog transmission, such as enabling better-quality picture and sound reception as well as using the radiofrequency spectrum more efficiently than analog transmission. With traditional analog technology, pictures and sounds are converted into “waveform” electrical signals for transmission through the radiofrequency spectrum, while digital technology converts these pictures and sounds into a stream of digits consisting of zeros and ones for transmission. The Digital Television Transition and Public Safety Act of 2005 addresses the responsibilities of two federal agencies—FCC and NTIA—related to the DTV transition. The act directs FCC to require full-power television stations to cease analog broadcasting and to broadcast solely digital transmissions after February 17, 2009. As we have previously reported, households with analog televisions that rely solely on over-the-air television signals received through a rooftop antenna or indoor antenna must take action to be able to view digital broadcast signals after the termination of analog broadcasts. Options available to these households include (1) purchasing a digital television set that includes a tuner capable of receiving, processing, and displaying a digital signal; (2) purchasing a digital-to-analog converter box, which converts the digital broadcast signals to analog so they can be viewed on an existing analog set; or (3) subscribing to a cable, satellite, or other service to eliminate the need to acquire a digital-to-analog converter box. The act also directed NTIA to establish a $1.5 billion subsidy program through which households can obtain coupons toward the purchase of digital-to-analog converter boxes. The last day for consumers to request coupons is March 31, 2009, and coupons can be redeemed through July 9, 2009. As required by law, all coupons expire 90 days after issuance. Consumers can redeem their coupons at participating retailers (both “brick and mortar” and online) for eligible converter boxes. To help inform consumers about the transition, eight private sector organizations launched the DTV Transition Coalition in February 2007. These eight organizations are the Association for Maximum Service Television, Association of Public Television Stations, Consumer Electronics Association, Consumer Electronic Retailers Coalition, Leadership Conference on Civil Rights, LG Electronics, National Association of Broadcasters, and the National Cable and Telecommunications Association. These founding organizations comprise the Coalition’s steering committee and make decisions on behalf of the Coalition. To better represent the interests of at-risk or underserved populations—such as the elderly—AARP later joined the steering committee. The Coalition’s mission is to ensure that no consumer is left without broadcast television due to a lack of information about the transition. Currently, the Coalition has over 160 member organizations composed of business, trade, and industry groups, as well as FCC. Recent surveys conducted by industry trade associations indicate that consumer awareness of the digital transition is low.
The Association of Public Television Stations reported in January 2007 that 61 percent of participants surveyed had “no idea” that the transition was taking place. Another study conducted by the National Association of Broadcasters focused on households that primarily receive their television signals over the air—and will therefore be most affected by the transition—and reported that 57 percent of those surveyed were not aware of the transition. Both surveys found that most people with some awareness of the transition had limited awareness of the date the transition will take place. Federal and private stakeholders are making progress in educating consumers about the DTV transition, with both independent and coordinated efforts underway. FCC and NTIA have been involved in consumer education and awareness programs and some private sector organizations are voluntarily taking the lead on outreach efforts. FCC has taken several steps toward educating consumers about the transition. For example, FCC has launched a Web site (DTV.gov), which, among other things, provides background information on the DTV transition and answers common consumer questions. In addition, FCC has met with some industry groups, consumer groups, and other government agencies and participated in public events intended to educate audiences about the transition. Moreover, in April 2007, FCC adopted a rule requiring all sellers of television-receiving equipment that does not include a digital tuner to prominently display a consumer alert that such devices will require a converter box to receive over-the-air broadcast television after February 17, 2009. To ensure that retailers are in compliance, FCC staff have inspected over 1,000 retail stores and Web sites and issued over 250 citations with potential fines exceeding $3 million. In addition, FCC has issued notices to television manufacturers with potential fines over $2.5 million for importing televisions without digital tuners. In June 2007, FCC announced that it had re-chartered an intergovernmental advisory committee composed of 15 representatives from local, state, and tribal governments to help it address, among other things, consumer education about the DTV transition. Similarly, it re-chartered a consumer advisory committee that will also make recommendations to FCC about the transition on behalf of consumers, with specific representation for people with disabilities and other underserved or at-risk populations. NTIA has also taken initial steps towards educating consumers about the transition. NTIA has statutory responsibility for the converter box subsidy program, for which Congress appropriated up to $5 million for education efforts. According to NTIA, its education efforts are focused on the subsidy program and more specifically on five groups most likely to lose all television service as a result of the transition: (1) senior citizens, (2) the economically disadvantaged, (3) rural residents, (4) people with disabilities, and (5) minorities. According to NTIA, it has begun outreach efforts to these groups through partnerships with private organizations as well as other federal agencies. Also, it has created “information sheets” for consumers, retailers, and manufacturers that outline the subsidy program and are available on its Web site. NTIA said it has provided informational brochures in English and Spanish to the public and provided a copy to every member of Congress and federal agencies that serve some of the populations noted above.
The agency also created a consumer hotline that provides information about the transition in English and Spanish, and TTY numbers that provide information in English and Spanish to the hearing impaired. In addition, in August 2007, NTIA contracted with IBM to implement the broad consumer education component about the program. On a voluntary basis, some private stakeholders have begun implementing measures to inform consumers about the DTV transition. As previously mentioned, one such private-sector-led effort is the DTV Transition Coalition, which has developed and consumer-tested various messages about the transition, using surveys and focus groups of the affected consumers—the general population, senior citizens, minority groups, and over-the-air analog television households—to understand what messages are most effective in informing them about the transition. Subsequently, the Coalition said it agreed upon one concise message that includes information about the transition itself, the rationale for the transition, and the ways consumers can effectively switch to DTV. In particular, the Coalition suggests consumers can prepare for the transition by purchasing a DTV converter box, purchasing a new television set with a built-in digital tuner, or subscribing to a pay television service such as cable, satellite, or telephone company video service provider. The Coalition said its member organizations will distribute this information to their constituents, including senior citizens, the disabled, and minority groups. The Coalition message will also be delivered to media outlets. In addition to coordinated efforts within the Coalition, private sector organizations also have independent education efforts underway. For example, a number of industry associations host Web sites that, among other things, answer common consumer questions about the transition, explain how to check whether a television is digital-ready, and describe how to dispose of analog television sets. One national retailer told us that it added a feature to its registers so that when a consumer purchases an analog television, a message about the transition is printed on the bottom of the receipt. Widespread and comprehensive consumer education efforts have yet to be implemented, but additional efforts are currently being planned. FCC, NTIA, and private sector stakeholders have plans to further educate consumers as the digital transition nears. The converter box subsidy program, to be administered by NTIA, will also have a consumer education component implemented by its contractor, IBM. Because many education efforts are in the planning or initial stages of implementation, it is too early to tell how effective these efforts will be. FCC has solicited input on proposed consumer education programs. In August 2007, in response to a letter containing proposals on advancing consumer education submitted by members of Congress, FCC released a notice of proposed rulemaking soliciting public comments. These proposals include requiring television broadcasters to conduct on-air consumer education efforts and regularly report on the status of these efforts, requiring cable and satellite providers to insert periodic notices in customers’ bills about the transition and their future viewing options, and requiring manufacturers to include information on the transition with any television set or related device they import or distribute in the United States. Each of the proposed requirements includes civil penalties for noncompliance.
Another proposal on which FCC sought comment would have FCC work with NTIA to require that retailers participating in the converter box subsidy program detail their employee training and consumer information plans, as well as have FCC staff spot check the retailers for compliance. Also, FCC sought comments on a proposal requiring partners identified on FCC’s DTV.gov Web site to report their specific consumer outreach efforts. The comment period on the notice of proposed rulemaking is scheduled to close on September 19, 2007; the period to file any rebuttal closes October 1, 2007. NTIA also has not fully implemented education efforts about its subsidy program, in large part because it is contracting out the consumer education component of its program. The contract was awarded to IBM in mid-August 2007, and plans are in the development stage. Many private sector consumer education efforts are in the planning stages and have yet to be fully implemented. Representatives from private sector organizations told us there are several reasons why they are waiting to fully launch their consumer education campaigns. In particular, some said they are trying to time their education efforts for maximum effectiveness and that they do not want to start too early and possibly lose the attention of consumers later on. Another reason is that they are waiting for key events to occur, such as the availability of converter boxes in retail stores, so that education efforts can contain complete information. A number of nonprofit organizations told us that a lack of dedicated funding hampers their ability to educate and conduct outreach to their constituents. Through its many member organizations, the DTV Transition Coalition intends to disseminate information about the transition in a variety of formats, including through presenting at conferences, creating media attention, and distributing informational materials to Congressional offices. The National Cable and Telecommunications Association has created public service announcements about the transition in both Spanish and English, which will be aired by cable operators and networks in markets throughout the country in the fall of 2007. The National Association of Broadcasters also has plans to launch a public service announcement campaign related to the transition by the end of 2007, which will air on its local television broadcasting affiliates, independent stations, and broadcast networks. Despite efforts currently underway and those being planned, difficulties remain in the implementation of consumer education programs. Private sector organizations are participating in outreach efforts, but these actions are voluntary and therefore the government cannot be assured of the extent of private sector efforts. Moreover, given the different interests represented by industry stakeholders, messages directed at consumers vary and might lead to confusion. For example, in addition to providing information about why the transition is occurring, some industry stakeholders have incentives to provide consumers with information on a wide array of technology equipment or services that consumers could purchase, at varying costs. Advocates for the elderly, the disabled, and non-English-speaking households told us that they are concerned that their members will become confused by the options and end up purchasing equipment they do not need or more expensive equipment than necessary to maintain their television viewing.
Further, we heard from strategic communication experts from industry, government, and academia that potential challenges might obstruct consumer education efforts. In particular, the experts and others highlighted several challenges:
Prioritizing limited resources. With limited time and financial resources, it is likely to be a challenge for stakeholders to determine how best to allocate those resources within the campaign—for example, whether to target a smaller audience over a set period of time or a broader audience over a shorter period of time. This trade-off is relevant because, according to industry stakeholders, some groups may be more vulnerable than others to losing television service.
Educating consumers who do not necessarily need to take action. Many of the outreach efforts will be focused on educating consumers on what to do to keep their television sets from going dark after the termination of analog broadcasts. However, a large proportion of U.S. households will not need to do anything—for example, because they have cable or satellite television service that will enable their analog set to continue to display programming. Because many messages focus on the actions that households that rely on over-the-air analog broadcasting need to take, consumers unaffected by the transition might become confused and purchase equipment they do not need. In our past work looking at a similar digital transition in Germany, we described this potential for confusion among cable and satellite households as a challenge in educating consumers about the transition.
Reaching underserved populations. Conveying the message to underserved populations, such as senior citizens, the disabled, those residing in rural areas, or non-English-speaking households, will provide an added challenge. Many groups reaching out to consumers about the transition are doing so on Web sites, which may not be available to people who lack Internet access or are less technically savvy. Another challenge is providing information in a wide variety of formats, such as in different languages for non-English-speaking consumers and in text, video, voice, and Braille for the disabled. Overall, a challenge of consumer education is that the households that need to take action may be the least likely to be aware of the transition.
Aligning stakeholders. Industry representatives also noted the challenge of aligning stakeholders—some of whom are natural competitors—to work together. In our past work, we have reported that federal agencies engaged in collaborative efforts—such as the transition—need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. Reporting on these activities can help key decision makers within the agencies, as well as clients and stakeholders, to obtain feedback for improving both policy and operational effectiveness. Some progress in aligning stakeholders, such as the formation of the DTV Transition Coalition, has been made, but some stakeholders may have competing interests. For example, recent announcements produced by the National Cable and Telecommunications Association invoke the DTV transition, but ultimately promote the role of cable television in the transition.
In our ongoing work for the House Energy and Commerce Committee and this committee, we plan to assess the progress of consumer education and awareness about the DTV transition.
We will continue to monitor consumer education programs and plan to conduct a series of consumer surveys throughout the year prior to the transition date. These surveys will be aimed at determining the population that will be affected by the DTV transition and public awareness of the transition. In determining the affected population, we will look at the percentage of the population relying on over-the-air broadcasts for their primary televisions, as well as the percentage of the population with non-primary televisions being used to watch over-the-air television. Additionally, we will review the demographic characteristics of the affected population to determine what groups might be most disrupted by the transition. We will survey public awareness of the DTV transition and specific knowledge of the transition, such as when it will take place. We will seek to determine the level of public awareness of those who will be affected by the transition and awareness of the converter box subsidy program and other options for viewing digital signals after the transition. We plan to report on changes in consumer awareness over time by conducting surveys throughout the transition process. Furthermore, we will continue to assess government and industry consumer education efforts and will analyze the efforts compared with key practices for consumer outreach. We will review the government’s responsibility for consumer education, monitor the outcome of FCC’s notices of proposed rulemaking regarding the transition, and collect details on IBM’s consumer education plan as they become available. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For questions regarding this testimony, please contact Mark L. Goldstein at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony included Matthew Cail, Colin Fallon, Simon Galed, Bert Japikse, Crystal Jones, Sally Moino, Andrew Stavisky, and Margaret Vo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Federal law requires all full-power television stations in the United States to cease analog broadcasting and broadcast digital-only transmissions after February 17, 2009, a change often referred to as the digital television (DTV) transition. Federal law also requires the National Telecommunications and Information Administration (NTIA) to create a program that subsidizes consumers' purchases of digital-to-analog converter boxes. After the transition, households with analog sets that rely on over-the-air broadcast signals must take action or they will lose television service, but some households might not be aware of this potential disruption. This testimony provides preliminary information on (1) the consumer education efforts currently underway, (2) education efforts being planned, (3) difficulties with the implementation of consumer education programs, and (4) ongoing GAO work on consumer education and awareness regarding the transition. GAO interviewed officials with the Federal Communications Commission (FCC) and NTIA.
Further, GAO met with a wide variety of industry and other stakeholders involved with the transition, including members of the DTV Transition Coalition, a group of public and private stakeholders, as well as experts on strategic communications. GAO discussed this testimony with FCC and NTIA officials and incorporated their comments. A number of federal and private stakeholders have begun consumer education campaigns, with both independent and coordinated efforts underway. FCC has taken several steps to promote consumer awareness, such as launching a Web site, participating in events intended to educate the public, and requiring sellers of televisions to include consumer alerts on non-digital televisions. NTIA has created brochures in English and Spanish to provide the public information about its converter box subsidy program and is partnering with organizations to perform outreach to disadvantaged groups. Earlier this year, the DTV Transition Coalition was launched to help ensure that no consumer is left without broadcast television due to a lack of information. Over 160 private, public, and non-profit groups have joined the Coalition to coordinate consumer education efforts. While widespread and comprehensive consumer education efforts have yet to be implemented, various efforts are currently being planned. FCC, NTIA, and private sector stakeholders have plans to further educate consumers as the DTV transition nears. For example, voluntary public service announcements to raise awareness of the transition are planned by industry groups, and FCC is considering requiring broadcasters, manufacturers, and cable and satellite providers to insert various messages and alerts in their products and programming. In addition, the converter box subsidy program will have a consumer education component. Because many education efforts are in the planning or early stages of implementation, it is too early to tell how effective these efforts will be. Various factors make consumer education difficult. While private sector stakeholders are participating in outreach efforts, these actions are voluntary and therefore the government cannot be assured of the extent of private sector efforts. Strategic communications experts from industry, government, and academia identified potential challenges to a consumer education campaign, including (1) prioritizing limited resources to target the right audience, (2) educating consumers to help protect them from making unnecessary purchases, (3) reaching underserved populations, and (4) aligning stakeholders to form a consistent, coordinated effort. GAO has work planned to assess the progress of consumer awareness. In particular, GAO plans to conduct a series of surveys to determine the population affected by the DTV transition, levels of awareness about the transition, and demographic information about the affected population. Throughout the transition, GAO will continue to monitor government and industry education efforts and analyze these efforts relative to best practices for consumer education campaigns. GAO plans to review the government's responsibility for consumer education, monitor the outcome of FCC's rulemaking related to consumer education, and collect details of the consumer education component of the converter box subsidy program. |
In carrying out LSC’s mission, local legal-service providers (the grant recipients) employ staff attorneys to assist eligible clients in resolving their civil legal problems, often through advice and referral. According to LSC, in a typical year the largest portion of total cases (38 percent) concerns family matters, followed by housing issues (24 percent), income maintenance (13 percent), and consumer finance (12 percent). LSC reported that most cases are resolved out of court. In 2007, LSC reported that three out of four clients were women, most of them mothers. To be eligible, clients must meet certain requirements. First, individual applicants for legal assistance supported by LSC funds must meet financial eligibility requirements. LSC has statutory authority to assist only “eligible clients,” which are defined as “any person financially unable to afford legal assistance.” LSC’s regulations include additional criteria to help determine whether a potential client is eligible for assistance from LSC. These regulations require that organizations receiving LSC grants adopt financial eligibility policies within the income limits set by LSC, which are at or below 125 percent of the current Federal Poverty Guidelines amounts—an income of approximately $25,000 for a family of four (a worked example of this calculation appears below). Second, there are also legal restrictions on access to LSC-supported legal assistance by aliens. The LSC Act prohibits LSC personnel and grant recipients or their employees from engaging in certain activities, such as providing legal assistance with respect to any fee-generating case, providing legal assistance related to a criminal proceeding, supporting or conducting training programs for the purpose of advocating particular public policies or encouraging political activities, providing legal assistance in civil actions to persons who have been convicted of a criminal charge, or participating in litigation related to an abortion. In addition, LSC cannot provide funds for legal services for a proceeding related to a violation of the Military Selective Service Act. The LSC Board of Directors, which is charged with managing the affairs of the corporation, is responsible for ensuring compliance with these restrictions. The LSC Act established the LSC Board and specified that the board members shall annually select a Chairman and appoint an LSC President. The D.C. Nonprofit Corporation Act, which generally applies to LSC as a D.C. nonprofit corporation, provides that the affairs of the corporation shall be managed by the board of directors and permits the board of directors to delegate some of the authority to perform management duties to corporate officers. Our recently issued report, Legal Services Corporation: Governance and Accountability Practices Need to Be Modernized and Strengthened, discusses LSC, its unique status, and the rigorous controls necessary to protect the heavily federally funded entity. As an independent office within LSC, the LSC OIG is authorized to carry out audits and investigations of LSC programs and operations, recommend policies to improve program administration and operations, and keep the LSC board and Congress fully and currently informed about problems in program administration and operations and the need for and progress of corrective action. Also, LSC is subject to congressional oversight through the annual appropriations process as well as responding to congressional inquiries and participating in hearings.
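To make the income ceiling concrete, the sketch below computes 125 percent of the Federal Poverty Guidelines amount. It is a minimal illustration, assuming the 2006 HHS guideline of $20,000 for a four-person household in the 48 contiguous states; guideline amounts change annually and vary by household size and location, and the function names are ours, not part of any LSC screening tool.

```python
# Illustrative sketch of the LSC income-eligibility ceiling: 125 percent of
# the Federal Poverty Guidelines. The guideline below is the 2006 HHS amount
# for a 4-person household in the 48 contiguous states (an assumption for
# illustration); actual screening uses current guidelines and grantee policies.

POVERTY_GUIDELINE_FAMILY_OF_4 = 20_000  # 2006 HHS guideline (assumed here)
LSC_CEILING_MULTIPLIER = 1.25           # at or below 125 percent of the guideline

def income_ceiling(poverty_guideline: float) -> float:
    """Return the maximum annual income for LSC financial eligibility."""
    return poverty_guideline * LSC_CEILING_MULTIPLIER

def is_financially_eligible(annual_income: float, poverty_guideline: float) -> bool:
    """True if income is at or below 125 percent of the poverty guideline."""
    return annual_income <= income_ceiling(poverty_guideline)

print(income_ceiling(POVERTY_GUIDELINE_FAMILY_OF_4))                   # 25000.0
print(is_financially_eligible(24_000, POVERTY_GUIDELINE_FAMILY_OF_4))  # True
```

This matches the approximately $25,000 figure cited above: 1.25 times $20,000 is $25,000.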
As shown in figure 1, since 1991 LSC’s annual federal funding has ranged from a high of $401.6 million in 1995 to a low of $279.1 million in 1996, with recent years’ appropriations (which make up most of the federal funding) remaining fairly consistent at around $330 million. In the appropriation for LSC, Congress regularly designates a specific amount for the OIG. For example, the resulting allocations for the OIG were about $2.97 million in fiscal year 2007 and about $2.51 million in fiscal year 2006. LSC uses the majority of its funding to provide grants to local legal-service providers. Most of LSC’s approximately $330 million in annual federal funding of recent years has been designated for grants. Funds are distributed based on the number of low-income persons living within a service area (a simple illustration of this proportional allocation appears below), and some grantees maintain several offices within their service area. Beginning in 1996, the administrative provisions included each year in the acts making appropriations to LSC have required that grants be awarded through a system of competition and that LSC management issue regulations to implement this requirement. According to LSC management, one purpose of the competitive grants process is to encourage the economical and effective delivery of assistance to eligible clients. This represented a major change in the legal-services delivery system, eliminating the automatic renewal of funding as permitted by the LSC Act and practiced by LSC. After a final decision has been issued by LSC management terminating financial assistance to a recipient in whole for any service area, LSC management is required to implement a new competitive bidding process for the affected service area pursuant to implementing regulations. We found weaknesses in LSC’s internal controls that negatively affected LSC’s ability to monitor and oversee grants and left grant funds vulnerable to misuse. We also found poor fiscal practices and improper or potentially improper expenditures at grantees we visited. LSC’s control environment contains several weaknesses, including the lack of clearly defined roles and responsibilities among the three organizational units providing oversight of grantees—OPP, OCE, and the OIG. In addition, the OIG’s and OCE’s shared authority to oversee grantee financial internal controls and fiscal compliance has resulted in confusion about responsibility for grantee financial oversight. Poor communication and coordination between the oversight offices further impedes LSC’s ability to effectively oversee grantees. Furthermore, LSC’s control activities for monitoring grantee fiscal compliance are limited in scope and do not result in timely feedback to grantees. In addition, LSC does not utilize a structured or systematic approach for assessing risk across its 138 grantees when determining the timing and scope of its grantee oversight visits. LSC management monitors grantees through site visits and reviews conducted by two offices: the Office of Program Performance (OPP) and the Office of Compliance and Enforcement (OCE). OPP is responsible for designing and administering the competitive grant process and program evaluation. OCE is responsible for grantee compliance with the LSC Act and other laws, regulations, instructions, guidelines, and grant requirements. In addition, OCE and the OIG share responsibility for overseeing grantee financial controls and compliance.
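Because grant funds are distributed in proportion to the number of low-income persons in each service area, the allocation can be pictured as a simple proportional split. The sketch below is hypothetical: the service-area names and population counts are invented for illustration, and LSC's actual distribution formula may include details not described in this report.

```python
# Hypothetical illustration of distributing grant funds in proportion to the
# number of low-income persons in each service area. Names and counts are
# invented; LSC's actual formula may differ.

def allocate(total_funds: float, low_income_counts: dict[str, int]) -> dict[str, float]:
    """Split total_funds across service areas in proportion to their counts."""
    total_population = sum(low_income_counts.values())
    return {
        area: total_funds * count / total_population
        for area, count in low_income_counts.items()
    }

service_areas = {"Area A": 150_000, "Area B": 90_000, "Area C": 60_000}
for area, amount in allocate(300_000_000, service_areas).items():
    print(f"{area}: ${amount:,.0f}")
# Area A: $150,000,000
# Area B: $90,000,000
# Area C: $60,000,000
```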
The current roles and division of responsibilities between the OIG and OCE for oversight of grantee financial controls and compliance are not clearly defined or communicated to the two offices. We also found that communication and coordination of grantee site visits between OCE and OPP need improvement in order to achieve effective oversight and avoid gaps and duplication in oversight. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. Another factor affecting an entity’s control environment is the entity’s organizational structure. It provides management’s framework for planning, directing, and controlling operations to achieve agency objectives. A good internal control environment requires that the agency’s organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. In 1988 Congress subjected LSC to the Inspector General Act of 1978, as amended (IG Act). The IG Act provides that each designated federal entity, in this case LSC, shall transfer to the OIG “the offices, units, or other components, and the functions, powers, or duties thereof, that such head determines are properly related to the functions of the Office of Inspector General.” For example, the IG Act transferred to the Inspector General responsibility for providing policy direction for, and conducting, supervising, and coordinating audits of entity programs, such as LSC’s legal assistance grants program. Further, in April 1996, Congress enacted the appropriations act funding LSC for fiscal year 1996 (1996 Act) and included a number of administrative provisions supplementing the LSC Act requirements, including those related to grantee audits (§ 509). The 1996 Act clarified that the grantees are responsible for contracting for audits with independent public accountants (IPA), the OIG is responsible for overseeing the quality and integrity of the audit process, and LSC is responsible for resolving deficiencies and noncompliance identified in the audits and sanctioning grantees for unacceptable audits. Under the 1996 Act, IPAs follow OIG guidance and generally accepted government auditing standards in conducting their audits. These audits include an independent auditor’s opinion about whether the financial statements are fairly presented in accordance with generally accepted accounting principles, along with auditors’ reports on internal control and compliance. The 1996 Act also authorizes the OIG to conduct additional on-site monitoring, audits, and inspections. If the OIG reports to LSC management that a grantee IPA found significant reportable conditions, findings, or recommendations, then the 1996 Act provides that LSC is responsible for ensuring that these are resolved in a timely manner, including performing appropriate follow-up. If the OIG determines that a grantee’s IPA audit is unacceptable, the 1996 Act authorizes LSC, consistent with OIG recommendations, to sanction the grantee by withholding some or all of its funding until the grantee completes an acceptable audit. Thus, the OIG plays an important role in LSC grantee oversight. OPP is specifically responsible for designing and administering the competitive grants process.
In addition, OPP is responsible for (1) program evaluation and supportive follow-up; (2) developing strategies to improve program quality, including identifying areas of grantee weaknesses and following up with individual recipients; (3) promoting enhanced technology to improve client community access to services; and (4) encouraging “best practices” through the legal resource Web site, specialized help with intake and rural area delivery, and pilot projects such as loan repayment and mentoring. OPP also performs grantee program site visits. OPP’s staff totals 22 members, consisting of a Director, a Deputy Director, a senior program counsel, eight program counsels, seven program analysts, one grants coordinator, and three administrative assistants. OCE is responsible for overseeing grantee compliance with various federal laws and regulations that recipients of LSC funds must follow, including specific LSC regulations pertaining to LSC accountability. In particular, OCE reviews grantee compliance with various regulatory provisions, including the following related to fiscal accountability: fee-generating cases; use of non-LSC funds and transfers of LSC funds; private attorney involvement; subgrants; membership fees; dues; timekeeping requirements; and attorney’s fees. A summary of these provisions in the fiscal component of OCE reviews is included in appendix III. In 2006, OCE conducted fiscal compliance site visits at 24 of LSC’s 138 grantees, OPP conducted program review site visits at 32, and 3 were performed jointly. LSC presents the grantees with any findings arising from the site visits in its exit meetings and a later written report and subsequently monitors grantee actions to resolve them. OCE’s staff totals 15 members, consisting of a Director, 10 attorneys, two fiscal program analysts, and two administrative assistants. According to OCE officials, prior to 1994, LSC staff in the OCE predecessor organization conducted internal control reviews and detailed financial statement-related audits. After the transfer of many oversight functions concerning grantees’ financial statement audit responsibilities to the IPAs and the OIG, OCE stopped its financial statement audits as well as its internal control reviews of grantees, even though oversight of grantee financial controls is a basic management responsibility. OCE instead implemented a limited fiscal review of grantee compliance with selected fiscal provisions of LSC regulations. The number of staff performing this function was reduced from 12 to 2. OCE management told us that the reason for this was that fiscal oversight of grantees had become the responsibility of the OIG, which oversees IPA audits that include testing of grantee internal controls. However, LSC management has the responsibility for overseeing grantee financial controls and compliance even if it relies on the IPA audits as the sole basis for its assurance about grantee controls. Moreover, even LSC management’s reduced oversight role has been questioned by the OIG. Despite LSC’s shift to a limited compliance oversight role, the OIG recently reported that OCE’s reviews of grantee compliance were duplicative of IPA testing and concluded that most of the LSC regulations tested by OCE are already covered by the OIG’s own guidance and the reviews conducted by IPAs as part of the financial statement audits of grantees.
With compliance oversight and monitoring responsibilities divided between OCE and the OIG and program oversight activities being performed by OPP, strong coordination and communication between the three offices and clarity in their roles and responsibilities are critical for achieving effective grantee and program oversight. Under GAO’s Standards for Internal Control in the Federal Government, “For an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal as well as external events.” Our discussions with both OIG and LSC management indicated that working relationships and communications between them were strained. OCE staff have expressed confusion about their own roles and responsibilities for the more limited fiscal compliance reviews they perform, and there is contention between OCE and the OIG over unclear areas of responsibility that dates back to 1995. OCE and OIG officials indicated that to the best of their knowledge no memorandum of understanding or any other documentation implementing the board resolution to clarify the roles and responsibilities of each unit was ever drafted or implemented. We also found communication and coordination weaknesses between OPP and OCE based on interviews with LSC oversight staff, correspondence with the grantee and other documentation related to the joint OCE/OPP oversight visit that we observed, and our own observations of that joint oversight visit. As an example, during our visit to a Las Vegas grantee, we noted a lack of coordination and information sharing between OCE and OPP staff. Specifically, we found conflicting conclusions resulting from the OPP and OCE site visits to that grantee, and a lack of awareness between the two offices about their respective visits. In its report on an earlier April 2006 site visit in Las Vegas, OPP stated, “Overall, this program is in very good shape. Its delivery structure is sound, its management is excellent, and its case handling staff are performing at a high level.” During our February 2007 observation visit to the same grantee, OCE found it necessary to open an investigation after discovering several significant deficiencies with respect to the grantee’s compliance with LSC regulations. In addition, the OCE team leader on the visit stated that he was unaware of OPP’s programmatic visit. LSC’s Vice President of Program and Compliance stated that both OPP and OCE are required to share summary memorandums of their visits to grantees so that staff are aware of all visits made by both offices and properly consider the results of prior site visits when conducting their own reviews. However, as discussed in a later section of this report, LSC’s grantee site visit reports were not being completed in a timely manner and, therefore, were not available to the respective teams or to LSC management for use in communications and coordination of grantee oversight activities. In response to our finding, LSC officials acknowledged the need to further enhance internal communications and coordination between OPP and OCE to improve the overall efficiency and effectiveness of their oversight visits. LSC does not utilize a structured or systematic approach for assessing risk associated with its 138 grantees as a basis for determining the timing and scope of its grantee oversight visits.
According to GAO’s Standards for Internal Control, risk assessment requires identifying and analyzing relevant risks associated with achieving the organization’s objectives and determining how risks should be managed. In determining which grantees to visit, both OPP and OCE use an approach based primarily on time between site visits and the respective office director’s judgments. The director of OCE stated that additional factors OCE considered include complaints of noncompliance, referrals from the OIG, and discrepancies in reporting case closures. In response to a draft of this report, LSC’s President stated that other risk factors considered by OCE include the results of grantee self-inspections and potential compliance issues identified in OPP program visits and other discussions.

The director of OCE also said OCE attempts to visit every grantee on a 5½-year cycle. However, this time-based cycle is not consistently followed. For example, the second largest grant recipient, receiving over $13 million in 2006, has not been visited by OCE since at least 1996. In addition, we noted there was a 7-year lapse between OCE visits to a grantee in Las Vegas, for which OCE, as previously discussed, recently opened an investigation after discovering several significant compliance-related findings. Management has indicated it believes additional grantee reviews are needed but stated that LSC does not have sufficient personnel to do this. OCE occasionally supplements its staff of two analysts who conduct fiscal reviews with additional contract staff, and officials told us they plan to hire additional staff to conduct site visits on a 3- to 3½-year cycle by 2009.

In 2006, LSC had 138 different grantees with more than 900 offices serving all 50 states, the District of Columbia, and current and former U.S. territories, and it had conducted fiscal compliance reviews at 24 of these grantees (17 percent). With this scope of grantee operations and a limited LSC oversight staff, an approach based on elapsed time and informal judgments is not adequate because it lacks analytical rigor and does not provide adequate assurance that risks are being properly addressed. Specifically, risk analysis should make a reasonable effort to identify risk, including inherent risk, based on all information sources available; assess the significance and likelihood of occurrence of the risk; and factor this into the decision about the scope and timing of oversight visits. However, LSC’s processes are not designed to identify risk in a comprehensive manner because they do not consider relevant risk factors, including, for example, inherent risks due to program size or changes in grantee management or systems. Without a more structured process for selecting grantees to review, LSC does not have an analytical basis to know whether it has the proper level of staff resources assigned to the grantee review function or whether it is gaining an adequate level of assurance for the number of staff assigned to grantee review activities. (A simple illustration of such a risk-scoring approach appears below.)

LSC’s control activities for monitoring grantee internal control systems do not reasonably assure that grant funds are being used properly and that grantees are in compliance with laws and regulations. OCE’s fiscal oversight was limited in scope, and feedback was not provided to the grantees. At both of our observation visits, we noted that staff did not follow up on questionable transactions and relied too heavily on information obtained through interviews without corroborating the information.
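To make the notion of a structured, risk-based selection process concrete, the sketch below scores grantees with a weighted model. This is a hypothetical illustration, not LSC's methodology or a GAO-prescribed formula: the factor names echo those discussed in this report (time since the last visit, grant size, complaints, OIG referrals, and reporting discrepancies), but the weights, scores, and data are assumptions.

    # Hypothetical illustration of a weighted, risk-based approach to
    # selecting grantees for oversight visits. The factors echo those
    # discussed in this report, but the weights, scores, and model are
    # assumptions for illustration only, not LSC methodology.

    WEIGHTS = {
        "years_since_last_visit": 0.30,    # elapsed time since last OCE/OPP visit
        "grant_size": 0.25,                # inherent risk rises with dollars at stake
        "complaints": 0.20,                # complaints of noncompliance received
        "oig_referrals": 0.15,             # referrals from the OIG
        "reporting_discrepancies": 0.10,   # e.g., case-closure reporting discrepancies
    }

    def risk_score(grantee):
        """Combine factor scores (each 0.0 to 1.0) into a weighted total."""
        return sum(WEIGHTS[factor] * grantee[factor] for factor in WEIGHTS)

    grantees = [
        {"name": "Grantee A", "years_since_last_visit": 1.0, "grant_size": 0.9,
         "complaints": 0.0, "oig_referrals": 0.0, "reporting_discrepancies": 0.2},
        {"name": "Grantee B", "years_since_last_visit": 0.2, "grant_size": 0.3,
         "complaints": 0.8, "oig_referrals": 1.0, "reporting_discrepancies": 0.5},
    ]

    # Rank grantees so limited review staff are directed to the highest risks.
    for grantee in sorted(grantees, key=risk_score, reverse=True):
        print(f"{grantee['name']}: {risk_score(grantee):.2f}")

Under a model of this kind, a large, long-unvisited grantee and a grantee with open OIG referrals both surface near the top of the list, and the basis for each selection is documented and consistently applied rather than left to informal judgment.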
We also noted that LSC did not perform timely follow-up on an investigation into an alleged instance of noncompliance referred to it by the OIG. In addition, LSC has not consistently provided grantees the opportunity to take corrective actions based on findings arising out of the OCE/OPP site visits in a timely manner. As of September 17, 2007, LSC had not yet issued to grantee management almost 19 percent (10 out of 53) of the 2006 LSC reports for which grantee site visits had been completed. In one case we noted that, for unexplained reasons, the review team presented negative findings in a positive light to a grantee and omitted some negative findings from its feedback. Effective grantee monitoring is especially important for LSC because it has limited options for sanctioning poorly performing grantees.

LSC’s fiscal reviews were not sufficient in scope to adequately assess grantee internal control or fiscal compliance for purposes of achieving effective oversight. In addition to IPA audits, LSC management relies on its site visits and grantee reviews as a key control activity to monitor grantee fiscal compliance. The fiscal component of an OCE review is limited, and the reviews we observed left out important follow-up to issues that surfaced during interviews and did not address outstanding IPA findings.

OCE staff use an OCE guide called Policies and Procedures for On-Site Fiscal Reviews for the fiscal component of OCE reviews. However, the guide is very limited in its scope. During our observation of an OCE site visit, we were told that no previsit preparation is needed and no formalized work program exists for the fiscal component of OCE reviews. The guide focuses on assessing compliance with selected regulatory provisions and is not a review of grantee internal controls, so it would not, for example, require a review of whether expenditures were properly authorized. In addition, although the fiscal component of an OCE review involves a compliance review of seven LSC regulations, the guide provides a framework for conducting fiscal reviews related to only three of the seven required regulations. Furthermore, the guide does not state an overall objective for the fiscal compliance review, nor does it provide a clear scope or detailed steps for performing the oversight visit.

The approach to OCE site visits relied almost entirely on grantee oral responses to questions and did not include follow-up lines of questioning or requests for supporting evidence. For example, the OCE analyst did not question Greensburg, Pennsylvania, grantee officials about a $30,000 payment to a subgrantee that lacked supporting documentation. When GAO asked the grantee Executive Director about the payment, she stated that the previous Executive Director entered into the subgrant agreement and she did not know anything about the agreement other than the fact that she continued to pay the bill every year. The Executive Director was not able to support the payment, nor did she know the reasons for the payment. The OCE visit did not include review of important documents, such as policy and procedure manuals, or verification of crucial financial information. In addition, OCE did not review invoices, perform internal control reviews, or scrutinize questionable items. Our review of information that OCE had also reviewed found that staff did not always follow up on questionable transactions.
In reviewing documents already reviewed by the OCE fiscal program analyst during a site visit to Las Vegas, we discovered an improper transaction involving the sale of the grantee’s building that was partially purchased using LSC funds. The analyst did not question the sale or the reason the LSC share of the proceeds from the sale was not returned to the LSC restricted funds account. The grantee had entered into an agreement to sell the building to a developer for $3.6 million. The developer gave the grantee $310,000 as earnest money, and the grantee withdrew $30,000 to use as earnest money towards the expected purchase of a new property. The remaining $280,000 was deposited in an escrow account. However, when the sale of the building fell through, the grantee transferred the funds from the escrow account into its unrestricted general funds account. According to an official at the grantee, this transfer was made to avoid the funds being subjected to LSC regulations. Furthermore, the grantee official stated that he considered it an “enhancement of money.” The OCE team, however, did not question this unusual transaction during the site visit, nor was it disclosed in the independent public accountant’s (IPA) annual financial audit. As a result of our bringing this transaction to the attention of OCE, LSC has concluded that the funds should have been designated and spent as LSC restricted income.

LSC’s reports of site visits are crucial to communicating and resolving instances of noncompliance and deficiencies in grantee internal controls. LSC, though, has not provided grantees the opportunity to address findings arising out of the OCE/OPP site visits in a timely manner because LSC has been slow to communicate its findings to them. As of September 2007, LSC had not yet issued to grantee management almost 19 percent (10 out of 53) of the 2006 LSC reports for which site visits had been completed. One such visit dates back to January 2006. LSC management stated that this occurs because there is not enough staff to conduct oversight visits and complete reports in a timely manner. Absent timely communication of findings from its site visits, grantee management lacks information about deficiencies and the corrective actions needed to address them and improve controls. Furthermore, LSC cannot monitor the status of grantee corrective actions.

During OCE compliance visits and in follow-up reviews, OCE attorneys and fiscal program analysts gather and analyze data on grantee compliance with both nonfinancial and financial LSC regulations and conduct an exit meeting with grantee management to present the findings. LSC then develops a report with recommendations that is to be provided to the grantee. OCE officials stated that although LSC policy requires reports to be issued within 90 days of site visits, they generally take much longer. One official also told us that OCE staff do not have the opportunity to complete one report before having to go on another site visit. The official told us that staff do summarize their findings in a memorandum, which is used internally at LSC. One fiscal program analyst told us that not only was he still working on a report that was due the previous year, but he also had reports outstanding from three other visits and was planning to visit three additional sites as well.
It will be important to clear up the backlog of unissued reports, especially since LSC’s Vice President for Programs and Compliance stated that LSC plans to increase OCE and OPP staff levels to increase the number of site visits per year.

We also found an instance where timely follow-up action was not taken when alleged instances of noncompliance and misuse of funds existed. On November 30, 2004, OCE received a referral from a state comptroller’s office, which reported that an LSC grantee’s Executive Director had misused LSC grant funds. OCE referred the case to the OIG. The OIG found that the Executive Director used LSC grant funds for time and travel unrelated to grantee operations and for contributions of LSC funds to other charitable organizations. On November 3, 2005, the OIG referred the results of its investigation back to OCE for follow-up action. LSC management told us that this case has yet to be resolved and attributed the delay to other priorities, including staff shortages.

In one case, we noted that, for unexplained reasons, the LSC review team presented mostly positive findings to a grantee during the exit conference even though the team had identified significant negative findings. Without a complete report of the instances of noncompliance and potential weaknesses found by the reviewers, grantee management was not afforded the opportunity to respond to those findings, nor did it have the information needed to correct the deficiencies in a timely manner. An exit conference is the standard forum for presenting site visit results prior to issuing the final report. It gives LSC the opportunity to inform grantee management, once the team has finished its planned interviews, tests, and other data-collection activities, about the findings and observations discovered during the visit. It also gives grantee management an opportunity to begin addressing problems promptly. However, in an exit conference held to close out a joint OCE-OPP oversight visit in Greensburg, Pennsylvania, we found that the attorneys and fiscal program analyst who performed the review focused on the few positive points that had been observed during the week-long visit. A number of findings that the review team had characterized as significant and in need of immediate attention during the previous day’s meeting to prepare for the exit conference were not communicated as such at the exit conference. In contrast to the limited discussion of needed improvements at the exit conference, the memorandum prepared for the LSC files to summarize the visit characterized the grantee as a weak program that faces many challenges. In effect, the exit conference focused on a few positive points rather than the substantial number of significant findings. LSC oversight staff cited staff shortages as the cause for some of the weaknesses in the quality of site visits. Currently, there are only two fiscal program analysts in OCE, and to ensure that a program analyst is available to participate in every OCE grantee visit, it is sometimes necessary to contract with an outside analyst for coverage.

Effective grantee monitoring is especially important for LSC because it has limited options for sanctioning or replacing poorly performing grantees. Although LSC has the authority to temporarily suspend funding or terminate all or part of a recipient’s grant, LSC rarely uses this authority. According to LSC, termination is seldom used because it is difficult to find a replacement organization to provide the service.
Although the LSC Act provides general enforcement authority to the corporation, LSC must take all practical steps to ensure the continued provision of legal assistance. After a final decision has been issued by LSC terminating financial assistance to a grantee, LSC must implement a new competitive bidding process for the affected service area. In fiscal year 2006, only 5 out of 71 potential grants received multiple bids during the grant renewal process. Because there are few competitors for LSC grants in a given service area, LSC’s competition process does not always provide a practical means of competitive selection when quality issues arise. Therefore, it is particularly important that LSC effectively and efficiently oversee its grantees to ensure that grant funds are used for intended purposes in accordance with laws and regulations, so that emerging grantee problems do not develop into serious weaknesses that would normally call for termination of funding.

Based on our limited reviews, we identified internal control weaknesses at 9 of the 14 grantees we visited that LSC could have identified with a more effective oversight review regimen. While control deficiencies at the grantees were the immediate cause of improper and potentially improper expenditures, the weaknesses in LSC’s oversight controls discussed above negatively affected the effectiveness of its monitoring of grantees’ controls and compliance. Among the control weaknesses we found were grantee use of LSC grant funds for expenditures with insufficient supporting documentation and for unusual contractor arrangements, alcohol purchases, employee interest-free loans, lobbying fees, late fees, and earnest money. The following two examples show the types of weaknesses we found at the grantees we visited.

At 7 out of the 14 grantees we visited, we identified systemic issues involving payments that lacked sufficient supporting documentation. At one grantee, many payments were processed for travel despite the lack of supporting documentation. The lack of documentation made it impossible to determine whether the expenditures were accurate, allowable, and appropriate. At another grantee, certain travel expenses appeared to be improper. At a third grantee, the grantee underwent a change in management in August 2006, and the current Executive Director was unable to locate many of the records and invoices related to payments made under the previous Executive Director. At a fourth grantee, we reviewed six monthly credit card payments and determined that less than 50 percent of the charges had any supporting documentation. At this same grantee, many of the credit card charges that had support lacked sufficient information to determine whether they were a valid use of grant funds. At a fifth grantee, we identified a $30,000 payment to a subgrantee that lacked any supporting documentation. When questioned about the payment, the grantee’s Executive Director stated that the previous Executive Director entered into the agreement and that she did not know anything about the agreement other than the fact that she continued to pay the bill every year.

At one grantee, we identified an individual providing services to the grantee as an information technology (IT) contractor who was paid approximately $750,000 between 2004 and 2006. The individual was engaged to operate the organization’s IT servers and maintain the network. The individual told us that he had worked at the grantee since 2001.
When we inquired as to why he did not work at the grantee as an employee, he stated that there were benefits to being an independent contractor. We noted the following facts that cause us to question the contractor arrangement:

● The contractor’s office and mailing address were located in the same office space as the grantee.

● The grantee could not locate its contract with the individual for 2005 and 2006.

● The contractor’s business card was identical to those of other employees working at the grantee.

● Two grantee employees worked for and were supervised by the contractor.

● The contractor indicated that the organization occasionally reimburses him for work-related training costs.

See appendix II for additional detailed information related to our findings at the grantees we visited. We presented LSC management with the results of our analysis supporting each of our findings related to our grantee visits. LSC management expressed commitment to take action to resolve these matters in coordination with the grantees.

Effective internal controls over grants and grantee oversight are critical to LSC as its very mission and operations rely extensively on grantees to provide legal services to people who otherwise could not afford to pay for adequate legal counsel. Effective grants-oversight procedures and monitoring, including a structured, systematic approach based on risk, are necessary given LSC’s limited resources and the scope of its responsibilities for many widely dispersed entities. In addition, the shared responsibilities for grantee oversight between LSC management and the OIG present risks that can be mitigated with clear lines of authority and responsibility and effective communications and coordination across oversight offices to avoid unnecessary duplication where possible. Finally, given the number of grantees, a sound risk-based approach for determining the timing and scope of site visits is key to prioritizing resource allocations to reflect the varying risks presented by grantees. To maximize the effectiveness of each site visit, LSC needs to conduct its oversight visits with sufficient scope to target areas of greatest risk, follow up on information and results of prior reviews and audits, and employ a review scope and approach that is tailored to specific risks. With high-quality targeted reviews and management that promptly informs grantees about findings and provides them an opportunity to correct them, risk can be mitigated.

To help LSC improve its internal control and oversight of grantees, we recommend that the LSC Board of Directors develop and implement policies that clearly delineate organizational roles and responsibilities for grantee oversight and monitoring, including grantee internal controls and compliance.

To help LSC improve its internal control and oversight of grantees, we also recommend that LSC management develop and implement the following:

● Policies and procedures for information sharing among the OIG, OCE, and OPP and coordination of OCE and OPP site visits.

● An approach for selecting grantees for internal control and compliance reviews that is founded on risk-based criteria, uses information and results from oversight and audit activities, and is consistently applied.
● Procedures to improve the effectiveness of the current LSC fiscal compliance reviews by revising LSC’s current guidelines to provide (1) a direct link to the results of OPP reviews and OIG and IPA audits, (2) guidance for performing follow-up on responses from grantees, and (3) examples of fiscal and internal control review procedures that may be appropriate based on individual risk factors and circumstances at grantees.

In addition to the above improvements to LSC’s oversight of grantees, we also recommend that LSC management perform follow-up on each of the improper or potentially improper uses of grant funds that we identified in this report.

We received written comments on a draft of this report from the Chairman on behalf of LSC’s Board of Directors and LSC’s President on behalf of LSC’s management (which are reprinted in apps. IV and V). Both the Chairman and the President expressed their full commitment to making the improvements noted in the report, concurred with all of our recommendations, and outlined the actions that LSC’s board and management plan to take in response to our recommendations. LSC management also separately provided technical comments that we incorporated into the report as appropriate.

LSC’s President also suggested three clarifications to our report. First, LSC management stated that “the draft report does not sufficiently address the fact that in 1996 Congress mandated that the LSC OIG have oversight responsibility for all audit work performed by independent public accountants (IPA) and the report should include a fuller discussion of the role of the IPAs in the financial oversight of grantees.” We added language to our report to augment our discussion of how the 1996 Act clarified that grantee financial statement and compliance audits are performed by IPAs and overseen by the OIG. While these audits serve as an accountability mechanism, they are performed after the fact and do not include all the grantee oversight objectives and procedures that would be expected of LSC management as part of its responsibilities to manage the affairs of the corporation, such as its grants program, and to monitor its grantees to ensure compliance with all applicable laws, regulations, and grant terms.

Second, LSC’s President stated that “the draft report supports its conclusion about limited coordination of the work of OCE and OPP with an isolated example from one grantee visit and fails to note the range of communication and coordination that actually exists between these offices.” We provide the example in the report to illustrate the effect on grantee oversight. Our conclusion about the need for improved communication and coordination was also based on interviews with LSC staff and our assessment of LSC’s control environment during the course of our work.

Third, the LSC President stated that while LSC can and will expand its criteria and use of a risk-based approach for assessing risk of weaknesses at its grantees, the draft report did not include all of the risk-based criteria that LSC currently uses in selecting grantees for on-site reviews. We modified our report to add language recognizing that LSC considers the results of grantee self-inspections and potential compliance issues identified in OPP program visits and other discussions in selecting grantees for on-site reviews.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
We will then send copies to other appropriate congressional committees, the President of LSC, and the LSC Board of Directors. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9471 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report contains the results of our review of internal controls over the Legal Services Corporation’s (LSC) grantee monitoring and oversight function and our limited visits to grantees. In performing our work, we (1) evaluated LSC’s control environment, information and communications, and risk assessment procedures related to its grants management and oversight organizations; (2) reviewed LSC’s control activities for monitoring grantee management and compliance; and (3) performed limited reviews at 14 grantees.

To evaluate LSC’s control environment, information and communications, and risk assessment procedures related to its grants management and oversight, we interviewed LSC and Office of Inspector General (OIG) management officials and reviewed board meeting minutes and LSC policy documents. To obtain an understanding of LSC’s internal control framework, including the oversight of grantees, we reviewed LSC policies and procedure manuals and reviewed LSC OIG, Office of Program Performance (OPP), and Office of Compliance and Enforcement (OCE) reports. In addition, we accompanied LSC staff on oversight visits to Las Vegas, Nevada, and Greensburg, Pennsylvania. During these visits, we reviewed grant agreements, observed LSC interviews with entity officials and external parties, evaluated grantee policies and procedure manuals, discussed the objectives of each visit with the LSC team leader, attended the grantee entrance and exit conferences, and observed testing performed by OCE.

We also conducted fieldwork at LSC, observed LSC staff on 2 of their grantee oversight visits, and conducted 12 of our own grantee site visits. Specifically, we systematically selected 8 of our grantee site visits using a dollar unit sample of LSC’s calendar year 2006 grants. The grantees selected were located in Oakland, California; Tampa, Florida; Chicago, Illinois; Detroit, Michigan; Camden, New Jersey; New York, New York; Cleveland, Ohio; and Seattle, Washington. In addition, in order to include additional small grantees in our site visits, we randomly selected 2 additional grantees with 2006 grant amounts below the median grant amounts for 2006. The grantees selected were Window Rock, Arizona, and Philadelphia, Pennsylvania. Finally, we selected Washington, D.C., as a pilot program for our visits due to its proximity to GAO, and we selected Casper, Wyoming, because it had received month-to-month funding as a disciplinary sanction in 2006. At all of these locations, we analyzed key records and interviewed entity officials to obtain an understanding of LSC’s internal control framework, including the oversight of grantees, and assessed compliance of expenditures. Our grantee site reviews were limited in scope and were not sufficient for expressing an opinion on the effectiveness of grantee internal controls or compliance.
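The dollar unit sampling technique mentioned above, in which each grant dollar has an equal chance of selection so that larger grants are more likely to be drawn, can be sketched as follows. This is a generic illustration of the method, not GAO's actual sample design; the grant amounts, sample size, and random seed are invented for the example.

    import random

    # Hypothetical illustration of dollar unit (monetary unit) sampling:
    # every grant dollar has an equal chance of selection, so larger grants
    # are more likely to be drawn. Amounts and seed are invented.

    def dollar_unit_sample(grants, n, seed=1):
        """Systematically select up to n grantees with probability
        proportional to grant dollars: step through cumulative dollars at a
        fixed interval from a random start, keeping the grantee whose
        cumulative range contains each selected dollar."""
        total = sum(grants.values())
        interval = total / n
        random.seed(seed)
        start = random.uniform(0, interval)
        targets = [start + i * interval for i in range(n)]

        selected, cumulative = [], 0.0
        items = iter(sorted(grants.items()))
        name, amount = next(items)
        for target in targets:
            while cumulative + amount < target:   # advance to the grantee
                cumulative += amount              # containing this dollar
                name, amount = next(items)
            if name not in selected:              # a large grant can be hit twice
                selected.append(name)
        return selected

    grants = {"Grantee A": 13_000_000, "Grantee B": 450_000,
              "Grantee C": 2_100_000, "Grantee D": 900_000}
    print(dollar_unit_sample(grants, n=2))

Because selection probability is proportional to dollars, such a sample concentrates audit effort where the most grant money is at stake, which is why it suits expenditure testing better than a simple random draw of grantee names.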
To assess the appropriateness of grantee expenditures, we performed expenditure testing during our grantee site visits. The testing included reviewing invoices, vendor lists, and general ledger details. The appropriateness of grantee expenditures was based on the grant agreements and applicable laws. We classified an expenditure as improper or potentially improper if it was not supported by documentation sufficient to enable an objective third party to determine that it was a valid use of grant funds, or if it was specifically prohibited by applicable laws and regulations. For the findings we classified as improper or potentially improper, we found, as applicable, one or more of the following: (1) systemic issues with insufficient supporting documentation for the goods or services LSC money was paying for, (2) unusual contractor arrangements without sufficient support or justification, and (3) improper use of grant funds. We conducted our work from September 2006 through September 2007 in accordance with generally accepted government auditing standards.

Examples of our findings from our limited visits to 14 grantees are presented in this appendix. These examples are not all-inclusive of the findings we identified and are not necessarily representative of the population of expenditures from which they were selected.

We identified three grantees that used Legal Services Corporation (LSC) funds to purchase alcoholic beverages. LSC grantees are required by law to use LSC funds only for allowable purposes, and LSC management has issued implementing regulations on cost standards and procedures. LSC regulations do not directly address alcoholic beverages, but they permit LSC management to resolve issues arising from questioned costs by looking to Office of Management and Budget (OMB) circulars, such as OMB Circular No. A-122, Cost Principles for Non-Profit Organizations, when such circulars contain relevant policies or criteria that are not inconsistent with LSC statutes, regulations, and guidance. Appendix B of OMB Circular No. A-122, Selected Items of Cost, provides guidance on the allowability of the direct or indirect cost of the selected items. Appendix B’s item no. 3, Alcoholic Beverages, states that the costs of alcoholic beverages are unallowable and provides for no exceptions. Because this guidance is not mandatory for LSC, LSC management must make the final decision on whether alcohol purchases are allowable.

The Executive Director at one grantee stated that the program would never use LSC funds to purchase alcohol during trips or other organizational functions. When we provided her copies of invoices showing alcohol purchases, she indicated that she was not aware of the expenditures and would have to investigate. She later explained that one of the invoices, totaling $2,800, was a payment to another organization for the cost of beer and wine for an annual spring reception held for college student interns. In addition, she explained that the $128 in alcohol on a second invoice was part of a $725 staff dinner party in Washington, D.C., and that, to the best of her knowledge, those funds were reimbursed to the grantee. At another grantee, we identified invoices containing wine purchases for company events. The Executive Director immediately recognized that this was an issue and stated that he would ensure that LSC funds are no longer used to purchase alcohol.
We identified a grantee that was using LSC funds to provide interest-free loans to employees upon request as an employee benefit. The uses of the loans included, but were not limited to, paying college tuition, making down payments on personal residences, and purchasing personal computers. According to the grantee’s Controller, employees are not required to sign a contract, but the grantee does try to have the employees pay off the loans through payroll deductions to ensure collection. Furthermore, she stated that the total amount of loans outstanding at any one time typically does not exceed $10,000. When asked to provide support for the loans, the Controller stated that she did not believe any specific supporting documentation existed. During our site visit, the Controller prepared a list of employee loans outstanding as of December 31, 2006. Because controls over the loans were nonexistent, we were unable to determine the completeness of this list. LSC grant funds are required by law to be used to support the provision of legal assistance in civil matters to low-income people for everyday legal problems. We identified no authority to use LSC grant funds for interest-free or other loans to grantee employees.

We identified two instances in which one grantee was using LSC funds to pay for lobbyist registration fees. The Legal Services Corporation Act imposes a broad limitation on LSC grantees using LSC funds in a manner that would directly or indirectly influence legislation or other official action at the local, state, or federal government levels and requires LSC management to ensure that these limitations are not violated. With only limited exceptions, LSC grantees cannot use LSC funds to pay for any costs related to lobbying, including lobbying registration fees. The registration fee in each instance we identified was $50. The Executive Director of the program agreed that in this instance using LSC funds for lobbyist registration fees was a violation of the grant agreement. In addition, he stated that he would take additional steps to ensure that LSC funds are no longer used for expenses related to lobbying.

Three of the grantees that we visited used LSC funds to pay late fees on overdue accounts for goods and services purchased. LSC regulations on cost standards and procedures provide that expenditures by a grantee are allowable only if the grantee can demonstrate that the expenditures were reasonable and necessary for performance of the grant, meaning that they were the type that would have been incurred by a prudent person in similar circumstances at the time the decision to incur the cost was made. One grantee routinely failed to make payments on time, creating tension with several of its vendors. We found numerous communications from vendors regarding late payments. In one instance, the vendor sent a third notice of action to this grantee stating that the rent for the grantee’s unit or office space remained unpaid. The vendor threatened to place a lien against the goods in the unit and sell them at a public auction if the overdue balance was not paid within 15 days. Systemic failure to pay bills on time is an indication of weak internal controls. All three Executive Directors agreed that there was no excuse for the inability to make payments on time. We view payments made under these circumstances as imprudent and unreasonable and, therefore, unallowable.
We discovered an improper transaction at one grantee involving the sale of a grantee building that was purchased using both LSC and non-LSC funds. The grantee had entered into an agreement to sell the building to a developer for $3.6 million. The developer gave the grantee $310,000 as earnest money, and the grantee withdrew $30,000 to use as earnest money towards the expected purchase of a new building. The remaining $280,000 was deposited in an escrow account. However, when the sale of the building fell through, the grantee transferred the funds from the escrow account into its unrestricted general funds account. According to an official at the grantee, this transfer was made to avoid the funds being subjected to LSC regulations. Furthermore, the grantee official stated that he considered it an “enhancement of money.” As a result of our bringing this transaction to the attention of the Office of Compliance and Enforcement, LSC has concluded that the funds should have been designated and spent as LSC restricted income.

In addition to the contact named above, Paul Caban, Blake M. Carpenter, Bonnie L. Derby, Lisa M. Galvan-Trevino, Maxine Hattery, Erik S. Huff, Keith H. Kronin, and Margaret Mills made key contributions to this report. F. Abe Dymond and Lauren S. Fassler provided technical assistance.

The Legal Services Corporation (LSC) was created as a private nonprofit to support legal assistance for low-income people to resolve their civil legal matters and relies heavily on federal appropriations. In 2006, LSC distributed most of its $327 million in grants to support such assistance. Effective internal controls over grants and oversight of grantees are critical to LSC’s mission. GAO was asked to determine whether LSC’s internal controls over grants management and oversight processes provide reasonable assurance that grant funds are used for their intended purposes. GAO analyzed key records and interviewed agency officials to obtain an understanding of LSC’s internal control framework, including the monitoring and oversight of grantees, and performed limited reviews of internal controls and compliance at 14 grantees.

GAO found weaknesses in LSC’s internal controls over grants management and oversight of grantees that negatively affect LSC’s ability to provide assurance that grant funds are being used for their intended purposes in compliance with applicable laws and regulations. Effective internal controls over grants and grantee oversight are critical to LSC as its very mission and operations rely extensively on grantees to provide legal services to people who otherwise could not afford to pay for adequate legal counsel. GAO also found poor fiscal practices and improper and potentially improper expenditures at grantees it visited. Weaknesses in LSC’s control environment include the lack of clear definition in the responsibilities of two of the three organizational units that oversee the work of grantees. GAO also found that communication between oversight units and coordination of grantee site visits are not sufficient to prevent gaps, duplication of effort, or both. The timing and scope of site visits are not based on a systematic analysis of the risk of noncompliance or financial control weakness across LSC’s 138 grantees, so LSC cannot determine whether its resources are being used effectively and efficiently to mitigate risk among its grantees.
LSC control activities performed in the monitoring of grantee internal control were not sufficient in scope to achieve effective oversight, and GAO noted implementation weaknesses. For example, in the site visits GAO observed, staff did not follow up on questionable transactions and relied heavily on information obtained through interviews. Feedback to grantees was often delayed, preventing grantees from correcting deficiencies in a timely manner. As of September 2007, LSC had not yet issued reports to grantee management for about 19 percent (10 out of 53) of the 2006 site visits. LSC grantee reviews missed potential control deficiencies at grantees that could have been detected with more effective oversight, as evidenced by weaknesses GAO found at 9 of the 14 grantee sites it visited. While control deficiencies at the grantees were the immediate cause of the problems GAO found, weaknesses in LSC’s controls over its oversight of grantees did not assure effective monitoring of grantee controls and compliance. Among the questionable expenditures GAO found were grantee use of funds for expenditures with insufficient supporting documentation, unusual contractor arrangements, alcohol purchases, employee interest-free loans, lobbying fees, late fees, and earnest money.
US-VISIT is a governmentwide program intended to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. The scope of the program includes the pre-entry, entry, status, and exit of hundreds of millions of foreign national travelers who enter and leave the United States at over 300 air, sea, and land ports of entry, as well as analytical capabilities spanning this overall process. To achieve its goals, US-VISIT uses biometric information (digital fingerscans and photographs) to verify identity and screen persons against watch lists.

In many cases, the US-VISIT process begins overseas, at U.S. consular offices, which collect biometric information from applicants for visas and check this information against a database of known criminals and suspected terrorists. When a visitor arrives at a port of entry, the biometric information is used to verify that the visitor is the person who was issued the visa or other travel documents. Ultimately, visitors are to confirm their departure by having their visas or passports scanned and undergoing fingerscanning. (Currently, at a few pilot sites, departing visitors are asked to undergo these exit procedures.) The exit confirmation is added to the visitor’s travel records to demonstrate compliance with the terms of admission to the United States.

Other key US-VISIT functions include

● collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States;

● identifying foreign nationals who (1) have overstayed or violated the terms of their admission; (2) may be eligible to receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials;

● detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and

● facilitating information sharing and coordination within the immigration and border management community.

In July 2003, DHS established a program office with responsibility for managing the acquisition, deployment, operation, and sustainment of the US-VISIT system and its associated supporting people (e.g., Customs and Border Protection officers), processes (e.g., entry/exit policies and procedures), and facilities (e.g., inspection booths and lanes). As of October 2005, about $1.4 billion has been appropriated for the program, and according to program officials, about $962 million has been obligated to acquire, develop, deploy, operate, and maintain US-VISIT entry capabilities, and to test and evaluate exit capability options.

DHS plans to deliver US-VISIT capability in four increments, with Increments 1 through 3 being interim, or temporary, solutions that fulfill legislative mandates to deploy an entry/exit system, and Increment 4 being the implementation of a long-term vision that is to incorporate improved business processes, new technology, and information sharing to create an integrated border management system for the future. In Increments 1 through 3, the program is building interfaces among existing (“legacy”) systems, enhancing the capabilities of these systems, and deploying these capabilities to air, sea, and land ports of entry. These first three increments are to be largely acquired and implemented through existing system contracts and task orders.
In May 2004, DHS awarded an indefinite-delivery/indefinite-quantity prime contract to Accenture and its partners. According to the contract, the prime contractor will help support the integration and consolidation of processes, functionality, and data, and it will develop a strategy to build on the technology and capabilities already available to produce the strategic solution, while also assisting the program office in leveraging existing systems and contractors in deploying the interim solutions.

Increment 1 concentrates on establishing capabilities at air and sea ports of entry. It is divided into two parts—1A and 1B.

● Increment 1A (air and sea entry) includes the electronic capture and matching of biographic and biometric information (two digital index fingerscans and a digital photograph) for selected foreign nationals, including those from visa waiver countries. Increment 1A was deployed on January 5, 2004, through the modification of pre-existing systems. These modifications accommodated the collection and maintenance of additional data fields and established interfaces required to share data among DHS systems in support of entry processing at 115 airports and 14 seaports.

● Increment 1B (air and sea exit) involves the testing of exit devices to collect biometric exit data for select foreign nationals. Three exit alternatives were pilot tested at 11 air and sea ports of entry. These alternatives are as follows.

● Kiosk—A self-service device (including a touch screen interface, document scanner, finger scanner, digital camera, and receipt printer) that captures a digital photograph and fingerprint and prints out an encoded receipt.

● Mobile device—A hand-held device that is operated by a workstation attendant and includes a document scanner, finger scanner, digital camera, and receipt printer to capture a digital photograph and fingerprint.

● Validator—A hand-held device that is used to capture a digital photograph and fingerprint, which are then matched to the photograph and fingerprint captured via the kiosk and encoded in the receipt.

Increment 2 focuses primarily on extending US-VISIT to land ports of entry. It is divided into three parts—2A, 2B, and 2C.

● Increment 2A (air, sea, and land entry) includes the capability to biometrically compare and authenticate valid machine-readable visas and other travel and entry documents at all ports of entry. Increment 2A was deployed on October 23, 2005, according to program officials. It also includes the deployment by October 26, 2006, of the capability to read biometrically enabled passports from visa waiver countries.

● Increment 2B (land entry) redesigned the Increment 1 entry solution and expanded it to the 50 busiest land ports of entry. The process for issuing entry/exit forms was redesigned to enable the electronic capture of biographic, biometric (unless the traveler is exempt), and related travel documentation for arriving travelers. This increment was deployed to the busiest 50 U.S. land border ports of entry on December 29, 2004. Before Increment 2B, all information on the entry/exit forms was hand written. The redesigned process provides for electronically capturing the biographic data on the entry/exit form. In some cases, Customs and Border Protection (CBP) officers enter the data electronically and then print the completed form.
● Increment 2C (land entry and exit) is to provide the capability to automatically, passively, and remotely record the entry and exit of covered individuals using radio frequency (RF) technology tags at primary inspection and exit lanes. This tag includes a unique ID number that is to be embedded in each entry/exit form, thus associating a unique number with a US-VISIT record for the person holding that form. One of DHS’s goals in using this technology is to improve the ability to collect entry and exit information. In August 2005, the program office deployed the technology to three land ports of entry to verify the feasibility of using passive RF technology to record traveler entries and exits from the number embedded in the entry/exit form. The results of this demonstration are to be reported in February 2006.

Increment 3 extended Increment 2B (land entry) capabilities to 104 land ports of entry; this increment was essentially completed as of December 19, 2005.

Increment 4 is the strategic US-VISIT program capability, which program officials stated will likely consist of a further series of incremental releases or mission capability enhancements that will support business outcomes. The program reports that it has worked with its prime contractor and partners to develop this overall vision for the immigration and border management enterprise.

All increments before Increment 4 depend on the interfacing and integration of existing systems, including the following:

● The Arrival and Departure Information System (ADIS) stores

● noncitizen traveler arrival and departure data received from air and sea carrier manifests,

● arrival data captured by CBP officers at air and sea ports of entry,

● I-94 issuance data captured by CBP officers at Increment 2B land ports of entry,

● departure information captured at US-VISIT biometric departure pilot (air and sea) locations,

● pedestrian arrival information and pedestrian and vehicle departure information captured at Increment 2C port of entry locations, and

● status update information provided by SEVIS and CLAIMS 3 (described below).

ADIS provides record matching, query, and reporting functions.

● The passenger processing component of the Treasury Enforcement Communications System (TECS) includes two systems: the Advance Passenger Information System (APIS), a system that captures arrival and departure manifest information provided by air and sea carriers, and the Interagency Border Inspection System, a system that maintains lookout data and interfaces with other agencies’ databases. CBP officers use these data as part of the admission process. The results of the admission decision are recorded in TECS and ADIS.

● The Automated Biometric Identification System (IDENT) collects and stores biometric data about foreign visitors.

● The Student and Exchange Visitor Information System (SEVIS) and the Computer Linked Application Information Management System (CLAIMS 3) contain information on foreign students and foreign nationals who request benefits, such as change of status or extension of stay.

Some of these systems, such as IDENT, are managed by the program office, while some systems are managed by other organizational entities within DHS. For example, TECS is managed by CBP, SEVIS is managed by Immigration and Customs Enforcement, CLAIMS 3 is under United States Citizenship and Immigration Services, and ADIS is jointly managed by CBP and US-VISIT.
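Because ADIS's record matching underpins the program's ability to identify travelers who have overstayed, it may help to sketch the basic matching logic. The following is a simplified, hypothetical illustration only; the record layouts, field names, and data are invented for the example and do not reflect the actual ADIS schema or matching rules.

    from datetime import date

    # Simplified, hypothetical sketch of the entry/exit record matching that
    # ADIS-style matching and reporting functions support: pair arrivals with
    # departures and flag apparent overstays. Field names and data are
    # invented; they do not reflect the actual ADIS schema or matching rules.

    arrivals = [
        {"traveler_id": "T1001", "arrived": date(2005, 3, 1),
         "admitted_until": date(2005, 9, 1)},
        {"traveler_id": "T1002", "arrived": date(2005, 4, 15),
         "admitted_until": date(2005, 7, 15)},
    ]
    departures = [
        {"traveler_id": "T1001", "departed": date(2005, 8, 20)},
        # No departure record exists for T1002.
    ]

    def apparent_overstays(arrivals, departures, as_of):
        """Flag travelers with no recorded departure by the authorized end
        of stay, or with a recorded departure after it."""
        departed = {d["traveler_id"]: d["departed"] for d in departures}
        flagged = []
        for record in arrivals:
            out = departed.get(record["traveler_id"])
            if ((out is None and as_of > record["admitted_until"])
                    or (out is not None and out > record["admitted_until"])):
                flagged.append(record["traveler_id"])
        return flagged

    print(apparent_overstays(arrivals, departures, as_of=date(2005, 12, 31)))
    # Prints ['T1002']: no departure was recorded by the authorized date.

Even this toy version shows why exit data collection matters: a traveler who departs through a port without departure capture is indistinguishable from an overstay, which is one reason the program has been testing biometric exit capabilities.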
US-VISIT also interfaces with other, non-DHS systems for relevant purposes, including watch list updates and checks to determine whether a visa applicant has previously applied for a visa or currently has a valid U.S. visa. In particular, US-VISIT receives biographic and biometric information from the Department of State’s Consular Consolidated Database as part of the visa application process, and returns fingerscan information and watch list changes.

Over the last 3 years, US-VISIT program officials and supporting contractor staff have worked to meet challenging legislative time frames, as well as a DHS-imposed requirement to use biometric identifiers. Under law, for example, DHS was to create an electronic entry and exit system to screen and monitor the stay of foreign nationals who enter and leave the United States and implement the system at (1) air and sea ports of entry by December 31, 2003, (2) the 50 highest-volume land ports of entry by December 31, 2004, and (3) the remaining ports of entry by December 31, 2005. It was also to provide the means to collect arrival/departure data from biometrically enabled and machine-readable travel documents at all ports of entry.

To the program office’s credit, it has largely met its obligations relative to an entry capability. For example, on January 5, 2004, it deployed and began operating most aspects of its planned entry capability at 115 airports and 14 seaports, and added the remaining aspects in February 2005. During 2004, it also deployed and began operating this entry capability in the secondary inspection areas of the 50 highest-volume land ports of entry. As of December 19, 2005, it had deployed and begun operating its entry capability at all but 1 of the remaining 104 land ports of entry. The program has also been working to define feasible and cost-effective exit solutions, including technology feasibility testing at 3 land ports of entry and operational performance evaluations at 11 air and sea ports of entry.

Moreover, the development and deployment of this entry capability has occurred during a period of considerable organizational change, starting with the creation of DHS from 23 separate agencies in early 2003, followed by the establishment of a US-VISIT program office shortly thereafter—which was only about 5 months before it had to meet its first legislative milestone. Compounding these program challenges was the fact that the systems that were to be used in building and deploying an entry capability were managed and operated by a number of the separate agencies that had been merged to form the new department, each of which was governed by different policies, procedures, and standards.

As a result of the program’s efforts to deploy and operate an entry capability, DHS reports that it has been able to apprehend, and prevent the entry of, hundreds of criminal aliens: as of March 2005, DHS reported that more than 450 people with records of criminal or immigration violations had been prevented from entering. For example, its biometric screening prevented the reentry of a convicted felon, previously deported, who was attempting to enter under an alias; standard biographic record checks using only names and birth dates would likely have cleared the individual. Another potential consequence, although difficult to demonstrate, is the deterrent effect of having an operational entry capability.
Although deterrence is not an expressly stated goal of the program, officials have cited it as a potential byproduct of having a publicized capability at the border to screen entry on the basis of identity verification and matching against watch lists of known and suspected terrorists. Accordingly, the deterrent value of knowing that unwanted entry may be thwarted and the perpetrators caught is arguably a layer of security that should not be overlooked.

A prerequisite for prudent investment in programs is having reasonable assurance that a proposed course of action is the right thing to do, meaning that it properly fits within the larger context of an agency’s strategic plans and related operational and technology environments, and that the program will produce benefits in excess of costs over its useful life. We have made recommendations to DHS aimed at ensuring that this is in fact the case for US-VISIT, and the department has taken steps intended to address our recommendations. These steps, however, have yet to produce sufficient analytical information to demonstrate that US-VISIT as defined is the right solution. Without this knowledge, investment in the program cannot be fully justified.

Agency programs need to properly fit within a common strategic context or frame of reference governing key aspects of program operations—e.g., what functions are to be performed by whom, when and where they are to be performed, what information is to be used to perform them, and what rules and standards will govern the application of technology to support them. Without a clear operational context for US-VISIT, the risk is increased that the program will not interoperate with related programs and thus will not cost-effectively meet mission needs.

In September 2003, we reported that DHS had not defined key aspects of the larger homeland security environment in which US-VISIT would need to operate. For example, certain policy and standards decisions had not been made, such as whether official travel documents would be required for all persons who enter and exit the country—including U.S. and Canadian citizens—and how many fingerprints would be collected. Nonetheless, program officials were making assumptions and decisions at that time that, if they turned out to be inconsistent with subsequent policy or standards decisions, would require US-VISIT rework. To minimize the impact of such changes, we recommended that DHS clarify the context in which US-VISIT is to operate.

About 28 months later, defining this operational context remains a work in progress. For example, the program’s relationships and dependencies with other closely allied initiatives and programs are still unclear. According to the US-VISIT Chief Strategist, an immigration and border management strategic plan was drafted in March 2005 that shows how US-VISIT is aligned with DHS’s organizational mission and that defines an overall vision for immigration and border management. According to this official, the vision provides for an immigration and border management enterprise that unifies multiple internal departmental and other external stakeholders with common objectives, strategies, processes, and infrastructures. As of December 2005, however, we were told that this strategic plan has not been approved. In addition, since the plan was drafted, DHS has reported that other relevant initiatives have been undertaken.
For example:

● The DHS Security and Prosperity Partnership of North America is to, among other things, establish a common approach to securing the countries of North America—the United States, Canada, and Mexico—by, for example, implementing a border facilitation strategy to build capacity and improve the legitimate flow of people and cargo at our shared borders.

● The DHS Secure Border Initiative is to implement a comprehensive approach to securing our borders and combating illegal immigration.

According to the Chief Strategist, portions of the strategic plan are being incorporated into these initiatives, but these initiatives and their relationships with US-VISIT are still being defined.

Similarly, the mission and operational environment of US-VISIT are related to those of another major DHS program—the Automated Commercial Environment (ACE), which is a new trade processing system that is planned to support the movement of legitimate imports and exports and to strengthen border security. In addition, both US-VISIT and ACE could potentially use common IT infrastructures and services. As we reported in February 2005, the program office recognized these similarities, but managing the relationship between the two programs had not been a priority matter. Accordingly, we recommended that DHS give priority to understanding the relationships and dependencies between the US-VISIT and ACE programs.

Since our recommendation, the US-VISIT and ACE managers have formed an integrated project team to, among other things, ensure that the two programs are programmatically and technically aligned. Program officials stated that the team has met three times since April 2005 and plans to meet on a quarterly basis going forward. The team has discussed potential areas of focus and agreed to three areas: RF technology, program control, and data governance. However, it does not have an approved charter, and it has not developed explicit plans or milestone dates for identifying the dependencies and relationships between the two programs. It is important that DHS define the operational context for US-VISIT, as well as its relationships and dependencies with closely allied initiatives and such programs as ACE. The more time it takes to settle these issues, the more likely it is that extensive and expensive rework will be needed at a later date.

Prudent investment also requires that an agency have reasonable assurance that a proposed program will produce mission value commensurate with expected costs and risks. Thus far, DHS has yet to develop an adequate basis for knowing that this is the case for its early US-VISIT increments. Without this knowledge, it cannot adequately ensure that these increments are justified.

Assessments of costs and benefits are extremely important, because the decision to invest in any capability should be based on reliable analyses of return on investment. According to OMB guidance, individual increments of major systems are to be individually supported by analyses of benefits, cost, and risk. In addition, OMB guidance on the analysis needed to justify investments states that such analysis should meet certain criteria to be considered reasonable. These criteria include, among other things, comparing alternatives on the basis of net present value and conducting uncertainty analyses of costs and benefits. (DHS has also issued guidance on such economic analyses, which is consistent with that of OMB.)
Without reliable analyses, an organization cannot be reasonably assured that a proposed investment is a prudent and justified use of resources. In September 2003, we reported that the program had not assessed the costs and benefits of Increment 1. Accordingly, we recommended that DHS perform such assessments for future increments. In February 2005, we reported that although the program office had developed a cost-benefit analysis for Increment 2B (which provides the capability for electronic collection of traveler information at land ports of entry), it had again not justified the investment, because its treatment of both benefits and costs was unclear and insufficient. Further, we reported that the cost estimates on which the cost-benefit analysis was based were of questionable reliability, because effective cost-estimating practices were not followed. Accordingly, we recommended that DHS follow certain specified practices for estimating the costs of future increments.

Since our February 2005 report, the program has developed a cost-benefit analysis for Increment 1B (which is to provide exit capabilities at air and sea ports of entry). The latest version of this analysis, dated June 23, 2005, identifies potential costs and benefits for three exit solutions at air and sea ports of entry and provides a general rationale for the viability of the three alternatives described. This latest analysis meets some but not all of the OMB criteria for economic analyses. For example, it explains why the investment was needed, and it shows that at least two alternatives to the status quo were considered. However, it does not include, for example, a complete uncertainty analysis for the three exit alternatives evaluated. That is, it does not include a sensitivity analysis for the three alternatives, which is a major part of an uncertainty analysis. (A sensitivity analysis is a quantitative assessment of the effect that a change in a given assumption—such as unit labor cost—will have on net present value.) A complete analysis of uncertainty is important because it provides decision makers with a perspective on the potential variability of the cost and benefit estimates should the facts, circumstances, and assumptions change.

In addition, the quality of a cost-benefit analysis is dependent on the quality of the cost assessments on which it is based. However, the cost estimate associated with the June 2005 cost-benefit analysis for the three exit solutions (Increment 1B) did not meet key criteria for reliable cost estimating. For example, it did not include a detailed work breakdown structure. A work breakdown structure serves to organize and define the work to be performed, so that associated costs can be identified and estimated. Thus, it provides a reliable basis for ensuring that the estimates include all relevant costs.

Program officials stated that they recognize the importance of developing reliable cost estimates and have initiated actions to more reliably estimate the costs of future increments. For example, the program has chartered a cost analysis process action team, which is to develop, document, and implement a cost analysis policy, process, and plan for the program. Program officials also stated that they have hired additional contracting staff with cost-estimating experience. Strengthening the program's cost-estimating capability is extremely important.
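To illustrate how a work breakdown structure supports cost estimating, a minimal sketch follows; the elements and dollar figures are hypothetical and are not drawn from US-VISIT estimates.

```python
# Minimal sketch: a work breakdown structure (WBS) decomposes the work so
# that each element carries its own cost estimate and the total is the
# roll-up of the parts. Elements and figures are hypothetical.
wbs = {
    "Exit solution": {
        "Design": 2.0,       # estimated costs in millions of dollars
        "Hardware": 5.5,
        "Software": 4.0,
        "Deployment": 3.0,
        "Training": 1.0,
        "Operations": 6.5,
    }
}

total = sum(wbs["Exit solution"].values())
print(f"Total estimated cost: ${total:.1f}M")  # Total estimated cost: $22.0M
```

Because every element of work appears explicitly, a reviewer can see at a glance whether any relevant cost has been left out of the estimate.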
The absence of reliable cost estimates impedes, among other things, both the development of reliable economic justification for program decisions and the effective measurement of performance. Program decisions and planning depend on adequate analyses and assessments of program impacts and options. The department has begun to develop such analyses, but some of these, such as its analyses of the operational impact of Increment 2B and of the options for its exit capability, do not yet provide an adequate basis for investment and deployment decisions. We reported in May 2004 that the program had not assessed its workforce and facility needs for Increment 2B (which provides the capability for electronic collection of traveler information at land ports of entry). Because of this, we questioned the validity of the program’s assumptions and plans concerning workforce and facilities, since the program lacked a basis for determining whether its assumptions were correct and thus whether its plans were adequate. Accordingly, we recommended that DHS assess the full impact of Increment 2B on workforce levels and facilities at land ports of entry, including performing appropriate modeling exercises. Seven months later, the program office evaluated Increment 2B operational performance, with the stated purpose of determining the effectiveness of Increment 2B performance at the 50 busiest land ports of entry. For this evaluation, the program office established a baseline for comparing the average times to issue and process entry/exit forms at 3 of these 50 ports of entry. The program office then conducted two evaluations of the processing times at the three ports, first after Increment 2B was deployed as a pilot, and next 3 months later, after it was deployed to all 50 ports of entry. The evaluation results showed that the average processing times decreased for all three sites. Program officials concluded that these results supported their workforce and facilities planning assumptions that no additional staff was required to support deployment of Increment 2B and that minimal modifications were required at the facilities. However, the scope of the evaluations is not sufficient to satisfy the evaluations’ stated purpose or our recommendation for assessing the full impact of 2B. For example, the selection of the three sites, according to program officials, was based on a number of factors, including whether the sites already had sufficient staff to support the pilot. Selecting sites based on this factor could affect the results, and it presupposes that not all ports of entry have the staff needed to support 2B. In addition, evaluation conditions were not always held constant: specifically, fewer workstations were used to process travelers in establishing the baseline processing times at two of the ports of entry than were used during the pilot evaluations. Moreover, CBP officials from a land port of entry that was not an evaluation site (San Ysidro) told us that US-VISIT deployment has not reduced but actually lengthened processing times. (San Ysidro processes the highest volume of travelers of all land ports of entry.) Although these officials did not provide specific data to support their statement, their perception nevertheless raises questions about the potential impact of Increment 2B on the 47 sites that were not evaluated. 
Similarly, in February 2005, we reported that US-VISIT had not adequately planned for evaluating the alternatives for Increment 1B (which provides exit capabilities at air and sea ports of entry) because the scope and timeline of its exit pilot evaluation were compressed. Accordingly, we recommended that DHS reassess plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions. Over the last 11 months, the program office has taken actions to expand the scope and time frames of the pilot. For example, it increased the number of ports of entry in the pilot from 5 to 11, and it also extended the time frame by about 7 months. Further, according to program officials, they were able to achieve the target sample sizes necessary to have a 95 percent confidence level in their results.

Nevertheless, questions remain about whether the exit alternatives have been adequately evaluated to permit selection of the best exit solution for national deployment. For example, one of the criteria against which the alternatives were evaluated was the rate of traveler compliance with US-VISIT exit policies (that is, foreign travelers providing information as they exit the United States). However, across the three alternatives, the average compliance with these policies was only 24 percent, which raises questions as to the alternatives' effectiveness. The evaluation report cites several reasons for the low compliance rate, including that compliance during the pilot was voluntary. The report further concludes that national deployment of the exit solution will not meet the desired compliance rate unless the exit process incorporates an enforcement mechanism, such as not allowing persons to reenter the United States if they do not comply with the exit process. Although an enforcement mechanism might indeed improve compliance, program officials stated that no formal evaluation has been conducted of enforcement mechanisms or their possible effect on compliance. The program director agreed that additional evaluation is needed to assess the impact of implementing potential enforcement mechanisms and plans to conduct such an evaluation.

Establishing effective program management capabilities is important to ensure that an organization is going about delivering a program in the right way. Accordingly, we have made recommendations to establish specific people and process management capabilities. While DHS is making progress in implementing many of our recommendations in this area, this progress has often been slow.

One area in which DHS has made good progress is in implementing our recommendations to establish the human capital capabilities necessary to manage US-VISIT. In September 2003, we reported that the US-VISIT program had not fully staffed or adequately funded its program office or defined specific roles and responsibilities for program office staff. Our prior experience with major acquisitions like US-VISIT shows that to be successful, they need, among other things, to have adequate resources, and program staff need to understand what they are to do, how they relate to each other, and how they fit in their organization. In addition, prior research and evaluations of organizations show that effective human capital management can help agencies establish and maintain the workforce they need to accomplish their missions.
Accordingly, we recommended that DHS ensure that human capital and financial resources are provided to establish a fully functional and effective program office, and that the department define program office positions, roles, and responsibilities. We also recommended that DHS develop and implement a human capital strategy for the program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. DHS has implemented our recommendation that it define program office positions, roles, and responsibilities, and it has partially completed our two other people-related recommendations. It has filled most of its planned government positions and is on the way to filling the rest, and it has filled all of its planned contractor positions. However, the program completed a workforce analysis in February 2005 and requested additional positions based on the results. Securing these necessary resources will be a continuing challenge. In addition, as we reported in February 2005, the program office, working with the Office of Personnel Management, developed a draft human capital plan that employed widely accepted human capital planning tools and principles (for example, it included an action plan that identified activities, their proposed completion dates, and the office responsible for the action). In addition, the program office had completed some of the activities in the plan. Since then, the program office has finalized the human capital plan, completed more activities, and formulated plans to complete others (for example, according to the program office, it has completed an analysis of its workforce to determine diversity trends, retirement and attrition rates, and mission-critical and leadership competency gaps, and it has plans to complete an analysis of workforce data to maintain strategic focus on preserving the skills, knowledge, and leadership abilities required for the US-VISIT program’s success). Program officials also said that the reason they have not completed several activities in the plan is that these activities are related to the department’s new human capital initiative, MAXHR. Because this initiative is to include the development of departmentwide competencies, program officials told us that it could potentially affect ongoing program activities related to competencies. As a result, these officials said that they are coordinating these activities closely with the department as it develops and implements this new initiative, which is currently being reviewed by the DHS Deputy Secretary. DHS’s progress in implementing our human capital recommendations should help ensure that it has sufficient staff with the right skills and abilities to successfully execute the program. Having such staff has been and will be particularly important in light of the program’s more limited progress to date in establishing program management process capabilities. DHS’s progress in establishing effective processes governing how program managers and staff are to perform their respective roles and responsibilities has generally been slow. In our experience, weak process management controls typically result in programs falling short of expectations. 
Since September 2003, we have made numerous recommendations aimed at enabling the program to strengthen its process controls in such areas as acquisition management, test management, risk management, configuration management, capacity management, security, privacy, and independent verification and validation (IV&V). DHS has not yet completed the implementation of any of our recommendations in these areas, with one exception. It has ensured that the program office's IV&V contractor was independent of the products and processes that it was verifying and validating, as we recommended. In July 2005, the program office issued a new contract for IV&V services after following steps to ensure the contractor's independence (for example, IV&V contract bidders were to be independent of the development and integration contractors and are prohibited from soliciting, proposing, or being awarded work for the program other than IV&V services). If effectively implemented, these steps should adequately ensure that verification and validation activities are performed in an objective manner, and thus should provide valuable assistance to program managers and decision makers.

In the other management areas, DHS has partially completed or has only begun to address our recommendations, and more remains to be done. For example, DHS has not completed the development and implementation of key acquisition controls. We reported in September 2003 that the program office had not defined key acquisition management controls to support the acquisition of US-VISIT, increasing the risk that the program would not satisfy system requirements or meet benefit expectations on time and within budget. Accordingly, we recommended that DHS develop and implement a plan for satisfying key acquisition management controls in accordance with best practices.

The program office has recently taken steps to lay the foundation for establishing key acquisition management controls. For example, it has developed a process improvement plan to define and implement these controls that includes a governance structure for overseeing improvement activities. In addition, the program office has recently completed a self-assessment of its acquisition process maturity, and it plans to use the assessment results to establish a baseline of its acquisition process maturity as a benchmark for improvement. According to program officials, the assessment included key process areas that are generally consistent with the process areas cited in our recommendation. The program has ranked these process areas and plans to focus on those with highest priority. (Some of these high-priority process areas are also areas in which we have made recommendations, such as configuration management and risk management.) The improvement plan is currently being updated to reflect the results of the baseline assessment and to include a work breakdown structure, process prioritization, and resource estimates. According to a program official, the goal is to conduct a formal appraisal to assess the capability level of some or all of the high-priority process areas by October 2006. These recent steps provide a foundation for progress, but fully and effectively implementing key acquisition management controls takes considerable time, and DHS is still in the early stages of the process. Therefore, it is important that these improvement efforts stay on track.
Until these controls are effectively implemented, US-VISIT will be at risk of not delivering promised capabilities on time and within budget.

Another management area of high importance to a complex program like US-VISIT is test management. The purpose of system testing is to identify and correct system defects before the system is deployed. To be effective, testing activities should be planned and implemented in a structured and disciplined fashion. Among other things, this includes developing effective test plans to guide the testing activities and ensuring that test plans are developed and approved before test execution. In this area also, DHS's progress in responding to our recommendation has been limited.

We reported in May 2004, and again in February 2005, that system testing was not based on well-defined test plans, and thus the quality of testing being performed was at risk. Because DHS test plans were not sufficiently well-defined to be effective, we recommended that before testing begins, DHS develop and approve test plans that meet the criteria that relevant systems development guidance prescribes for effective test plans: namely, that they (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between the test cases and the requirements to be verified by the testing.

About 20 months later, the quality of the system test plans, and thus system testing, is still a challenge. To the program's credit, the test plan for the Proof of Concept for Increment 2C, dated June 28, 2005 (which introduces RF technology to automatically record the entry and exit of covered individuals), satisfied part of our recommendation. Specifically, the test plan for this increment was approved on June 30, 2005, before testing began (according to program officials, it began on July 5, 2005). Further, the test plan described, for example, the scope, complexity, and completeness of the test environment; it described the tests to be performed, including a high-level description of controls, inputs, and outputs; and it identified the test procedures to be performed. However, the test plan did not adequately trace between test cases and the requirements to be verified by testing. For example, about 70 percent of the requirements that we analyzed did not have specific references to test cases. Further, we identified traceability inconsistencies, such as one requirement that was mapped to over 50 test cases, even though none of the 50 cases referenced the requirement.

Time and resource constraints were identified as the reasons that test plans were incomplete. Specifically, program officials stated that milestones do not permit existing testing/quality personnel the time required to adequately review testing documents. According to these officials, even when the start of testing activities is delayed because, for example, requirements definition or product development takes longer than anticipated, testing milestones are not extended. Without complete test plans, the program does not have adequate assurance that the system is being fully tested, and thus unnecessarily assumes the risk of system defects not being detected and addressed before the system is deployed.
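The following minimal sketch illustrates the kind of bidirectional requirement-to-test-case traceability check involved; the identifiers and data structures are hypothetical and are not drawn from US-VISIT test documentation.

```python
# Minimal sketch of a bidirectional requirements-to-test-case traceability
# check. Identifiers are hypothetical, not taken from US-VISIT artifacts.

# Forward trace: each requirement lists the test cases said to verify it.
requirements = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-012"],
    "REQ-003": [],  # untraced: no test case is mapped to it
}

# Backward trace: each test case lists the requirements it references.
test_cases = {
    "TC-010": ["REQ-001"],
    "TC-011": [],  # references no requirement
    "TC-012": ["REQ-002"],
}

# A requirement with no mapped test cases is untraced.
untraced = [req for req, tcs in requirements.items() if not tcs]

# A mapping is inconsistent when a requirement points to a test case
# that does not reference the requirement back (the kind of mismatch
# described above).
inconsistent = [
    (req, tc)
    for req, tcs in requirements.items()
    for tc in tcs
    if req not in test_cases.get(tc, [])
]

print("Untraced requirements:", untraced)      # ['REQ-003']
print("Inconsistent mappings:", inconsistent)  # [('REQ-001', 'TC-011')]
```

A plan that establishes this property lets reviewers confirm that every requirement is exercised by at least one test case and that every mapping is corroborated from both directions.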
Incomplete test plans mean that the system may not perform as intended when deployed and that defects will not be addressed until late in the systems development cycle, when they are more difficult and time-consuming to fix. This has in fact happened already: postdeployment system interface problems surfaced for Increment 1, and manual work-arounds had to be implemented after the system was deployed. Until process management weaknesses such as these are addressed, the program will continue to be overly dependent on the exceptional performance of individuals to produce results. Such dependence increases the risk of the US-VISIT program falling short of expectations.

To better ensure that US-VISIT and DHS meet expectations, we made recommendations related to measuring and disclosing progress against program commitments. Thus far, such performance and accountability mechanisms have yet to be fully established. Measurements of the operational performance of the system are necessary to ensure that the system adequately supports mission operations, and measurements of program progress and outcomes are important for demonstrating that the program is on track and is producing results. Without such measurements, program performance and accountability can suffer.

As we reported in September 2003, the operational performance of initial system increments was largely dependent on the performance of existing systems that were to be interfaced to create these increments. For example, we said that the performance of an increment would be constrained by the availability and downtime of the existing systems, some of which had known problems in these areas. Accordingly, we recommended that DHS define performance standards for each increment that are measurable and that reflect the limitations imposed by this reliance on existing systems. In February 2005, we reported that several technical performance standards for increments 1 and 2B had been defined, but that it was not clear that these standards reflected the limitations imposed by the reliance on existing systems. Since then, the program office has defined certain other technical performance standards for the next increment (Increment 2C, Phase 1), including standards for availability. Consistent with what we reported, the functional requirements document states that these performance standards are largely dependent upon those of the current systems, and for system availability, it sets an aggregated availability standard for Increment 2C components. (Such a standard is bounded by the systems it depends on: if, for illustration, an increment relies on two existing systems that are each available 99 percent of the time, its end-to-end availability can be no better than about 98 percent, the product of the two figures.) However, the document does not contain sufficient information for a determination of whether these performance standards actually reflect the limitations imposed by reliance on existing systems. Unless the program defines performance standards that do this, it will be unable to identify and effectively address performance shortfalls.

Similarly, as we observed in June 2003, to permit meaningful program oversight, it is important that expenditure plans describe how well DHS is progressing against the commitments made in prior expenditure plans. The expenditure plan for fiscal year 2005 (the fourth US-VISIT expenditure plan) does not describe progress against commitments made in the previous plans. For example, according to the fiscal year 2004 plan, US-VISIT was to analyze, field test, and begin deploying alternative approaches for capturing biometrics during the exit process.
However, according to the fiscal year 2005 plan, US-VISIT was to expand its exit pilot sites during the summer and fall of 2004, and it would not deploy the exit solution until fiscal year 2005. The plan does not explain the reason for this change from its previous commitment nor its potential impact. Nor does it describe the status of the exit pilot testing or deployment, such as whether the program has met its target schedule or whether the schedule has slipped. Additionally, the fiscal year 2004 plan stated that $45 million in fiscal year 2004 was to be used for exit activities. However, in the fiscal year 2005 plan, the figure for exit activities was $73 million in fiscal year 2004 funds. The plan does not highlight this difference or address the reason for the change in amounts. Also, although the fiscal year 2005 expenditure plan includes benefits stated in the fiscal year 2004 plan, it does not describe progress in addressing those benefits, even though in the earlier plan, US-VISIT stated that it was developing metrics for measuring the projected benefits, including baselines by which progress could be assessed. The fiscal year 2005 plan again states that performance measures are under development. Figure 1 provides our analysis of the commitments made in the fiscal year 2003 and 2004 plans, compared with progress reported and planned in February 2005. The deployment of an exit capability, an important aspect of the program that was to result from the exit pilots shown in the figure, further illustrates missed commitments that need to be reflected in the next expenditure plan. In the fiscal year 2005 expenditure plan, the program committed to deploying an exit capability to air and sea ports of entry by September 30, 2005. Although US-VISIT has completed its evaluation of exit solutions at 11 pilot sites (9 airports and 2 seaports), no decision has yet been made on when an exit capability will be deployed. According to program officials, deployment to further sites would take at least 6 months from the time of the decision. This means that the program office will not meet its commitment. Another accountability mechanism that we recommended in May 2004 is for the program to develop a plan, including explicit tasks and milestones, for implementing all our open recommendations, and report on progress, including reasons for delays, both to department leadership (the DHS Secretary and Under Secretary) in periodic reports and to the Congress in all future expenditure plans. The department has taken action to address this recommendation, but the initial report does not disclose enough information for a complete assessment of progress. The program office did assign responsibility to specific individuals for preparing the implementation plan, and it developed a report identifying the person responsible for each recommendation and summarizing progress. This report was provided for the first time to the DHS Deputy Secretary on October 3, 2005, and the program office plans to forward subsequent reports every 6 months. However, some of the report’s progress descriptions are inconsistent with our assessment. For example, the report states that the impact of Increment 2B on workforce levels and facilities at land ports of entry has been fully assessed. 
However, as mentioned earlier, evaluation conditions were not always held constant—that is, fewer workstations were used to process travelers in establishing the baseline processing times at two of the ports of entry than were used during the pilot evaluations. In addition, the report does not specifically describe progress against most of our recommendations. For example, we recommended that the program reassess plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions. With regard to the exit evaluation, the report states that the program office has completed exit testing and has forwarded the exit evaluation report to the Deputy Secretary for a decision. However, it does not state whether the program office had expanded the scope or time frames of the pilot.

In closing, I would emphasize that the program has met many of the demanding requirements in law for deployment of an entry-exit system, owing, in large part, to the hard work and dedication of the program office and its contractors, as well as the close oversight and direction of the House and Senate Appropriations Committees. Nevertheless, core capabilities, such as exit, have yet to be established and implemented, and fundamental questions about the program's fit within the larger homeland security context and its return on investment remain unanswered. Moreover, the program is overdue in establishing the means to effectively manage the delivery of future capabilities. The longer the program proceeds without these, the greater the risk that the program will not meet its commitments. Measuring and disclosing the extent to which these commitments are being met are also essential to holding the department accountable, and thus are an integral aspect of effective program management. Our recommendations provide a comprehensive framework for addressing each of these important areas and thus ensuring that the program as defined is the right solution, that delivery of this solution is being managed in the right way, and that accountability for both is in place. We look forward to continuing to work constructively with the program to better ensure the program's success.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the committee may have at this time.

If you should have any questions about this testimony, please contact Randolph C. Hite at (202) 512-3439 or [email protected]. Other major contributors to this testimony included Tonia Brown, Barbara Collier, Deborah Davis, James Houtz, Scott Pettis, and Dan Wexler.

The Department of Homeland Security (DHS) has established a program—the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT)—to collect, maintain, and share information, including biometric identifiers, on selected foreign nationals who enter and exit the United States. US-VISIT uses these biometric identifiers (digital fingerscans and photographs) to screen persons against watch lists and to verify that a visitor is the person who was issued a visa or other travel document.
Visitors are also to confirm their departure by having their visas or passports scanned and undergoing fingerscanning at selected air and sea ports of entry. GAO was asked to testify on (1) the status of US-VISIT and (2) DHS's progress in implementing recommendations that GAO made as part of its prior reviews of US-VISIT annual expenditure plans. The testimony is based on GAO's prior reports as well as ongoing work for the House Committee on Homeland Security. GAO's recommendations are directed at helping the department deliver promised US-VISIT capabilities and benefits on time and within budget. According to DHS, the recommendations have made US-VISIT a stronger program.

The US-VISIT program has met a number of demanding requirements that were mandated in legislation. A pre-entry screening capability is in place in overseas visa issuance offices, and an entry identification capability is operating at 115 airports, 14 seaports, and 154 land ports of entry. This has been accomplished during a period of DHS-wide change, and has resulted in preventing criminal aliens from entering the country and potentially deterring others from even attempting to do so. Nevertheless, DHS has more to do to implement GAO recommendations aimed at better ensuring that US-VISIT is maximizing its potential for success and holding itself accountable for results.

DHS has taken steps to address those GAO recommendations intended to ensure that US-VISIT as defined is the "right thing." For example, it is clarifying the strategic context within which US-VISIT is to operate, having drafted a strategic plan to show how US-VISIT is aligned with DHS's mission goals and operations and to provide an overall vision for immigration and border management. However, the plan has yet to be approved, and US-VISIT's integration with other departmentwide border security initiatives remains unclear. In addition, the department has analyzed the program's costs, benefits, and risks, but its analyses do not yet demonstrate that the program is producing or will produce mission value commensurate with expected costs and risks. In particular, the department's return-on-investment analyses for exit options do not demonstrate that these solutions will be cost-effective.

DHS has also taken steps to address those GAO recommendations aimed at ensuring that the program is executed in the "right way." The department has made good progress in establishing the program's human capital capabilities, which should help ensure that it has sufficient staff with the necessary skills and abilities. This is particularly important in light of the program's more limited progress in establishing capabilities in certain program management process areas, such as test management. For example, a test plan used in a recent system acceptance test did not adequately trace between test cases and the requirements to be verified by testing. Incomplete test plans reduce assurance that systems will perform as intended once they are deployed.

DHS also has begun addressing GAO's recommendations to establish accountability for program performance and results, but more needs to be done. For example, DHS's expenditure plans have not described progress against commitments made in previous plans. Unless performance against commitments is measured and disclosed, the ability to manage and oversee the program will suffer.
The longer the program proceeds without fully addressing GAO's recommendations, the greater the risk that it will not deliver promised capabilities and benefits on time and within budget.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. Federal agencies have provided new and valuable information on their plans, goals, and strategies since they began implementing GPRA. The fiscal year 2002 performance plan is the fourth of these annual plans under GPRA. The fiscal year 2000 performance report is the second of these annual reports under GPRA. The issuance of the agencies' performance reports, due by March 31, 2001, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies' actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future.

USDA is one of the nation's largest federal agencies, employing over 110,000 people and managing a budget of over $78 billion. Its agencies and offices are responsible for operating more than 200 programs. These programs support the profitability of farming, promote domestic agricultural markets and the export of food and farm products, provide food assistance for the needy, ensure the safety of the nation's food supply, manage the national forests, protect the environment, conduct biotechnological and other agricultural research, and improve the well-being of rural America.

This section discusses our analysis of USDA's performance in achieving the selected key outcomes and the strategies the agency has in place to achieve these outcomes, particularly for strategic human capital management and information technology. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which the department provided assurance that its reported performance information is credible.

USDA's fiscal year 2000 performance report, which was issued in March 2001, indicated that the department continued to make some progress toward achieving this outcome. For example, USDA reported that it met its goals for stabilizing peanut and tobacco prices and maintaining the economic viability of peanut and tobacco producers. However, it is difficult to assess USDA's progress because the department did not provide an overall evaluation of this outcome in its report. According to the performance report, USDA met about 72 percent of the performance goals related to this outcome, less than last year, when USDA reported it met over 80 percent of its goals.

USDA did not select the outcome of providing an adequate and reasonably priced food supply as a key departmental strategic goal in its fiscal year 2002 performance plan, which was issued in June 2001. Some of USDA's efforts to achieve this outcome are discussed under the top-ranked departmental strategic goal of expanding economic and trade opportunities for U.S. agricultural producers, and a USDA official stated that this outcome continues to be important for the department. USDA's discussion of this strategic goal stated that farming and ranching are being transformed by changes in biological and information technology, environmental and conservation concerns, greater threats from pests and diseases spreading across continents, natural disasters and the industrialization of agriculture, and globalization of markets. Under this goal, USDA chose as its first objective to provide an effective safety net and to promote a strong, sustainable U.S. farm economy.
USDA explained that if it is to achieve its goal of promoting a strong farm economy that is less dependent on government support, then it must also place a heavy emphasis on helping farmers proactively manage the risks inherent in agriculture and on improving farmers' incomes. USDA's second objective under this strategic goal is to expand export markets—USDA illustrated the opportunity for exports by estimating that 96 percent of American agriculture's potential customers reside outside the United States. Some of the performance goals presented for this strategic goal are to improve farmers' incomes, reduce pest and disease outbreaks, and expand international sales opportunities.

USDA reported progress in fiscal year 2000 that was similar to its performance last year in that it met some of its goals and indicators for this outcome. USDA stated that it exceeded its targets for two key goals. The gross trade value of markets created, expanded, or retained annually due to market access activities reached $4.35 billion, significantly higher than its $2 billion target. USDA attributed $2 billion of this gain to negotiations on China's accession to the World Trade Organization in fiscal year 2000. Similarly, annual sales, which were reported by U.S. exporters from on-site sales at international trade shows, reached $367 million in fiscal year 2000, compared to USDA's target of $250 million. Despite these successes, USDA fell short in meeting other goals. The department reported that $837 million in U.S. agricultural exports resulted from the implementation of trade agreements under the World Trade Organization, below its target of $2 billion. It also reported that the total value of U.S. agricultural exports supported by its export credit guarantee programs reached $3.1 billion, falling short of its $3.8 billion target.

USDA uses a questionable methodology for measuring the success of its efforts to expand and maintain global markets for U.S. agricultural products. USDA's goals and indicators emphasize growth in the U.S. share of the global agricultural market—measured by changes in the dollar value of exports resulting from the implementation of trade agreements, market access enhancements, sales from annual trade shows, and agricultural exports. Yet, the dollar value of exports is subject to powerful external variables that transcend USDA's authority and ability to effect change in international trade. These variables include exchange rates, government policies, global and national economic conditions, climatic changes, and numerous other factors over which USDA has no control or strategies to address. For example, the decrease in the value and volume of U.S. agricultural exports over the last several years is generally recognized by economists, government officials, and private sector representatives to be the result of deteriorating economic conditions, particularly in the Asian market, over which USDA has no control. USDA's Economic Research Service has consistently held that U.S. agricultural export performance results more from market forces, which include multiple variables beyond the control of USDA, than from the actions of the U.S. government to expand international market opportunities. Along with other research institutions, it has confirmed that the decline in the value of U.S. agricultural exports from $60 billion in fiscal year 1996 to $50.9 billion in fiscal year 2000 was not attributable to U.S. government trade policies, programs, and activities.
It further observed that USDA programs typically have a limited effect on the dollar value of U.S. exports and market share. We have previously raised questions about the extent of the relationship between USDA's export policies and programs and increased exports.

USDA's fiscal year 2002 plan is based on the assumption that government policies, programs, and activities have a significant influence on the U.S. share of the global agricultural market. USDA has set a goal to increase exports by $14 billion by fiscal year 2010, or about 22 percent of the global market. This level would return the United States to the same global market share it held in the early 1990s. USDA's plan is consistent with the assumption that the government's impact is enhanced when the government works with the private sector to create a facilitative environment to expand sales of agricultural products abroad. USDA's strategies are to include a long-range integrated marketing plan, which would provide a generalized framework that goes beyond the traditional narrow and short-term programmatic and reactive export-oriented approaches. Among its goals are those for (1) developing a long-range marketing plan that enlists USDA's network of domestic and foreign field offices in an effort to assist U.S. producers in capturing new market opportunities, (2) partnering with private U.S. market development groups to leverage resources aimed at expanding market opportunities abroad for U.S. food and agricultural products, (3) expanding U.S. access to foreign markets through active participation in the World Trade Organization and international trade forums, and (4) continuing to monitor international trade agreements and negotiating new agreements to open overseas markets to U.S. food and agricultural products.

However, what is not yet spelled out are the key elements of the integrated marketing plan that will move beyond a generalized concept to the reality of specific actions that will lead to success. Among the elements that could be further addressed would be the organizational structure, the human capital and technological resources, and the operational concepts and methods that will actually enable USDA to meet its global marketing objectives. USDA's Foreign Agricultural Service said that its plans are necessarily generalized at this point in time and should be considered its first steps in developing an integrated marketing plan. The Service also said that it would be instituting quarterly reporting to track progress. In addition, the Service disagreed with our views about its departmental-level strategic performance goal to affect U.S. market share, and said that it believed it had selected the ultimate measure of change for international agricultural markets. However, as previously discussed, we disagree with the selection of this goal because USDA's activities have little influence on the overall level of international market shares. Since GPRA was designed to lead to better insights into the performance of government, USDA will need to adopt a realistic departmental performance goal to meet this purpose.

According to its performance report, USDA reported continued progress toward this outcome and met about 80 percent of its goals. USDA's performance exceeded that of fiscal year 1999.
For example, the department reported meeting its goals for distributing food nutrition education information to low-income Americans, for increasing the number of schools that meet USDA's dietary guidelines, and for improving the effectiveness and efficiency of commodity acquisition and distribution to support domestic and international food assistance programs. Some of the goals do not have specific performance targets, so it is not always clear what USDA is actually accomplishing. For example, USDA determined that it is meeting its goal of improving the nutritional status of Americans by such actions as distributing revised dietary guidelines, promoting media coverage, and observing seminar attendance and web-page usage related to improved nutrition and diet. These measures of performance do not tell us whether USDA's actions are improving Americans' nutritional status.

USDA's fiscal year 2002 departmental performance plan contains many general strategies for achieving its goals and measures. For example, one general strategy called for reallocating funds from areas with excess funds to areas with high demand for the Special Supplemental Nutrition Program for Women, Infants, and Children. However, some of the general strategies make it difficult to assess USDA's progress. For example, USDA's goal to improve food security for children and low-income individuals calls for expanding program access to the needy—and the plan's strategies for doing this involve "effectively delivering assistance" and "continuing efforts" to ensure that the Food Stamp Program is accessible. Such strategies provide little insight into the specific actions USDA intends to take to achieve its goals. In addition, at the time of our review, USDA's Food and Nutrition Service, the agency primarily responsible for this outcome, had yet to draft a performance plan for fiscal year 2002. The detailed goals and strategies that the agency-level plan would contain are needed to support USDA's departmental plan. The Acting Administrator of the Food and Nutrition Service reported that the agency is assembling a policy team and will issue a draft performance plan after the team is selected.

According to its performance report, USDA met or exceeded nearly all of its fiscal year 2000 performance goals for ensuring a safe and wholesome food supply. USDA stated that it met its goals for key areas, such as the percentage of federally inspected meat and poultry slaughter and/or processing plants that had implemented the basic hazard analysis and critical control points (HACCP) requirements. GAO issued a report on this subject in 1997. USDA also reported that it exceeded its goal for the number of reviews it conducted of foreign meat and poultry food safety programs to ensure their compliance with U.S. safety standards. GAO also issued a report on this subject in 1998. When performance goals were not met, USDA generally provided specific explanations, including describing external factors when applicable, for not achieving the performance goals. For example, USDA reported that it fell short of meeting its goal for deploying 607 computers to state inspection programs because 4 states did not have the funding available to meet their 50-percent share of the computers' costs. In another example, USDA did not meet its goal to perform 68,000 laboratory tests, falling short of its target by 8,000 tests.
USDA did not provide any additional strategies for achieving this goal in the following fiscal year, but it stated that it believed many of the difficulties in meeting the goal have been alleviated by the implementation of the new HACCP system.

USDA's fiscal year 2002 performance plan describes several strategies to ensure a safe and wholesome food supply. Such strategies include (1) strengthening laboratory and risk assessment capabilities, (2) implementing a HACCP system for eggs, and (3) strengthening its foreign food safety program efforts. These strategies generally provided a clear description of USDA's approach for reaching its performance goals. For example, USDA described a strategy that seeks to improve its foreign food safety program review efforts by intensifying reviews of animal feeds, animal identification, and process control systems in countries exporting meat and poultry products to the United States. However, the strategies did not show how USDA plans to address and overcome the fundamental problem it faces in this area—the current food safety system is fragmented, with as many as 12 different federal agencies administering over 35 laws regarding food safety. USDA's plan states that the creation of a single federal food safety agency, as previously recommended by us, extends beyond the legal scope of any one federal agency. We have maintained that until this fragmented system is replaced with a risk-based single food agency, the U.S. food safety system will continue to underperform. USDA pointed out that it does not have the authority to merge with other federal agencies and form a single food safety agency. (See app. I.)

According to its performance report, USDA met or exceeded many of its fiscal year 2000 goals and made progress toward reducing food stamp fraud and error. The department, for example, reported exceeding its goal for the payment accuracy rate in the delivery of Food Stamp Program benefits and stated that it would support continued improvements by seeking opportunities to simplify program rules—a recommendation made by us in a recent report on reducing payment errors. It also reported collecting about $219 million in overpayments to recipients in fiscal year 2000, which exceeded its original target of collecting about $194 million.

In some instances, USDA fell short of meeting its goals for this outcome. For example, USDA did not meet its goal for increasing the percentage of delinquent food stamp retailer debt referred to Treasury, and it narrowly missed its goal for the number of retailers sanctioned for not meeting regulatory requirements. In those instances when goals were not met, USDA generally provided specific explanations for not achieving them. For example, the department reported that it did not meet its goal for referring delinquent food stamp retailer debts to the Treasury Department for collection because it did not submit cases in a timely manner and because of shortcomings in the processing of such referrals. USDA did not base its fiscal year 2000 performance report assessments on actual performance data in some cases.
For example, for two performance goals—maintaining payment accuracy in the delivery of Food Stamp Program benefits and increasing the number of states qualifying for enhanced funding based on high payment accuracy—the department reported progress from fiscal year 1999, and it stated that it would meet its fiscal year 2000 performance goals based on "early indications" and planned activities. USDA also recognized that actual data would be available 3 months after the performance report was issued, which represents an improvement in data reporting. Nevertheless, the absence of timely performance data makes it difficult for USDA and others to annually assess performance and determine if changes in strategies are needed.

USDA's fiscal year 2002 departmental performance plan contained several strategies for reducing food stamp fraud and error. USDA stated that it intended to continue to improve the accuracy and consistency of its quality control system and support state efforts to improve food stamp benefit accuracy through technical assistance and by using the best practices for information-sharing. However, the departmental plan did not have specific strategies to demonstrate how USDA would achieve its strategic goals and objectives. In some instances, a discussion of goals, objectives, and strategies directly related to this key outcome was not included. For example, the plan did not include a discussion of how it would deal with retail stores that violate program requirements. A recent Food and Nutrition Service study estimated that stores each year illegally provided cash for benefits (trafficking of benefits) totaling about $660 million. USDA's departmental plan also did not specifically discuss the Food and Nutrition Service's targets or measures for reducing trafficking in food stamps, and it did not contain details on the strategies to be used to reduce fraud and error in the Food Stamp Program. The details of these strategies may be included in the Food and Nutrition Service's agency-level performance plan for fiscal year 2002, which has not yet been prepared. Additionally, we have identified efforts to reduce fraud and error in the Food Stamp Program as a major management challenge. (See app. I.)

For the selected key outcomes, this section describes major improvements or remaining weaknesses in USDA's (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report, and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. It also discusses the degree to which the agency's fiscal year 2000 report and fiscal year 2002 plan address concerns and recommendations by the Congress, GAO, USDA's OIG, and others.

USDA's fiscal year 2000 performance report presentation has remained largely unchanged compared with the prior year's report. Specifically, the report continued to be an agency-by-agency discussion of its progress without an overview presenting a picture of the department's overall performance. As discussed previously, the fiscal year 2000 performance report has limitations, such as its reliance on narrative measures that track agency actions but that do not provide information about the impacts of the agency's performance. There are also areas where the data is limited and of questionable reliability—USDA has reported that the vast scale and complexity of its programs present major management challenges in terms of the availability of accurate, credible, and timely performance data.
For example: (1) the Foreign Agricultural Service reported that it has limited resources for tracking issues related to the World Trade Organization and barriers in foreign markets, leading to errors and limitations in data verification; (2) USDA's estimates of the populations that are participating in food stamp and other nutrition assistance programs are generally not available in time for preparing its annual performance reports; (3) USDA has relied on data about school food services that is collected informally and without standardized procedures because of opposition to the collection of this data; and (4) USDA reported that its data on agricultural producers' awareness of risk management alternatives had not been collected consistently from state to state.

In addition, the fiscal year 2000 performance report varied from providing a detailed discussion of USDA's data verification and validation efforts to providing little or no information about its data accuracy. In many cases, USDA did not provide information on the steps that were taken to verify and validate the data. For example, concerning the performance goal to eradicate a common animal disease, the report simply stated that staff members are responsible for ensuring the reliability and accuracy of the data. Also, USDA did not report on the reliability of the information reported by the Cooperative State Research, Education, and Extension Service, which relies on the accomplishments and results reported by the universities receiving its research funds.

USDA developed a new departmental plan for fiscal year 2002 that is significantly different from its 2001 plan. The fiscal year 2002 plan provided, for the first time, a departmentwide approach to performance management. This streamlined presentation consolidated the more than 1,700 agency-specific performance goals and measures it presented in 2001 into 5 departmental strategic goals, 56 annual performance goals, and 79 measures for fiscal year 2002. The departmental strategic goals USDA selected were as follows: (1) expand economic and trade opportunities for U.S. agricultural producers; (2) promote health by providing access to safe, affordable, and nutritious food; (3) maintain and enhance the nation's natural resources and environment; (4) enhance the capacity of all rural residents, communities, and businesses to prosper; and (5) operate an efficient, effective, and discrimination-free organization. The new departmental plan is supported by agency-level annual performance plans that offer more detailed information on evolving strategies, priorities, and resource needs.

We found USDA's new plan to be a work in progress, as discussed throughout this report. USDA did not consistently provide the detailed strategies that were needed for achieving its departmental goals. Of the 56 annual performance goals in the departmental plan, 33 goals do not contain overall performance targets against which to measure overall progress. For each of these 33 goals, USDA provided various performance indicators, some of which contain performance targets that are representative measures of progress. Also, there were goals that were substantially affected by external factors beyond the scope of USDA's activities. Examples include the goals to (1) grow the U.S. share of the global agricultural market, even though USDA's programs have a limited effect on the total dollar value of U.S. exports,
exports, and (2) enhance the capacity of all rural residents, communities, and businesses to prosper, when the scope of USDA’s rural assistance programs is not designed to provide for a comprehensive federal effort in this area. Moreover, in the Secretary’s message transmitting the fiscal year 2002 plan, the Secretary stated that she had not thoroughly reviewed the new strategic plan, did not have a full leadership team in place, and recognized that more needed to be done. The Secretary also stated that once USDA’s full leadership team is in place, it will be working to conduct a top-to-bottom review of the department’s programs and will develop new strategic and annual performance goals to carry out this administration’s priorities. Additionally, in response to our prior GPRA reviews, USDA included two new sections in its 2002 performance plan—one that discusses data verification and validation for each performance goal and one that recognizes major management challenges identified by GAO. The discussion of USDA’s data and its sources is a valuable addition to USDA’s plan because it provides a more consistent picture of the data USDA uses, the steps USDA takes to verify its data, and the limitations that need to be taken into account. GAO has identified two governmentwide high-risk management challenges: strategic human capital management and information security. Regarding human capital management, USDA’s plan contains a key outcome—to ensure USDA has a skilled, satisfied workforce and strong prospects for retention of its best employees. The plan recognized emerging skill gaps, high retirement eligibility rates, and the need for staff to shift to a greater use of technology as departmental strategic issues. However, USDA has identified only one human capital performance measure—an employee satisfaction survey—which would not measure the closing of skill gaps, the retention of critical employees, or changes related to the use of new technology. Furthermore, the extent of the discussion of human capital strategies in USDA’s individual agency plans varies. For example, the plans of the Farm Service Agency and the Food Safety and Inspection Service did not discuss human capital issues, and the Food and Nutrition Service had not completed a plan. With respect to information security, we found that the Chief Information Officer’s performance report did not explain its progress in implementing its August 1999 action plan for improving departmentwide information security or provide time frames and milestones for doing so. In addition, USDA’s performance plan did not have departmental goals and measures related to this important area. In commenting on a draft of this report, USDA officials stated that progress had been made in implementing their August 1999 action plan to strengthen information security and agreed that USDA’s annual performance plan could be improved by including information security performance goals and measures. GAO has also identified 10 major management challenges facing USDA. USDA’s performance report discussed the agency’s progress in resolving many of its challenges, and its performance plan had (1) goals and measures that were directly related to seven of the challenges, (2) goals and measures that were indirectly applicable to two of the challenges, and (3) no goals and measures related to one of the challenges.
Appendix I provides detailed information on how USDA addressed these challenges and high-risk areas as identified by both GAO and the agency’s Inspector General. However, USDA did not recognize or address some of the management challenges identified by its own Inspector General because, according to USDA officials, the Office of the Inspector General did not send a copy of its letter to the affected USDA agencies. USDA’s fiscal year 2000 performance report and fiscal year 2002 performance plan have the potential for focusing the department’s missions, but these efforts are compromised in a number of areas. USDA’s goals and measures are too general to give insight into the actual achievements that USDA is striving to make. In particular, it is difficult to assess USDA’s progress when it uses unrealistic goals to achieve strategic outcomes and when it uses untimely data that have not been consistently verified. In two particular areas—strategic human capital management and information security—the process of measuring USDA’s performance could be improved by including goals and measures in USDA’s annual performance plan. Finally, USDA missed the opportunity to develop strategies and plans to respond to the major management challenges identified by the OIG. To improve USDA’s performance reporting and planning, we recommend that the Secretary of Agriculture (1) set priorities for improving the timeliness of the data that USDA is using for measuring its performance; (2) improve USDA’s performance report by including more consistent discussions of data verification and validation; (3) better match the department’s goals and outcomes with its capabilities for expanding and maintaining global market opportunities; (4) include performance goals and measures for strategic human capital management issues and information security issues in the departmental performance plan; (5) make reducing food stamp trafficking an annual performance goal in USDA’s plan; and (6) address and include the Office of Inspector General’s major management challenges in future performance plans. To facilitate our last recommendation, we also recommend that the Inspector General work with the Chief Financial Officer and USDA agency officials in identifying and including major management challenges in USDA’s performance plans. We provided USDA with a draft of this report for its review and comment. USDA chose to provide oral comments, and we met with the Acting Chief Financial Officer and other officials from the department on August 13, 2001, to discuss them. The Acting Chief Financial Officer said that the department generally agreed with the information presented in the draft report. USDA officials also provided the following comments. Regarding major management challenges, USDA agency officials questioned whether there is a requirement for USDA to report on major management challenges as part of its performance plan and to include related performance goals. Our review, as requested, included an assessment of USDA’s progress in addressing its major management challenges. In addition, OMB Circular A-11 states that federal agencies should include a discussion of major management challenges in their annual performance plans and present performance goals for these challenges. USDA’s OIG disagreed with our recommendation calling for the OIG to distribute future OIG letters on major management challenges to affected USDA agencies.
The OIG commented that its audit reports already identify management challenges and that these are discussed with the affected agencies. The OIG also stated that its letter to congressional requesters identifying major management challenges was provided informally to the department and that the OIG is required by Public Law 106-531 to report on the most serious management challenges in USDA’s annual report to the president and the Congress. We are well aware that the OIG identifies management challenges in audit reports and reports separately on these challenges. Nevertheless, as stated in our draft report, our recommendation is directed at facilitating the inclusion and discussion of the OIG-identified major management challenges in USDA’s annual performance plan. The OIG’s December 2000 letter to congressional requesters on the management challenges appeared to us to be a document that could have served as a timely starting point for the major management challenge section of USDA’s departmental annual performance plan. We continue to believe that the OIG should play a role in facilitating the major management challenge section of the departmental performance plan, and we have modified our recommendation to directly call for the OIG to participate in the development of this section of USDA’s plan. The Foreign Agricultural Service disagreed with our recommendation to better match the department’s goals and outcomes with its capabilities for expanding global market opportunities. It stated that the measure it is using—global market share—is the ultimate performance measure for describing overall changes in international markets and that the Congress is interested in U.S. international market share. However, in discussing this concern, the Service itself acknowledged that market forces, rather than its activities, are the principal cause of changes in exports. Therefore, we continue to believe that it would be appropriate to use more realistic performance goals that are more closely related to the outcomes that USDA activities can achieve. The Service’s agency-level performance plan contains some performance indicators that are more limited and better reflect the government’s role in changing export values and market share. The Foreign Agricultural Service also expressed concern that if it were to make detailed information on its strategies available to the public, it could be used by foreign competitors to offset U.S. efforts. Because of the limited federal role in affecting international market share, we believe that more specific information on the U.S. role and activities would not compromise U.S. efforts. USDA officials stated that they had made progress in improving information security and strategic human capital management. Specifically, USDA officials said that progress had been made in implementing their August 1999 action plan to strengthen information security. However, USDA officials recognized that this information, along with information security goals and measures, was generally not included in the department’s performance plan or report and that the process of measuring USDA’s performance would be improved by including it. Also, concerning strategic human capital management, USDA’s performance report and plan did not summarize key actions that USDA officials said have been taken on workforce planning, recruitment, and the retention of employees. USDA will have the opportunity to summarize its progress in these areas in its future performance reports and plans.
Department officials also provided technical clarifications, which we made as appropriate. As agreed, our evaluation was generally based on a review of the fiscal year 2000 performance report and the fiscal year 2002 performance plan and the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from the Office of Management and Budget (OMB) for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of USDA’s operations and programs, GAO’s identification of best practices concerning performance planning and reporting, and our observations on USDA’s other GPRA-related efforts. We also discussed our review with agency officials in the Office of the Chief Financial Officer and with the USDA Office of Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Governmental Affairs Committee as important mission areas for the agency and generally reflect the outcomes for key USDA programs or activities. The major management challenges confronting USDA, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by us in our January 2001 performance and accountability series and high-risk update or were identified by USDA’s Office of Inspector General in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from our other work in assessing the validity, reliability, and timeliness of USDA’s performance data. We conducted our review from April 2001 through August 2001 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, the Secretary of Agriculture, and the Director of the Office of Management and Budget. Copies will also be made available to others on request. If you or your staff have any questions, please call me at (202) 512-9692. Key contributors to this report are listed in appendix II. The following table identifies the major management challenges confronting the U.S. Department of Agriculture (USDA), including the governmentwide high-risk areas of strategic human capital management and information security. USDA has one performance report and a departmentwide plan with supporting plans from the department’s individual agencies. The first column lists the challenges identified by our office and USDA’s Office of Inspector General. The second column discusses the progress USDA made in resolving its challenges, as discussed in its fiscal year 2000 performance report. The third column discusses the extent to which USDA’s fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and USDA’s OIG identified.
While USDA’s fiscal year 2000 performance report addressed progress in resolving some of the 17 management challenges, the department did not have goals for the following: strategic human capital management, information security, the Forest Service land exchange program, grant agreement administration, grant competitiveness, research funding accountability, and the Rural Business-Cooperative Service; it therefore did not discuss progress in resolving these challenges. USDA’s fiscal year 2002 performance plans provided some goals and measures or strategies for all but five of its management challenges. USDA did not have goals for the management challenges involving the Forest Service land exchange program, grant agreement administration, grant competitiveness, research funding accountability, and the Rural Business-Cooperative Service. For the remaining 12 major management challenges, its performance plan had (1) goals and measures that were directly related to 8 of the challenges, (2) goals and measures that were indirectly applicable to 3 of the challenges, or (3) no goals and measures related to 1 of the challenges, although the plan discussed strategies to address it. In commenting on a draft of this report, USDA stated that it had made additional progress in resolving its management challenges that had not been reflected in its fiscal year 2000 performance report and fiscal year 2002 performance plan. Erin Barlow, Andrea Brown, Jacqueline Cook, Thomas Cook, Charles Cotton, Angela Davis, Andrew Finkel, Judy Hoovler, Erin Lansburgh, Carla Lewis, Sue Naiberk, Stephen Schwartz, Richard Shargots, Mark Shaw, Ray Smith, Alana Stanfield, Phillip Thomas, and Ronnie Wood.
In July 2007, we reported on weaknesses in the Navy’s business case for the Ford-class aircraft carrier, focusing mainly on the lead ship, CVN 78. We noted that costs and labor hours were underestimated and critical technologies were immature. Today, all of this has come to pass in the form of cost growth, testing delays, and reduced capability—in other words, less for more. In August 2007, we also observed that, as a consequence of its optimistic business case, the Navy would likely face the choice of (1) keeping the ship’s construction schedule intact while deferring key knowledge-building events—such as land-based tests of technologies—until later, or (2) slipping the ship’s construction schedule to accommodate technology and other delays. Today, those choices have been made—the ship’s construction schedule has been delayed by a few months, while other events, like land-based tests for critical technologies, have slid by years. The result is a final acquisition phase in which construction and key test events are occurring concurrently, with no margin for error without giving something else up. In its simplest form, a business case requires a balance between the concept selected to satisfy warfighter needs and the resources—technologies, design knowledge, funding, and time—needed to transform the concept into a product, in this case a ship. In a number of reports and assessments since 2007, we have consistently reported on concerns related to technology development, ship cost, construction issues, and overall ship capabilities. Absent a strong business case, the CVN 78 program deviated from its initial promises of cost and capability, which we discuss below. In August 2007, before the Navy awarded a contract to construct the lead ship, we reported on key risks in the program that would impair the Navy’s ability to deliver CVN 78 at cost, on time, and with its planned capabilities (as seen in table 1 below). Specifically, we noted that the Navy’s cost estimate of $10.5 billion, which assumed 2 million fewer labor hours, made the unprecedented assumption that CVN 78 would take fewer labor hours than its more mature predecessor, CVN 77. The shipbuilder’s estimate—22 percent higher in cost—was more in line with actual historical experience. Moreover, key technologies, not part of the shipbuilder’s estimates because they would be furnished by the government, were already behind and had absorbed much of their schedule margin. Congress expressed similar concerns about Ford-class carrier costs. The John Warner National Defense Authorization Act for Fiscal Year 2007 included a provision that established (1) a procurement cost cap for CVN 78 of $10.5 billion, plus adjustments for inflation and other factors, and (2) a procurement cost cap for subsequent Ford-class carriers of $8.1 billion each, plus adjustments for inflation and other factors. The legislation in effect required the Navy to seek statutory authority from Congress in the event it determined that adjustments to the cost cap were necessary and the reason for the adjustments was not one of six factors permitted in the law. The risks we assessed in 2007 have been realized, compounded by additional construction and technical challenges. Several critical technologies, in particular EMALS, AAG, and DBR, encountered problems in development, which resulted in delays to land-based testing.
It was important for these technologies to be thoroughly tested on land so that problems could be discovered and fixes made before installing production systems on the ship. In an effort to meet required installation dates aboard CVN 78, the Navy elected to largely preserve the construction schedule and produce some of these systems prior to demonstrating their maturity in land-based testing. This strategy resulted in significant concurrency between developmental testing and construction, as shown in figure 1 below. The burden of completing technology development now falls during the most expensive phase of ship construction. I view this situation as latent concurrency in that the overlap between technology development, testing, and construction was not planned for or debated when the program was started. Rather, it emerged as a consequence of optimistic planning. Concurrency has been made more acute as the Navy has begun testing the key technologies that are already installed on the ship, even as land-based testing continues. Moreover, the time frames for post-delivery testing, that is, the period when the ship would demonstrate many of its capabilities, are being compressed by ongoing system delays. This tight test schedule could result in deploying without fully tested systems if the Navy maintains the ship’s ready-to-deploy date in 2020. The issues described above, along with material shortfalls, engineering challenges, and delays developing and installing critical systems, drove inefficient out-of-sequence work, which resulted in significant cost increases. This, in turn, required the Navy to seek approval from Congress to raise the legislative cost cap, an increase it attributed to construction cost overruns and economic inflation (as shown in figure 2 below). Along with costs, the Navy’s estimates of the number of labor hours required to construct the ship have also increased (see table 2). Recall that in 2007 the Navy’s estimate was 2 million hours lower than the shipbuilder’s; the current estimate thus represents a substantial increase. On the other hand, it is more in line with a first-in-class ship like CVN 78; that is to say, it was predictable. To manage remaining program risks, the Navy deferred some construction work and installation of mission-related systems until after ship delivery. Although this strategy may provide a funding reserve in the near term, it still may not be sufficient to cover all potential cost risks. In particular, as we reported in November 2014, the schedule for completing testing of the equipment and systems aboard the ship had become increasingly compressed and continues to lag behind expectations. This is a particularly risky period for CVN 78, as the Navy will need to resolve technical deficiencies discovered through testing—for critical technologies or the ship—concurrent with latter-stage ship construction activities, which are generally more complex than much of the work occurring in the earlier stages of construction. Risks to the ship’s capability that we identified in our August 2007 report have also been realized. We subsequently found in September 2013 and November 2014 that challenges with technology development are now affecting planned operational capability beyond the ship’s delivery (as shown in table 3). Specifically, CVN 78 will not demonstrate its increased sortie generation rate before it is ready to deploy to the fleet because of the low reliability of key aircraft launch and recovery systems.
Further, required reductions in personnel remain at risk, as immature systems may require more manpower to operate and maintain than expected. Ultimately, these limitations signal a significant compromise to the initially promised capability. The Navy believes that, despite these pressures, it will still be able to achieve the current $12.9 billion congressional cost cap. While this remains to be seen, the Navy’s approach nevertheless results in a more expensive, yet less complete and capable, ship at delivery than initially planned. Even if the cost cap is met, it will not alter the ultimate cost of the ship. Additional costs will be borne later—outside of CVN 78’s acquisition costs—to address, for example, reliability shortfalls of key systems. In such cases, the Navy will need to take costly actions to maintain operational performance by adding maintenance personnel and spare parts. Reliability shortfalls, in turn, will drive ship life-cycle cost increases related to manning, repairs, and spare parts. Deferred systems and equipment will at some point be retrofitted back onto the ship. Although increases have already been made to CVN 79’s cost cap and tradeoffs made to the ship’s scope, it still has an unrealistic business case. In 2013, the Navy requested congressional approval to increase CVN 79’s cost cap from $8.1 billion to $11.5 billion, citing inflation as well as cost increases based on CVN 78’s performance. Since the Ford-class program’s formal system development start in 2004, CVN 79’s planned delivery has been delayed by 4 years, and the ship will be ready for deployment 15 months later than expected in 2013. The Navy recently awarded a construction contract for CVN 79, which it believes will allow the program to achieve the current $11.5 billion legislative cost cap. Similar to the lead ship, the business case for CVN 79 is not commensurate with the costs needed to produce an operational ship. By any measure, CVN 79 should cost less than CVN 78, as it will incorporate important lessons learned on construction sequencing and other efficiencies. While it may cost less than its predecessor, CVN 79 is likely to cost more than estimated. As we reported in November 2014, the Navy’s strategy to achieve the cost cap (1) relies on optimistic assumptions of construction efficiencies and cost savings; (2) shifts work—including installation of mission systems—needed to make the ship fully operational until after ship delivery; and (3) delivers the ship with the same baseline capability as CVN 78, with the costs of a number of planned mission system upgrades and modernizations postponed until future maintenance periods. Even with ambitious assumptions and planned improvements, the Navy’s current estimate for CVN 79 stands at $11.5 billion—already at the cost cap. For perspective, the Director of the Department of Defense’s (DOD) Cost Assessment and Program Evaluation office projects that the Navy will exceed the congressional cost cap by about $235 million. The Congressional Budget Office’s estimate for CVN 79 is even higher—a total cost of over $12.5 billion, which, if realized, would be over $1 billion above the current congressional cost cap. Similar to CVN 78, the Navy is assuming the shipbuilder will achieve efficiency gains that are unprecedented in aircraft carrier construction.
While the shipbuilder has initiated significant revisions in its processes for building the ship that are expected to reduce labor hours, the Navy’s cost estimate for CVN 79 is predicated on a reduction of over 9 million labor hours compared to CVN 78. For perspective, this estimate is not only lower than the 42.7 million hours originally estimated for CVN 78, but also 10 percent lower than what was achieved on CVN 77, the last Nimitz-class carrier. Previous aircraft carrier construction efforts have reduced labor hours by at most 3.2 million hours. Further, the Navy estimates that it will save over $180 million by replacing the dual band radar with an alternative radar system, which it expects will provide a better technological solution at a lower cost. Cost savings are assumed, in part, because the Navy expects the radar to work within the current design parameters of the ship’s island. However, the Navy has not yet awarded a contract to develop the new radar solution. If design modifications to the ship’s island are needed, CVN 79 costs will increase, offsetting the Navy’s estimate of savings. Again for perspective, the Navy initially planned to install DBR on CVN 77; it has taken the Navy over 10 years to develop the DBR, which is still not through testing. Finally, achieving the legislative cost cap of $11.5 billion is predicated on executing a two-phased delivery strategy for CVN 79, which will shift some construction work and installation of the warfare and communications systems to after ship delivery. By design, this strategy will result in a less capable and less complete ship at delivery—the end of the first phase—as shown in figure 3 below. According to the Navy, delaying procurement and installation of warfare and communications systems will prevent obsolescence before the ship’s first deployment in 2027 and allow the Navy to introduce competition for the ship’s systems and installation work after delivery. As we reported in November 2014, the Navy’s two-phased approach transfers the costs of a number of known capability upgrades, including decoy launching systems, torpedo defense enhancements, and Joint Strike Fighter aircraft-related modifications, previously in the CVN 79 baseline to other (non-CVN 79 shipbuilding) accounts by deferring installation to future maintenance periods. While such revisions reduce the end cost of CVN 79 in the near term, they do not reduce the ultimate cost of the ship, as the costs for these upgrades will eventually need to be paid—just at a later point in the ship’s life cycle. That CVN 78 will deliver at higher cost and less capability, while disconcerting, was predictable. Unfortunately, it is also unremarkable, as it is a typical outcome of the weapon system acquisition process. Along these lines, what does CVN 78’s experience say about the acquisition process, and what lessons can be learned from it? In many ways, CVN 78 represents a familiar outcome in Navy shipbuilding programs. Across the shipbuilding portfolio, cost growth for recent lead ships has been on the order of 28 percent (see figure 4). Figure 4 above further illustrates the similarity between CVN 78 and other shipbuilding programs authorized to start construction around the same time. Lead ships with the highest percentages of cost growth, such as the Littoral Combat Ships and DDG 1000, were framed by steep programmatic challenges.
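As a rough, illustrative check of the figures cited in this statement (our own arithmetic, offered for perspective; the congressional cost caps include adjustments for inflation and other factors, so the percentage is an approximation rather than a pure measure of cost growth):

\[
\frac{\$12.9\ \text{billion} - \$10.5\ \text{billion}}{\$10.5\ \text{billion}} \approx 0.23, \qquad \frac{9\ \text{million hours}}{3.2\ \text{million hours}} \approx 2.8
\]

That is, CVN 78’s cost cap has grown by roughly 23 percent, and the labor-hour reduction assumed for CVN 79 is nearly three times the largest reduction previously achieved in carrier construction, which is the basis for characterizing the Navy’s efficiency assumptions as unprecedented.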
Similar to CVN 78, programs such as the Littoral Combat Ships and DDG 1000 have been structured around unexecutable business cases in which ship construction begins prior to demonstrating key knowledge, resulting in costly, time-consuming, and out-of-sequence work during construction and undesired capability tradeoffs. Such outcomes persist even though DOD and Congress have taken steps to address long-standing problems with DOD acquisitions. These reforms emphasize sound management practices—such as realistic estimating, thorough testing, and accurate reporting—and were implemented to enhance DOD’s acquisition policy, which already provided a framework for managers to successfully develop and execute acquisition programs. Today these practices are well known. However, outcomes of the Ford-class program illustrate the limits of focusing on policy- and practice-related aspects of weapon system development without understanding the incentives to sacrifice realism to win support for a program. Strong incentives encourage deviations from sound acquisition practices. In the commercial marketplace, investment in a new product represents an expense. Company funds must be expended and will not provide a return until the product is developed, produced, and sold. In DOD, new products represent revenue, in the form of a budget line. A program’s return on investment occurs as soon as the funding is initiated. The budget process results in funding major program commitments before knowledge is available to support such decisions. Competition with other programs vying for funding puts pressure on program sponsors to project unprecedented levels of performance (often by counting on unproven technologies) while promising low cost and short schedules. These incentives, coupled with a marketplace characterized by a single buyer (DOD), low volume, and a limited number of major sources, create a culture in weapon system acquisition that encourages undue optimism about program risks and costs. To the extent Congress funds such programs as requested, it sanctions—and thus rewards—optimism and unexecutable business cases. To be sure, this is not to suggest that the acquisition process is foiled by bad actors. Rather, program sponsors and other participants act rationally within the system to achieve goals they believe in. Competitive pressures for funding simply favor optimism in setting cost, schedule, technical, and other estimates. The Ford-class program illustrates the pitfalls of operating in this environment. Optimism has pervaded the program from the start. Initially, the program sought to introduce technology improvements gradually over a number of successive carriers. However, in 2002, DOD opted to forgo the program’s evolutionary acquisition strategy in favor of pursuing revolutionary technological advances on the lead ship. Expectations of a more capable ship were promised, with cost and schedule goals that were out of balance with the technical risks. Further, the dynamics of weapon system budgeting—and in particular, shipbuilding—resulted in significant commitments made well in advance of critical acquisition decisions, most notably the authorization to start construction. Beginning in 2001, the Ford-class program began receiving advance procurement funding to initiate design activities, procure long-lead materials, and prepare for construction, as shown in figure 5 below. By the time the Navy requested funding for construction of CVN 78 in 2007, it had already received $3.7 billion in advance procurement funding.
It used some of these funds to build 13 percent of the ship’s construction units. Yet at that time the program had considerable unknowns—technologies were immature and cost estimates unreliable. Similarly, by 2013, Congress had already appropriated nearly $3.3 billion in funding for CVN 79 construction. This decision was made even though the Navy’s understanding of the cost required to construct and deliver the lead ship was incomplete. A similar scenario exists today, as the Navy is requesting funding for advance procurement of CVN 80 while also constructing CVN 78 and CVN 79. While these specifics relate to the Ford-class carrier, the principles apply to all major weapon system acquisitions. That is, commitments to provide funding in the form of budget requests, congressional authorizations, and congressional appropriations are made well in advance of major program commitments, such as the decision to approve the start of a program. At the time the funding commitments are made, less verifiable knowledge is available about a program’s cost, schedule, and technical challenges. This creates a vacuum for optimism to fill. When the programmatic decision point arrives, money is already on the table, which creates pressure to make a “go” decision, regardless of the risks now known to be at hand. The environment of Navy shipbuilding is unique in that it is characterized by a symbiotic relationship between the buyer (the Navy) and the builder. This is particularly true in the case of aircraft carriers, where there is only one domestic entity capable of constructing, testing, and delivering nuclear-powered aircraft carriers. Consequently, the buyer has a strong interest in sustaining the shipbuilder despite shortfalls in performance. Under such a scenario, the government has a limited ability to negotiate favorable contract terms in light of construction challenges and virtually no ability to walk away from the investment once it is underway. The experiences of the Ford-class program are not unique—rather, they represent a typical acquisition outcome. The cost growth and other problems seen today were known to be likely in 2007—before a contract was signed to construct the lead ship. Yet CVN 78 was funded and approved despite a knowingly deficient business case; in fact, the ship has been funded for nearly 15 years. It is too simplistic to look at the program as a product of a broken acquisition process; rather, it is indicative of a process that is in equilibrium. The process has worked this way for decades with similar outcomes: weapon systems that are the best in the world but that cost significantly more, take longer, and perform less well than advertised. The rules and policies are clear about what to do, but other incentives force compromises of good judgment. The persistence of undesirable outcomes such as cost growth and schedule delays suggests that these are consequences that participants in the process have been willing to accept. The process is not broken in the sense that it is rational; that is, program sponsors must promise more for less in order to win funding approval. This naturally leads to an unexecutable business case. Once funded and approved, reality sets in, and the program must then offer less for more. Where do we go from here? Under consideration this year are a number of acquisition reforms. While these aim to change the policies that govern weapon system acquisition, they do not sufficiently address the incentives that drive the behavior.
As I described above, the acquisition culture in general rewards programs for moving forward with unrealistic business cases. Early on, it was clear that the Ford-class program faced significant risks due to the development, installation, and integration of numerous technologies. Yet these risks were taken on the unfounded hope that they were manageable and that risk mitigation plans were in place. The budget and schedule did not account for these risks. Funding approval—authorizing programs and appropriating funds—is among the most powerful oversight tools Congress has. The reality is that once funding starts, other tools of oversight are relatively weak—they are no match for the incentives to over-promise. Consequently, the key is to ensure that new programs exhibit desirable principles before they are approved and funded. There is little that can be done from an oversight standpoint on CVN 78. In fact, there is little that can be done on CVN 79, either. Regardless of how costs will be measured against cost caps, the full cost of the ships—as yet unknown—will ultimately be borne. For example, while the Joint Precision Approach and Landing System has been deferred from the first two ships, eventually it will have to be installed on them to accept the F-35 fighter. The next real oversight opportunity is on CVN 80, which begins funding in fiscal year 2016. Going forward, there are two acquisition reform challenges I would like to put on the table. The first is what to do about funding. Today, DOD and Congress must approve and fund programs ahead of major decision points and key information. With money in hand, it is virtually impossible to disapprove going forward with the program. There are sound financial reasons for making sure money is available to execute programs before they are approved. But they are also a cause of oversold business cases. Second, in the numerous acquisition reform proposals made recently, there is much for DOD to do. But Congress, too, has a role in demanding realistic business cases through the selection and timing of the programs it chooses to authorize and fund. What it does with funding sets the tone for what acquisition practices are acceptable. Mr. Chairman and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this statement, please contact Paul L. Francis at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Diana Moldafsky, Assistant Director; Charlie Shivers; Burns C. Eckert; Laura Greifner; Kelsey Hawley; Jenny Shinn; Ozzy Trevino; Abby Volk; and Alyssa Weir.
The following summarizes our prior recommendations on the Ford-class program and DOD’s responses and actions.
GAO recommendations: Improve the realism of CVN 78’s budget estimate, and improve the Navy’s cost surveillance capability.
DOD response and actions: While the department agreed with our recommendations in concept, it has not fully taken action to implement them. The CVN 78 cost estimate continues to reflect undue optimism.
GAO recommendation: Conduct a cost-benefit analysis on required CVN 78 capabilities, namely reduced manning and the increased sortie generation rate, prior to ship delivery.
DOD response and actions: DOD agreed with the need for a cost-benefit analysis, but did not plan to fully assess CVN 78 capabilities until the completion of operational testing after ship delivery.
GAO recommendation: Update the CVN 78 test plan before ship delivery to allot sufficient time after ship delivery for land-based testing to complete prior to shipboard testing.
DOD response and actions: DOD agreed with our recommendation to update the CVN 78 test plan before delivery and has since updated the test and evaluation master plan (TEMP). However, it did not directly address our recommendation related to ensuring that sufficient time is allotted to complete land-based testing prior to beginning integrated testing.
GAO recommendation: Adjust the CVN 78 planned post-delivery test schedule to ensure that system integration testing is completed before IOT&E.
DOD response and actions: DOD partially agreed with our recommendation to adjust the CVN 78 planned post-delivery schedule, but current test plans still show significant overlap between integrated test events and operational testing.
GAO recommendation: Defer the CVN 79 detail design and construction contract until land-based testing for critical systems was complete, and update the CVN 79 cost estimate on the basis of actual costs and labor hours needed to construct CVN 78 during the recommended contract deferral period.
DOD response and actions: DOD disagreed with our recommendation to defer the award of CVN 79’s detail design and construction contract. However, shortly after we issued our report, the Navy postponed the contract award, citing the need to continue contract negotiations. While DOD did not agree to defer the CVN 79 contract as recommended, it did agree to update the CVN 79 cost estimate on the basis of CVN 78’s actual costs and labor hours. DOD has updated CVN 79’s budget estimate, which we note is based on optimistic assumptions.
Commuter demand and congestion between New Jersey and New York City across the Hudson River is projected to increase as the limited passenger rail infrastructure continues to age, highlighting the need for improvements to the trans-Hudson commuter rail system into Manhattan. Planning agencies have forecasted that, fueled by population growth in regions west of the Hudson River and employment within Manhattan, demand for mass transit service crossing the Hudson River between New Jersey and nearby counties in New York and midtown Manhattan will grow by about 38 percent by 2030. This could result in more congestion and longer delays on existing roads, bridges, passenger rail, and other public transportation modes crossing the Hudson River. At the same time, the aging passenger rail infrastructure—comprising two single-track tunnels under the Hudson River leading to New York Penn Station—limits commuter rail capacity into Manhattan. The 100-year-old tunnels cannot meet the access and mobility demands of the future, given the projected growth in the region. In 1995, the three major local transit agencies—NJT, the Port Authority, and the Metropolitan Transportation Authority—jointly conducted a major investment study to consider ways to improve access between midtown Manhattan and the growing population west of the Hudson River. They evaluated more than 100 alternatives, including commuter railroad, bus, light rail, subway, automobile, and ferry. The study, completed in 2003, recommended three alternatives for advancement to the federal environmental impact process. While these alternatives would have provided more train capacity and were expected to meet projected demand, they did not share all of the elements of the final ARC project. In the draft environmental impact statement, published in 2007, NJT identified the alternative that became the final ARC project. Project development and refinements continued until completion of the environmental review process and entry of the project into final design in 2009. Figure 1 shows the new tracks, tunnel, and station that the project would have built. In addition, the project would have added a yard in New Jersey for storing trains that are not in service during the middle of the day, five station entrances at the New York Penn Station Expansion, and three elevator entrances that met Americans with Disabilities Act requirements. NJT applied for federal funding for a portion of ARC costs through FTA’s New Starts program. Under this program, funding is directed to public agencies on a largely competitive basis, primarily for the construction of new fixed-guideway transit systems and the expansion of existing fixed-guideway systems. Federal funding for the construction of New Starts projects is committed in a full funding grant agreement, which is a multiyear funding agreement between the federal government and a public agency. Although the ARC project was cancelled prior to obtaining a full funding grant agreement, FTA provided some federal funding for preliminary engineering, final design, and a portion of construction costs for the project. The construction funding was provided through an early system work agreement. Appendix I provides an overview of the New Starts process. While NJT sponsored the project and would have been the prime operator of services on the completed project, state and local funding for ARC would have come from the New Jersey Turnpike Authority and the Port Authority.
As part of the federal planning process for transportation, the region’s two metropolitan planning organizations—the North Jersey Transportation Planning Authority and the New York Metropolitan Transportation Council—adopted the project into their metropolitan transportation improvement plans, as required for federal funding. While the New Jersey governor had affirmed support for the ARC project in an April 6, 2010, letter to the Secretary of Transportation, on October 27, 2010, the governor announced the cancellation of the project, citing potential cost growth and the state’s fiscal condition. At the time of cancellation, NJT had completed most of the requirements needed to obtain additional federal funding. In particular, NJT had completed an in-depth environmental review and received FTA’s commitment of $601 million in New Starts funds to pay for initial construction activities. At the time of cancellation, NJT was negotiating the final cost estimate of the project with FTA in order to obtain the full funding grant agreement. This agreement would have provided the commitment for the full federal share of funds for the project. According to the studies we reviewed, the ARC project would have provided a significant increase in rail capacity for moving commuters between New Jersey and New York. NJT and other planning organization officials said that increases in capacity were a key mobility benefit of the project. The tunnel would have added two train tracks under the Hudson River, and as a result: The number of trans-Hudson peak hour trains (from 7:30 a.m. to 8:30 a.m.) would have more than doubled—from 23 to 48 trains per hour. The peak hour use of passenger capacity would have decreased from a near-capacity 95 percent to 60 percent at completion, providing additional capacity to accommodate future passenger growth. The benefits of other planned NJT rail expansions would have been enhanced. With this increase in capacity, projections made as part of the project’s environmental study showed an anticipated increase in transit ridership as follows: Daily trips between New Jersey and New York Penn Station would have increased from about 174,000 without the project to about 254,000 (a 46 percent increase) with the project by 2030. Considering the effects on other transit facilities, the project would have generated about 32,500 new daily transit trips across the Hudson by 2030. The ARC project would have reduced the need for passengers to transfer between trains, meaning many riders could commute on only one train. Passenger transfers lengthen commuting times, and avoiding transfers provides a benefit to riders. As a result of the ARC project, it was estimated that: Five existing NJT lines would no longer have required passengers to transfer trains to get to Manhattan. Daily passenger transfers would have declined from about 32,100 without the project to 1,000 with the project, a 97 percent reduction, as estimated in the environmental study. Riders traveling between New Jersey and Manhattan would have experienced an average of 23 minutes of travel time savings per trip. By building a second rail tunnel between New Jersey and Manhattan, the ARC project would have increased the overall reliability of rail service and added flexibility during service disruptions. A disruption of service in the existing NJT tunnel for any reason can result in major delays. Currently, one 15-minute train disruption in the existing tunnel can delay as many as 15 other NJT and Amtrak trains.
The ARC project would have provided: Flexibility to reroute trains from one tunnel to the other, if necessary. Continuous weekend service, as the new tunnels could remain open during tunnel maintenance. (Currently, with only one tunnel, traffic must be limited to perform necessary maintenance.) Better reliability, allowing for faster transit. Average scheduled time from Newark, New Jersey, to Manhattan would have decreased by 5 minutes during peak times and 3.5 minutes off-peak. Even with the added trans-Hudson commuters, the environmental study found that the new station would have reduced crowding at the adjacent New York Penn Station: Average passenger egress time from New York Penn Station would have decreased from 80 to 60 seconds (a 25 percent decrease). The new station would have resulted in a projected decrease in peak hour ridership at New York Penn Station of 37 percent—from about 27,800 passengers without the project to 17,200 with the project in 2030—thus alleviating crowding. Additionally, the environmental study estimated that, in general, the increased rail capacity across the Hudson River would have reduced the amount of travel by automobile that would otherwise occur. Port Authority officials told us that this increased rail capacity would help ease road congestion for trans-Hudson commutes. Specifically, the study projected that by 2030: Daily trans-Hudson automobile trips would be reduced by about 22,100 trips, or 4.9 percent, compared to the number of automobile trips without the project. Daily automobile vehicle miles traveled would have been reduced by about 590,000 miles compared to vehicle miles traveled without the project. Daily automobile vehicle hours traveled would have been reduced by about 22,000 hours compared to vehicle hours traveled without the project. According to the environmental study, mobility may further deteriorate without the ARC project. The New York City region faces serious mobility issues, and, as we have mentioned previously in this report, travel demand is projected to increase significantly. Environmental study forecasts estimated that trans-Hudson transit travel demand would rise from about 550,000 riders in 2005 to about 760,000 in 2030, an increase of about 38 percent. Without the tunnel, the environmental study projected that demand would not be met, and congestion and delays would increase. All the major trans-Hudson crossings—NJT, the Port Authority Trans-Hudson (PATH), and vehicular tunnels and bridges—are at or near capacity. According to the environmental study, the increased demand would stress the entire transportation network, including roadway, bus, ferry, and commuter rail systems. However, it is difficult to precisely determine the long-term effects of not building the tunnel because various other agencies are building, planning, or exploring possible transportation improvements that could affect overall mobility in the region. Local transportation officials cited a number of projects that could affect congestion and commutes in the region, although some are at the conceptual phase and may or may not be built. Possible projects include the extension of a subway line from New York City to New Jersey, Amtrak’s proposal to add a train line from New Jersey into New York City, bridge and transit tunnel improvements, a new bus terminal, and improvements to help freight flows into New York.
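The percentage figures in this mobility discussion follow directly from the projected counts; as an illustrative check (our own arithmetic, using the rounded projections cited above):

\[
\frac{254{,}000 - 174{,}000}{174{,}000} \approx 0.46, \qquad \frac{760{,}000 - 550{,}000}{550{,}000} \approx 0.38, \qquad \frac{80 - 60}{80} = 0.25
\]

that is, the 46 percent increase in daily trips with the project, the roughly 38 percent growth in trans-Hudson transit demand by 2030, and the 25 percent decrease in average egress time at New York Penn Station, respectively.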
Thus, the overall effect of canceling the ARC project must be understood in the regional context, and the effect is dependent on what transpires with these other projects.

Studies estimated the ARC project would have generated economic activity in the region that would have affected jobs and personal income, business activity, and home values, among other things. Most of the economic effects were expected during the building phase of the project. The studies we reviewed used regional economic models to measure the economic effects. However, the results of these models depend on larger economic conditions, such as the level of unemployment, and cannot be regarded as certain in all economic conditions. The studies addressed several aspects of economic activity as follows:

- Jobs and personal income. The environmental study estimated that during construction the ARC project would have provided about 59,900 jobs directly onsite and total additional employment in the region of about 98,300 jobs. The environmental study also suggested that over the longer term, the rail line would have required an estimated 410 jobs directly in transportation. Another study estimated that the project would generate about 5,700 construction-related jobs each year during the 9-year construction period. In addition, 10 years after completion of the project, the same study estimated the region would gain 44,000 new jobs as a result of improved access, which would make the region more competitive compared to other regions. The same study estimated that 10 years after completion, the project would have added almost $4 billion in personal income to the region, in 2006 dollars.
- Business activity. The ARC environmental study estimated the project would have produced an additional $9 billion in business activity during construction and $120 million per year in business activity over the long term.
- Home values. Another study estimated that houses in New Jersey communities served by the ARC project would see an average increase in home value of $19,000, or 4.2 percent, resulting from more efficient local travel and improved access to high-paying jobs in New York City.
- Tax revenues. Studies also indicated that increased tax revenues would have resulted from the increases in economic activity from the ARC project. The environmental study estimated that during construction, $1.5 billion in federal, state, and local taxes would have been generated, as well as an additional $16 million annually after the project was completed. Another study estimated that the project would result in an additional $375 million each year in property taxes for local governments.

These estimates are subject to several caveats. First, as noted above, the results of the economic models depend on larger economic conditions, such as the level of unemployment. Second, growth may simply shift to another part of the region or nation. Third, the project's economic impact also depends on how it was financed. Deficit financing—borrowing—provides an increase in the total amount of spending, which will have economic effects. In contrast, financing the project through taxes means that existing government and household spending to some extent is simply directed a certain way, rather than increasing the total amount of such spending. Analyzing the impact of the project in the context of these variables—the unemployment rate when the project is being built and project financing—was beyond the scope of the studies we reviewed.

The net impact on housing prices is also difficult to assess. First, the analyses—done several years ago—may not fully capture the effects of recent declines in the housing market.
Second, impacts on the housing market throughout the metropolitan area would, to some extent, reflect population shifts—some house prices may go up as a consequence of improved access to transit, while prices in other less desirable locations may go down. However, shifting the location of households and business activity does not necessarily expand the overall economy. Also, benefits to homeowners and commuters from the project would significantly overlap, since they are to some extent the same people; that is, the change in a homeowner's real estate value is the result of the improvement in travel time. Finally, even though the project was cancelled, not all of the anticipated economic activity was necessarily lost. For example, according to Port Authority officials, the Port Authority redirected funds it had allocated to the ARC project to other projects in the region, which could increase employment and economic activity tied to those projects. Likewise, funds that New Jersey planned to allocate to the ARC project were reallocated to the state's highway trust fund, which would then support economic activity related to highway projects. However, these highway projects would not necessarily be in the New York City region.

The ARC environmental study estimated the project would have created limited but mostly positive environmental effects. (See fig. 2.) The primary positive effect would have been a long-term reduction in air pollution, although it is difficult to predict how much this reduction in pollutants would affect the entire New York City region. Air quality effects are of particular relevance in the development of transit projects. Pursuant to law, FTA considers whether a project is in an area that has not attained air quality standards required by the Clean Air Act as a factor in selecting projects for the New Starts program. According to the Environmental Protection Agency, the entire New York City region is out of compliance with certain ambient air quality standards that are designed to protect public health. The project would have reduced automobile trips and thereby decreased emissions that contribute to existing air quality problems in the region and related public health problems. According to the Environmental Protection Agency, adverse health effects associated with air pollutants include increased respiratory symptoms, hospitalization for heart or lung disease, and premature death. Local transportation agency officials told us that air quality factors were important when considering the potential environmental effects of the ARC project. Over the long term, air quality would have been positively affected due to an estimated overall daily decrease of about 590,000 in vehicle miles traveled in the region and about 22,100 fewer trans-Hudson vehicle trips. While long-term air quality effects were generally positive in nature, the results of these changes would be dispersed over the entire metropolitan area and were too difficult to estimate for the New York region, as noted in the environmental study. According to the environmental study, other adverse environmental effects would have been short term and mitigated. Among these effects were negative effects on air quality, mainly related to dust created by excavation and construction and exhaust emissions from equipment; noise; potential storm water runoff; vibration; potential soil erosion; and potential disturbance of various contaminated sites.
FTA determined that these short-term negative effects were adequately addressed by mitigation plans.

In 2003, the first cost estimates for the concept of a new commuter rail tunnel between New Jersey and New York—developed by NJT and other local agencies in the major investment study—ranged from $2.9 billion to $3.6 billion (in year 2000 dollars). These estimates were for a project that was largely conceptual and did not rely on significant engineering design work. Further, not all project costs and elements were included in these estimates. In 2006, after the sponsoring agencies selected a locally preferred alternative, FTA accepted $7.4 billion as the first cost estimate for the project. This estimate included an expanded New York Penn Station as well as construction, engineering, oversight, and management costs; operational systems; rolling stock; real estate; startup cost; and environmental mitigation. ARC project cost estimates increased over time, as shown in table 1.

In general, changes in cost estimates throughout the process of planning and designing a transportation project are normal and may happen for a number of reasons. First, as a project progresses from a concept on paper to final design and construction, a more accurate understanding of what the project entails may evolve, and the change in cost estimates may reflect a more accurate understanding of what actually constitutes the project. For example, according to Port Authority officials, early in the project they learned that there were no existing surveys of New York Penn Station, and they had to survey the station before detailed designs could be developed. As shown in figure 3, cost estimates are more uncertain at the beginning of a project (the range is wide) because less is known about its detailed design and construction requirements, and therefore the opportunity for change is greater. Second, costs can appear to change if they are not expressed in a consistent manner, that is, in constant year dollars (to eliminate any inflationary effects) versus year of expenditure dollars (that may mask any changes in real terms because of inflation). Third, project cost estimates are sensitive to factors such as changes to the scope of the project. In some cases, a sponsor may reduce the scope or add more features to the project as the design progresses. Uncertainty about the costs is reduced as the project scope is better defined, but costs also may increase. Fourth, cost estimates can change as risks are assessed and reassessed throughout project development, causing the amount FTA requires project sponsors to set aside for project contingency to increase or decrease. For example, FTA officials said risk factors could include changes in real estate costs, new information involving surface or subsurface ground conditions and materials, or the degree of competition among contractors. According to FTA officials, risks like these can affect the cost of a project, and sponsors may never adequately address all of them, but at a minimum both the sponsor and FTA must be aware of what those risks are.

The ARC project cost estimates increased from the $7.4 billion estimate in 2006 for a number of reasons:

- In 2008, FTA's cost estimates ranged from $9.5 billion to $12.4 billion, based on potential scenarios in its 2008 Risk Assessment, which not only assumed different levels of risk but also included $1.7 billion set aside for contingency.
- After discussions, FTA and NJT agreed upon a baseline cost estimate of $8.7 billion in 2009.
- FTA's 2010 Risk Assessment contained the next estimated cost—as high as $13.7 billion—as the engineers developed a more accurate understanding of what the project entailed. However, NJT did not see costs rising to this level and projected a lower expected cost range, including a maximum $10 billion final cost.
- After considering comments from NJT, FTA revised the cost range to $9.8 billion to $12.4 billion. This estimate included a more refined cost estimate of potentially higher construction and other work costs. In addition, the contingency amount was increased due to a reassessment of risks related to delays in awarding project contracts.

Federal, state, and local sources would have funded the ARC project, as shown in table 2. As of April 2010, about half the estimated cost of about $8.7 billion would have come from federal sources, with the remainder divided at the local and state levels between the Port Authority and the New Jersey Turnpike Authority. In addition to New Starts funds, New Jersey was planning to use certain federal highway funds that may be used for transit capital purposes. Specifically, New Jersey planned to use part of its federal Congestion Mitigation and Air Quality Improvement and National Highway System funding for the ARC project. State and local funds included $3 billion from the Port Authority, which formally approved this funding commitment. The state of New Jersey planned to add $1.25 billion that was to have come from increased tolls on the New Jersey Turnpike.

In August 2009, FTA entered into an early system work agreement with NJT. This agreement, which FTA and NJT amended in 2010, made available about $910.3 million for certain project activities, such as tunnel construction contracts, property and easement acquisitions in New York, professional services related to the project's final design, construction permits, insurance, and a contingency reserve. As of 2010, NJT had expended about $271 million of the $910.3 million. When the project was cancelled, the Department of Transportation claimed that the $271 million in expended federal funds should be recovered by the federal government, and New Jersey disputed this claim. On September 30, 2011, the Department of Transportation and New Jersey agreed that New Jersey would return $95 million, which included $51 million in New Starts funds and $44 million in American Recovery and Reinvestment Act funds. In addition, New Jersey agreed to spend about $128 million in Congestion Mitigation and Air Quality Improvement funds on transit projects approved by the Department of Transportation.

Because the project was terminated before FTA and NJT entered into a full funding grant agreement, there was no final commitment by all the parties to fully fund the project. The general project agreement, which was a document prepared as part of the New Starts process and signed by NJT and the Port Authority in 2009, addressed potential cost growth. According to the agreement, if costs exceeded $8.766 billion (or if less than $3 billion was provided by FTA), both parties agreed to work together to obtain additional funding sources. According to Port Authority officials, although both parties signed the agreement, there was no commitment of assistance from the Port Authority in the event that the project experienced cost increases. Port Authority officials told us that the agency's existing $3 billion commitment was the maximum the agency could provide to the project, given the constraints of its overall capital program.
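The funding split just described can be checked with simple arithmetic. The short sketch below (in Python) is illustrative only: it uses the figures cited above and infers the federal amount as the remainder of the $8.7 billion baseline estimate, since this report characterizes the federal share only as "about half."

    # Illustrative check of the ARC funding split described above. All
    # figures come from this report; the federal amount is inferred as
    # the remainder of the baseline estimate rather than itemized.
    baseline = 8.7e9           # agreed 2009 baseline cost estimate, dollars
    port_authority = 3.0e9     # Port Authority commitment
    nj_turnpike = 1.25e9       # planned New Jersey Turnpike toll funds
    federal = baseline - port_authority - nj_turnpike

    for source, amount in [("Federal sources", federal),
                           ("Port Authority", port_authority),
                           ("NJ Turnpike Authority", nj_turnpike)]:
        print(f"{source:22s} ${amount / 1e9:5.2f} billion "
              f"({amount / baseline:5.1%} of the estimate)")
    # Federal sources work out to about $4.45 billion, or roughly 51
    # percent, consistent with the "about half" figure cited above.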
In the weeks preceding the project's cancellation, the Secretary of Transportation and the governor of New Jersey held discussions on additional funding sources for the ARC project or a reduction in project scope. The additional funding options discussed included increased funding by the federal government, New Jersey, and the Port Authority; a federal railroad loan; or a public-private partnership contribution. Because the project was terminated before a full funding grant agreement was entered into between FTA and NJT, there was no final agreement by all the parties on the issue of responsibility for ARC cost growth.

The Department of Transportation reviewed a draft of this report and provided technical comments, which we incorporated in the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of Transportation, and the Administrator of the Federal Transit Administration. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix II.

The Federal Transit Administration (FTA) provided federal funding for a portion of the Access to the Region's Core costs through its New Starts program. Under this program, funding is directed to public agencies on a largely competitive basis, primarily for the construction of new fixed-guideway transit systems and the expansion of existing fixed-guideway systems. Federal funding for construction of New Starts projects is committed in a document called a full funding grant agreement—a multi-year agreement between the federal government and a public agency that is subject to the availability of appropriations. The agreement establishes the terms and conditions for federal financial participation, including the maximum amount of New Starts funding being committed. To obtain this grant agreement, a project must be approved by FTA for final design and construction and have gone through a series of steps that make up the New Starts approval process. The phases of the New Starts planning and development process include systems planning, alternatives analysis, preliminary engineering, and final design.

Systems planning. Systems planning involves the continuing regional transportation planning process carried out by metropolitan planning organizations in urban areas throughout the United States. This process produces long-range transportation plans and shorter-range transportation improvement programs, along with environmental and other analyses.

Alternatives analysis. The analysis of alternatives examines the benefits and costs of different options, such as light rail or bus rapid transit, in a specific transportation corridor or in a regional sub-area. It concludes with the selection of a locally preferred alternative and adoption of that alternative into a fiscally constrained long-range transportation plan. The project sponsor submits the proposed project to FTA for evaluation so as to gain approval to enter preliminary engineering, the next phase of development.
FTA evaluation does not include a full cost-benefit analysis but does consider cost-effectiveness and other benefits of the proposed project.

Preliminary engineering. Preliminary engineering involves the project sponsor refining the project by examining the costs, benefits, and impacts of different design alternatives, and completing an analysis of environmental impacts as required by the National Environmental Policy Act of 1969. Once preliminary engineering is complete, FTA evaluates and rates the project to determine whether it can be approved into final design.

Final design. In the project's final design phase, the project sponsor prepares final construction plans and cost estimates and, if needed, includes right-of-way acquisition and relocation of utilities. After final design is complete, FTA may approve the project for a full funding grant agreement, at which point the project may move into the construction phase. In some cases, FTA may obligate some of the funding expected to be provided in the full funding grant agreement through an early system work agreement. Although not a guarantee of full funding, an early system work agreement provides funding so that work can begin before full funding is awarded.

In addition to the contact named above, Teresa Spisak (Assistant Director), Robert Ciszewski, Alexander Lawrence, David Hooper, Hannah Laufe, Joshua Ormond, Amy Rosewarne, and Max Sawicky made key contributions to this report.

Studies have estimated that transit travel demand between New Jersey and Manhattan will increase by 38 percent by 2030. The Access to the Region's Core commuter rail project was designed to help meet that rising demand. In October 2010, the governor of New Jersey, citing potential cost growth and the state's fiscal condition, withdrew state support and cancelled the project. New Jersey Transit (NJT) was the lead agency for the project, supported by the Port Authority of New York and New Jersey (Port Authority). The project was to be partially funded under the Federal Transit Administration's (FTA) New Starts program. GAO was asked to examine (1) what would have been the mobility, economic, and environmental benefits of the project according to major planning studies; (2) the project cost estimates over time; and (3) how, if at all, documents prepared as part of the New Starts process addressed potential cost growth for the project. GAO reviewed the literature and major project planning studies, FTA reports, and economic and cost estimates by NJT and other planning organizations. GAO interviewed officials from FTA, state and local transit agencies, and local planning organizations. GAO is making no recommendations in this report. The Department of Transportation provided technical comments, which GAO incorporated in the report.

Studies estimated that the Access to the Region's Core commuter rail project would have provided mobility benefits, but other benefits would either have been limited or are difficult to measure. According to various studies:

- The project would have helped meet the projected increase in travel demand and improved mobility by doubling the number of daily peak period trains and significantly increasing daily trips between New Jersey and Manhattan (from about 174,000 without the project to 254,000 with the project by 2030), while reducing transfers and station crowding and improving reliability of service.
- The project potentially would have generated economic activity in the region in the form of jobs and income, business activity, and increased home values, but many economic effects were hard to predict with certainty. For example, the extent to which the project would shift the location of economic activity, versus providing additional net economic activity, is uncertain.
- The project was estimated to have created limited but mostly positive environmental effects (in particular, improved air quality) and included measures to mitigate negative effects such as noise and storm water runoff.

Over time, the cost estimates for the project increased from an initial estimate of $7.4 billion in 2006. In 2008 and 2010, FTA performed risk assessments and revised the cost estimate. FTA and NJT agreed upon a baseline cost estimate of $8.7 billion in 2009. After considering comments from NJT, which projected lower costs than FTA, FTA revised its estimate and issued a cost estimate of $9.8 billion to $12.4 billion in October 2010. As of April 2010, federal sources were expected to fund about half the cost, with the remainder divided between New Jersey Turnpike funds and the Port Authority. Because the project was terminated before FTA and NJT entered into a full funding grant agreement, there was no final agreement by all the parties on the issue of responsibility for project cost growth. While the Secretary of Transportation and the governor of New Jersey held discussions on additional funding options, planning documents did not address the source of funding of potential cost growth for the project.
For over 200 years, the Postal Service and its predecessor have operated with a statutorily imposed monopoly restricting the private delivery of letters. The monopoly was created by Congress as a revenue protection measure to enable the postal system to fulfill its mandate of providing uniform rates for at least one class of letter mail and delivery of letter mail to patrons in all areas, however remote. The monopoly was established in a set of criminal and civil laws called the Private Express Statutes (the Statutes) (18 U.S.C. 1693-1699 and 39 U.S.C. 601-606). A related law prohibits persons from placing letters without postage into a mailbox (18 U.S.C. 1725). Violators of these restrictions are subject to maximum fines of $5,000 for individuals and $10,000 for organizations and, in some cases, imprisonment. For purposes of the Statutes, the definition of a letter is established by the Postal Service in regulation. The Postal Service broadly defined a letter as any message directed to a specific person or address and recorded in or on a tangible object. (See 39 C.F.R. 310.1.) Although Congress has reviewed the need for the monopoly and has broadened or reduced it at various times over the past 200 years to accommodate changes in technology and transportation, the statutory monopoly has generally remained intact. The Postal Reorganization Act of 1970 did not change it, but Congress directed the newly established Postal Service Board of Governors to evaluate the need to modernize the Statutes. In a 1973 report, the Board recommended no change, stating its belief that the Statutes were still needed as a revenue protection measure to prevent "cream-skimming," i.e., competitors offering service on low-cost routes at low prices, leaving the Service with high-cost routes. Since the 1970 reorganization, however, the Service has narrowed the scope of the monopoly by exempting certain types of correspondence from the definition of a letter in its regulations and by suspending the Statutes for other letters.

Under the Postal Reorganization Act of 1970 (the 1970 Act), Congress expected the Service to operate in a businesslike manner while, at the same time, fulfilling its mission as a public service. To this end, Congress removed the Service from its position in the Cabinet and made it an "independent establishment of the executive branch." It exempted the Service from many of the laws that apply to federal agencies and gave the Board of Governors the sole power to appoint and fire the Postmaster General and the Deputy Postmaster General. However, the Service is not a business entity. It is subject to congressional oversight and to certain laws that apply to other parts of the executive branch. It is also required to submit proposed changes in postal rates and fees and in postal classifications and products to the independent Postal Rate Commission. Proposed changes are subject to a review process, which includes public hearings where interested parties, including the Service's competitors, can voice their views.

In 1995, about 90 percent of all U.S. mail originated with business or institutional mailers and the remaining 10 percent with households. Letters fall almost entirely into two classes: First-Class (including Priority Mail) and third-class, which consists largely of advertisements. The Service has defined a letter broadly but has not determined precisely how much of the mail stream is subject to the Statutes.
However, reasonable estimates of the protected mail can be made on the basis of the Service's detailed breakouts of the mail stream that it used in setting postage rates. The Commission used these breakouts to estimate that the vast majority (83 percent) of the Service's overall mail volume meets the Service's definition of a letter and thus is subject to the Statutes. (See fig. 1. In the figure, the remainder represents estimated mail volume not protected by the Private Express Statutes (PES); total mail pieces in fiscal year 1995 were about 180.7 billion, about 464,000 of which do not fall into the categories shown and are excluded from the figure.)

In June 1996, the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight, introduced legislation (H.R. 3717) to reform the Postal Service. Under this bill, among other provisions, a new system for establishing postage rates, classes, and services would be established, and delivery of letter mail priced at less than $2.00 would be restricted to the Postal Service. According to the Subcommittee's analysis, if the bill is enacted into law, more than 80 percent of the Service's total revenue would still be protected by law, and therefore the Service would still be provided sufficient revenue to carry out its mandates to the American public.

The basic purpose of the letter mail monopoly has not changed in more than 200 years. That purpose is to ensure that the Postal Service has sufficient revenues to carry out its public service mandates, including regular mail delivery service (typically 6 days a week) to all communities. The mailbox restrictions also protect the Service's revenue and increase the security of the mail by limiting legal access to mailboxes. Unlike its competitors, the Service has certain financial advantages: for example, it is not required to pay income taxes and does not provide a return (e.g., dividends) to shareholders. However, it must also meet specific public service obligations, and its ability to control operating costs and set postage rates competitively is constrained by law. It is not chartered or empowered to compete with private firms but rather is mandated to function as a public enterprise and provide mail service to all communities, not just those that are profitable to serve.

Given its competitive environment and operating constraints, the Service has changed postage rates to recognize some of the variations in the cost of handling letters. Consequently, postage rates overall, including First-Class letter mail rates, have become less uniform since 1970. To illustrate, in 1970, First-Class mail had just two 1-ounce rates: 8 cents for regular mail and 11 cents for air mail. In 1995, First-Class mail had eight rates. The rates now vary depending on such things as whether large mailers participate with the Service in "worksharing." For a First-Class letter weighing up to 1 ounce, the worksharing rates range from 25.4 to 30.5 cents, compared to the 32-cent base rate. Unlike contract rates that private carriers negotiate with individual customers, these rates are available to all qualifying mailers. In 1970, no such discounts were offered to any mailers.

Maintaining the current post office infrastructure also has become more expensive. This has occurred, in part, because of changes in the overall mix of mail. The volume of residential services, such as personal correspondence and stamp sales handled at post offices, has declined, while business volume has increased.
For example, in 1995, bulk advertising mail was a much higher percentage of total mail volume than in 1970. Bulk mail typically is accepted at the Service's mail processing plants rather than at post offices. The effects of these changes in mail mix can be seen in the financial operations of the Service's post offices. According to Service data, of the 39,149 post offices it operated in fiscal year 1995, 17,702 (about 45 percent) reported taking in annual revenues that were lower than their aggregate expenses for the same year by about $1.1 billion. The Service is taking steps to upgrade many post offices and make them more accessible to customers. However, the 1970 Act contains detailed criteria and procedures that the Service must follow to close a post office, such as announcing a proposed closing and providing time for anyone affected to appeal the action to the Postal Rate Commission.

Where private delivery has been permitted, the Service often has been unable to compete effectively because it charged higher prices or provided fewer or less dependable services than its competitors. Private carriers often use negotiated sales agreements to offer their customers lower rates and a broader range of services for overnight letters and packages as well as 2-day and 3-day package deliveries. Therefore, if the Statutes are relaxed to allow greater competition for letter mail delivery, the Service could lose more business to private firms unless it reduces its prices and improves the quality of its services. The Service is concerned that if it loses more business to private firms, its ability to provide the services mandated in the 1970 Act could be jeopardized. Its concern is heightened by anticipated losses of business mail volumes to electronic communications.

Despite criminal sanctions for violations, enforcement of the Statutes rarely occurs and has proven to be problematic. In response to pressure from mailers and competitors, a bill (S. 1541, 103d Cong., 1st Sess. (1993)) was introduced in October 1993 to limit the Service's authority to fine or otherwise penalize mailers who used private carriers. Also in response to this pressure, the Service has not initiated a compliance audit of any mailer since 1994. Limited available data suggest that violations of the Statutes may be common. For example, the Postal Inspection Service completed audits of 62 mailers between October 1988 and June 1994. It found that 39 (63 percent) had violated the Statutes. The Service believes, nonetheless, that the Statutes remain a useful and necessary deterrent to widespread use of private firms for letter delivery, and it now relies primarily on education as the principal means of encouraging compliance. The Service has assigned primary responsibility for public education to its marketing staff.

At times, the Service has yielded to pressure from competitors and mailers to allow more private letter delivery by issuing regulations suspending the Statutes for certain types of letters. Several parties, such as the Air Couriers Conference of America and Postal Rate Commission staff, have questioned whether Congress intended that the Service be able to suspend provisions of the criminal statutes, thereby allowing more private letter delivery. The Service believes that the Statutes give the Postmaster General the authority to permit private delivery of specified letters. However, some private sector competitors disagree with the Service.
They are concerned that if private delivery of certain letters is authorized only administratively, the Service could, at any time, modify the regulations and restrict or eliminate competitors' authority to continue delivering such letters.

The restrictions on private delivery contained in the Statutes have been defended by a number of parties, including the Kappel Commission, the Board of Governors in its 1973 recommendation to Congress, and some experts on the economics of postal services. These parties usually offer one or more of three basic justifications:

- A single provider, currently the Postal Service, can operate at a lower total cost to the nation than multiple suppliers can.
- Without restrictions on private delivery, "cream-skimming" by private competitors in the most profitable postal markets would undermine the Service's ability to provide universal service at reasonable, uniform rates.
- Postal services, historically, have been viewed as so important to binding the nation together that they should be essentially immune to disruption by labor disputes, bankruptcy, and other difficulties that private businesses face, regardless of whether this minimizes the cost to hard-to-serve customers or to the nation as a whole. In other words, the Service may minimize the cost to hard-to-serve customers, even if it does not minimize the cost to the nation as a whole.

The Postal Service believes that the above justifications remain valid today. However, several federal agencies, some of the Service's largest customers and competitors, and many economists and other experts outside the Service question the justifications, either because they do not consider the policy goals (e.g., uniform rates) very important or because they do not believe, as an empirical matter, that the Statutes are the best way of achieving them. Relevant literature shows that various economic arguments for and against the statutory restrictions on postal services have been made and debated. For example, many economists who have studied the postal monopoly seem to agree that mail delivery has more natural monopoly characteristics, i.e., lower unit cost per delivery as mail volumes increase, than other postal functions, such as transporting and sorting the mail. Those who argue that mail delivery should be treated as a natural monopoly suggest that with appropriate regulation, a single supplier of mail delivery services—but not necessarily other postal functions—would best serve the public interest, i.e., result in the lowest overall cost to postal customers. Others argue that if mail delivery reflects natural monopoly characteristics, a single service provider would emerge under free market conditions and deliver at the lowest possible cost. (See vol. II, ch. 2.)

Even though the Statutes have remained largely intact, numerous national and local mail delivery firms are now in business. Moreover, their numbers have increased, as have the volumes and variety of mail they deliver. Generally, the private firms that we studied can be separated into two groups, based on the types of delivery services they offer. One group primarily delivers urgent (overnight) mail and 2-day and 3-day (also called deferred) letters and parcels, all of which generally are referred to as expedited mail. The other group delivers unaddressed advertising circulars, periodicals, or both. Together, these groups compete on a local, national, and international basis for portions of markets previously served largely or exclusively by the Postal Service.
The Postal Service's strongest competitors are five national firms that offer expedited or parcel delivery services, including deferred package delivery services. Only one of these firms was operating in 1970, and most (three of five) entered the overnight package business after the Service suspended the Statutes for extremely urgent letters in 1979. All but one of these five competitors offered deferred package delivery services in 1995. Most were adding other services, such as same-day delivery nationwide, at the time of our review. None of those five firms disclosed detailed operating data by product line or type of service. However, we used publicly available data to compare their services with similar services offered by the Postal Service. As shown in figure 2, three of these five competitors offered or planned to offer the same range of expedited letter and parcel delivery services as the Postal Service offers, except for deferred letters. The Service's regulations permit private delivery of deferred letters under the suspension for extremely urgent letters if the rate for such delivery is at least twice the applicable First-Class rate or $3.00, whichever is greater.

The five private firms and the Service view deferred (second- and third-day) deliveries as a fast-growing market. As indicated in figure 2, four of the five private firms offered deferred package service, and one of those four also published rates for deferred letter service. If the Statutes were revised or repealed to permit private carriers to deliver deferred letter mail at lower rates than is now required, it appears others could add letters to their existing services with relative ease. The Service's Priority Mail is most comparable to the deferred services offered by the private delivery firms we reviewed. Priority Mail is a heavier-weight (more than 11 ounces) subclass of the Service's First-Class mail and is delivered at $3.00 per piece up to 2 pounds, with rates increasing to $77.09 on the basis of distance (up to 8 zones) and weight (up to 70 pounds). It is among the Service's fastest-growing product lines and one of its most profitable as measured in net revenue per piece. The Service does not know how much Priority Mail is protected by the Statutes because such mail is typically sealed from inspection. Service officials believe that a significant portion, possibly up to 70 percent, of the Priority Mail volume is letter mail and thus protected by the Statutes.

Unlike its competitors, the Postal Service cannot contract with individual customers to offer negotiated or volume rate discounts. Many of the Service's customers told us that the Service is less timely and dependable than its competitors. For example, under a contract awarded in 1990 to Federal Express (FedEx), the federal government obtains overnight letter and small package delivery anywhere in the United States, including Alaska, Hawaii, and Puerto Rico. Typically, that carrier's monthly "on-time" delivery performance for government clients has been slightly better than the Service's Express Mail performance and much better than its Priority Mail performance. Further, the Federal Express government rate was much less than the Service's Express Mail rate but higher than the minimum Priority Mail rate. (See table 1.) GSA competitively awarded a new contract, effective August 16, 1996, to replace the 1990 contract. FedEx is again the contractor and is to provide both overnight letter and 2-day package services. The new overnight rates are lower than those shown in table 1.
For example, the minimum overnight letter rate dropped from $3.75 per piece to $3.45. The rates for 1- and 2-pound packages, respectively, are $3.50 and $3.57 for overnight service and $3.40 and $3.45 for 2-day service.

We estimated that the Service's five principal competitors accounted for more than 85 percent of all U.S. domestic expedited and parcel delivery revenues, compared to about 15 percent for the Postal Service. These competitors offer delivery services on demand to virtually all domestic U.S. addresses. They also have lobbied Congress to allow more private letter delivery. These firms are not constrained to any great extent by the prohibition on using mailboxes because the items they deliver typically require a signature, are too large to fit into residential mailboxes, or are delivered inside to businesses. However, if Congress allows more private letter delivery, these constraints may become more important because the firms might find the use of mailboxes desirable to improve competitiveness.

Postal Service regulations consider advertising matter under 24 pages addressed to a specific person or occupant as letter mail and subject to the Statutes. Even so, 375 firms operate in 47 states and compete in a fast-growing advertising mail market, a subscriber publication delivery market, or both. Mostly small local firms, they are known collectively as the "alternate delivery industry," and they compete with the Postal Service for delivery of its third-class advertising mail and second-class publications mail. Collectively, these firms represent a significant and growing source of additional private competition for mail delivery. The number of such firms more than tripled (from 108 to 375) from 1982 to 1995. Of the 375 firms, 226 were established between 1988 and 1993. To develop and sustain profitable delivery operations, these firms have increased the volume and variety of items they deliver. Many newspaper publishers have established alternate delivery operations to serve their advertisers better, reduce mail costs, and improve delivery service. In addition, many alternate delivery firms have formed or joined nationwide alliances to market their services more effectively to national publishers or advertisers.

On the basis of the limited data available, we estimated that the Postal Service still delivers about 95 and 96 percent of the total volume of all periodical and advertising mail, respectively. However, representatives of several large mailer groups whose members depend heavily on third-class mail indicated that many of their members would be willing to shift some portion of their mail to private carriers if permitted to do so. According to an alternate delivery trade association, its member firms deliver circulars, tabloids, magazines, catalogues, directories, flyers, samples, and other printed materials and advertisements, primarily from businesses to households. The firms we studied target and deliver advertisements to households without using an address or mailbox. Items may be delivered to all households in a particular neighborhood and hung on a door knob or from a hook on a mailbox post, placed on a front porch or in a separate delivery tube, or tossed onto driveways or walkways. Although alternate delivery companies compete with the Postal Service primarily to deliver third-class advertising and second-class publications mail, some First-Class mailers also indicated a willingness to use such firms.
However, restrictions on mailbox access make delivery of First-Class mail by private firms less likely than delivery of third-class mail.

We judgmentally assessed the relative risk of the Service losing mail volume to private delivery firms, primarily by reviewing private sector delivery capacity and interviewing representatives of organizations representing most of the nation's business and institutional mailers. We also assessed the likely impact of various mail volume losses on the Service's postage rates. To do this, we assumed that if the Service experiences any significant loss of mail volume in the future, this would result in higher postage rates, not in reduced services or increased appropriated funds to the Postal Service. We used revenue, cost, and postage rate data provided to us by the Commission, which it had used for setting the current 32-cent basic letter mail rate and other new postal rates that became effective in January 1995. We supplemented our analysis of historical ratemaking data by using a financial forecasting model that presented estimates for 10 future years. This model was developed by Price Waterhouse LLP (Price Waterhouse) under contract with the Postal Service.

In assessing the risk of volume loss, we compared various factors, such as private delivery capacity, mailer and carrier interests, and service performance, for the letter mail classes and subclasses against each other and then judgmentally assigned a risk level ranging from "low" to "high." We compared the estimated financial effects (e.g., relative change in net revenue and the basic letter rate) of volume losses in each of those categories against each other to similarly characterize the likely impact on the Service. The results of our assessments are shown in table 2.

Given current private delivery capacity and prices, Priority Mail letters would be most at risk if the Statutes were to be relaxed. Some First-Class and third-class letters also could be diverted to private delivery, but the percentage of volume losses probably would be much lower than for Priority Mail letters, as figure 3 shows. Most nationwide private carriers we interviewed said they would be ready and willing to deliver letters designated as Priority Mail if the Statutes were relaxed. Given the success of nationwide carriers when competition with the Service has been permitted, it is likely that large numbers of mailers could shift some portion of their Priority Mail letters to private carriers almost immediately if the Statutes were to be changed. In part, this is because mailers perceive the quality of private carriers' services to be better than that of the Service. If private carriers were not required to charge at least twice the applicable First-Class rate (or $3.00, whichever is greater) for deferred letter delivery, they could offer contract rates that could be more competitive with the Service's $3.00 Priority Mail rate. This possibility, when combined with customers' perceptions of the differences in service quality, creates an even greater risk of Priority Mail volume losses for the Service.

Most mailers were satisfied with both First-Class postage rates and service. Our interviews indicated that unless both the Statutes were relaxed and the mailbox access restrictions were lifted, mailers likely would not shift much First-Class mail to private delivery. Most expedited letter and parcel carriers with existing nationwide delivery capabilities expressed little interest in pursuing residential First-Class letter mail delivery.
However, some local alternate delivery carriers indicated that they might pursue some First-Class mail deliveries if the Statutes were relaxed. Most third-class mailers we interviewed said they were not fully satisfied with postage rates or the timeliness and dependability of third-class mail delivery. Many said they would likely divert some third-class mail to private firms if the Statutes were relaxed. Similarly, most alternate delivery carriers said that they would likely pursue additional third-class, business-to-household mail deliveries. However, the collective capacity of the alternate delivery industry is limited when compared to the Postal Service's capacity. As previously indicated, we estimated that in 1995 the Postal Service delivered about 95 percent of all periodical and advertising mail. Even so, the Service faces a lower risk of third-class mail losses to private firms when compared to the possible Priority Mail losses.

If some volume losses were to occur, the financial effects on the Service would vary greatly among classes of mail consisting largely of letters. According to Service revenue and cost data, a loss of most or all Priority Mail or a loss of, say, 25 percent of third-class mail would have less effect on postage rates than a 5- to 10-percent loss of First-Class letter volume. The Postal Service's "margin," i.e., the difference between the rate charged and the related cost, varies significantly among classes of mail. Because of this difference and the relative volumes of letters in the several classes, the financial effects on the Service of losing a portion of some classes of letter mail could be much greater than for other classes. First-Class letter mail volume is critically important to the Service's overall revenue and its ability to cover operating costs. Most (88 percent) First-Class mail is lightweight (1 ounce or less) and is relatively easy for the Service to sort with its automated equipment. According to the most recent rate case data (Docket R94-1), First-Class mail revenue was estimated to cover about $32 billion, or 66 percent, of the Service's total operating cost and $11.7 billion, or 71 percent, of its total institutional cost (overhead) in fiscal year 1995.

At our request, the Postal Rate Commission and Price Waterhouse estimated the change in postage rates for all classes and subclasses of mail, as well as in the current basic letter rate of 32 cents (the postage for a First-Class letter weighing 1 ounce or less). Following our instructions, they assumed various hypothetical percentage losses of First-Class, Priority, and third-class mail volumes, which are largely made up of letters, in fiscal year 1995. We included the basic letter rate for analysis because the 1970 Act requires the Service to provide a uniform rate for at least one class of sealed envelopes, such as First-Class letters. As shown in figure 4, the effects of different First-Class letter volume losses—ranging from 5 percent up to 25 percent—on the Service's current basic letter mail rate would be more significant than if the same percentage losses occurred for Priority Mail and third-class letters in fiscal year 1995. As indicated in figure 4, a 25-percent loss of First-Class mail volume in fiscal year 1995 would have resulted in the need to increase the 32-cent basic letter rate by 3 cents to 35 cents. By way of comparison, since 1970, the First-Class stamp price has increased 9 times; each increase ranged from 2 to 4 cents.
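The mechanics behind estimates like those in figure 4 can be illustrated with a simple break-even calculation: diverted mail takes with it the contribution it was making toward institutional (overhead) costs, and the remaining mail must absorb that shortfall through higher rates. The sketch below (in Python) is illustrative only; the helper function and numeric inputs are our own simplified constructions, not the Postal Rate Commission's ratemaking model, and the figures are rough values derived from the fiscal year 1995 amounts cited above.

    # Simplified break-even sketch of how a letter volume loss pushes up
    # postage rates. This is NOT the Postal Rate Commission's model; the
    # inputs are rough values derived from figures cited in this section.

    def required_increase(lost_revenue, overhead_share, remaining_revenue):
        """Fractional across-the-board rate increase needed to recover
        the overhead contribution lost with diverted mail, assuming the
        attributable portion of cost disappears along with the mail."""
        shortfall = lost_revenue * overhead_share
        return shortfall / remaining_revenue

    first_class_revenue = 32.0    # billions of dollars (covered ~$32B of cost)
    overhead_share = 11.7 / 32.0  # ~$11.7B of that covered institutional cost
    total_revenue = 32.0 / 0.66   # First-Class covered 66% of total cost;
                                  # assumes revenue roughly equals total cost
    lost = 0.25 * first_class_revenue  # a 25-percent First-Class volume loss

    pct = required_increase(lost, overhead_share, total_revenue - lost)
    print(f"Basic letter rate: 32 -> {32 * (1 + pct):.1f} cents")
    # Prints about 34.3 cents; the Commission's detailed estimate, which
    # distributes costs across classes differently, was 35 cents.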
Although we have estimated the effects on the 32-cent stamp, the projected impact on revenue and rates associated with these volume losses could be substantial for all classes. For example, assuming a 25-percent loss each in Priority, First-Class, and third-class mail pieces in 1995, estimated revenue losses could have ranged from $690 million for Priority Mail up to $8.1 billion for First-Class mail. (See table 3.) We assumed for our estimates that the costs specifically attributed by the Service to these lost mail volumes would no longer be incurred. If the Service is able to reduce not only attributable costs but also some institutional costs to offset revenue losses, the effects of any losses of future mail volumes could be less than we have estimated. Conversely, to the extent that the Service is unable to reduce attributable costs enough to offset the related revenue losses, the effect on its rates would be greater than indicated.

Price Waterhouse used its model to estimate the effects of varying percentages of mail volume losses on revenue, cost, and postage rates over a 10-year period, 1996 through 2005. For this longer period, the model shows that the relative effects on the basic 32-cent letter rate for mail volume losses in the several classes are similar to the estimates made on the basis of the recent ratemaking data (Docket R94-1). A loss of 25 percent of First-Class mail volume could have a much greater effect on this rate than the same percentage loss of Priority Mail and third-class mail volumes. Specifically, the letter mail rate is estimated to be 41 cents in 2005, according to the model's baseline assumptions. The 41-cent rate would need to increase to 46 cents in 2005 assuming a 25-percent loss of First-Class mail volume. The rate would need to increase to only 42 cents in 2005, assuming a 25-percent loss of Priority Mail volume or third-class volume. (See fig. 5.)

A range of factors relating to the Service's (1) future mail volumes, (2) cost growth, and (3) service quality and ratemaking initiatives could lead to increases or decreases in future mail volumes. The effects of these factors are unknown, making it difficult to estimate how a change in the Statutes might affect the Service's revenues and rates.

Although the Service has faced competition for many years, it also has experienced substantial growth in overall mail volume. This growth has occurred despite both new communications technology, including facsimiles, desktop computers, and the Internet's World Wide Web, and suspensions of the Statutes under which private companies now carry most extremely urgent (overnight) domestic and outbound U.S. international mail. Notwithstanding the historical growth in mail volume and whether or not the Statutes remain intact, the Service anticipates losses of some First-Class, Priority, and third-class mail volumes primarily through diversion to electronic communications. According to the Postal Service, six of its seven "product lines"—correspondence and transactions, expedited mail, publications, advertising, standard packages, and international mail—are subject to competition from some form of electronic communication, private message and package delivery firms, or both. Its remaining and only nondelivery product, retail services, also faces increasing competition from private "postal" service firms. Further, Service officials believe that private firms would compete for delivery of large quantities of presorted, prebarcoded First-Class and third-class mail.
They said that such mail is more profitable to deliver and, therefore, more attractive to competitors than smaller quantities of mail for which customers do little or no presorting or prebarcoding. Further, the Service believes that if the Statutes are relaxed, presort bureaus and alternate delivery firms would develop alliances or in some other way combine their efforts to prepare and deliver letter mail in competition with the Postal Service. It believes that this development would occur quickly after any change in the Statutes.

With competition for delivery services increasing, the Service's employment levels and related labor costs have continued to grow since 1970. We previously reported that employee pay and benefits account for the vast majority of the Service's costs. Labor costs, representing pay and benefits for nonbargaining executives, managers, and supervisors, and bargaining craft employees, were over 80 percent of the Service's costs in 1995. That percentage has remained virtually unchanged since 1969, the year before passage of the 1970 Act. This trend has continued even though the Service has invested or plans to invest more than $5 billion in automation equipment since the early 1980s to reduce labor costs. Between April 1993 and November 1995, after the Service had largely completed a downsizing effort, overall postal employment (career and noncareer) grew by about 10 percent, from 782,000 to 855,000 employees. Almost all (98.6 percent) of this increase of 73,000 represented career employees, and more than two-thirds (69 percent) represented career clerk and city carrier employment. The vast majority of the Service's craft employees, those who collect, sort, and deliver mail, are protected from layoffs. This protection could result in more contentious labor contract negotiations and delay the Service in reducing its work force and labor costs to offset the effects of any significant downturn in mail volume.

The estimates of mail volume losses shown previously (see figs. 4 and 5) assume that the Service's attributable costs would decrease in proportion to revenue decreases. Under this assumption, attributable costs would be expected to drop by 1 percent for each 1-percent drop in postal revenue. We believe that this assumption is reasonable, particularly if any significant reduction in the Service's mail volume were to occur over several years, because in some earlier years the Service was able to make substantial work force reductions through attrition as well as buyouts and other incentives. For example, during the 4 years from May 1989 through April 1993, the Service reduced the career work force from about 774,000 to about 667,000, or almost 14 percent. This reduction largely represented clerks, carriers, mail handlers, and other unionized employees. The reduction occurred during a period when the Service's cumulative growth in mail volume was about 12 percent. However, to improve mail service, the Service later added employees. Between April 1993 and November 1994, the career work force grew to about 740,000, an increase of almost 11 percent. The Service's labor costs have grown for various reasons, such as increases in wage rates and overtime as well as growth in total postal employment. Therefore, the Service may not be able to reduce attributable costs at the same rate that revenue drops.
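That uncertainty, i.e., how much of the attributable cost the Service could actually shed, can be added to the break-even sketch shown earlier. In the variation below (again in Python, with illustrative figures of our own rather than the Price Waterhouse model), the fraction of each lost revenue dollar offset by avoided cost is varied to show how the required basic letter rate moves:

    # Sensitivity of the earlier break-even sketch to the cost-reduction
    # assumption. `offset` is the share of each lost revenue dollar that
    # is matched by avoided attributable cost: about 0.63 if attributable
    # costs fall in proportion to revenue, half that if costs are sticky.
    lost, remaining = 8.0, 40.5   # billions of dollars, as in the earlier sketch

    for offset, label in [(0.63, "costs fall in proportion to revenue"),
                          (0.315, "costs fall at half that rate")]:
        shortfall = lost * (1 - offset)
        rate = 32 * (1 + shortfall / remaining)
        print(f"{label:35s}: 32 -> {rate:.1f} cents")
    # The roughly 2-cent gap between the two scenarios parallels the
    # 44-cent versus 46-cent model results for 2005 discussed below.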
Because of this uncertainty, we arranged through the Service to have Price Waterhouse estimate the basic postage rate (now 32 cents) assuming that attributable labor costs were to be reduced at only one-half the rate of postal revenue losses. Assuming a 25-percent loss of First-Class mail, the estimated increase in the basic letter rate would differ if the Service reduced attributable labor costs (1) at the same rate as revenue decreased and (2) at one-half that rate. In this scenario, the basic letter rate would need to increase from 32 cents in 1996 to 44 cents in 2005 if revenue and labor costs drop at the same rate, or 46 cents if labor costs drop at one-half the rate of revenue loss. (See fig. 6.) Recognizing the need to protect and increase its revenue, the Service has many initiatives under way to enable it to compete more successfully with private firms by improving existing services as well as offering new services. The Service recently began a top-down initiative to improve internal processes that affect customer satisfaction. Its on-time delivery rates for overnight, First-Class mail improved in 1995 and early 1996. In 1995, it established a new unit to compete more aggressively in the international mail markets, where the Service’s postage rates are not subject to the Postal Rate Commission’s approval. The Service and mailers recently began implementing a reclassification of mail that is expected to encourage mailers to prepare mail better and thereby reduce the cost to sort and deliver such mail. The Service has also invested in market and product research with a view toward offering new electronics-related services in the future. Many of these initiatives had just started at the time of our review. It is not yet clear how they may affect the Service’s competitiveness. (See vol. II, ch. 4.) Many postal administrations around the world have mail monopolies to help meet universal letter delivery and other public service obligations. Many of the eight postal administrations we reviewed have been reformed in the past 15 years to give them much greater freedom to operate like private businesses. In a recent study by Price Waterhouse, these postal administrations were described as among the “most progressive.” The governments in these countries have used several different approaches, such as periodic reviews of delivery practices and agreements between the governments and the postal administrations, to ensure the continuation of universal mail service after reform. The scope of our work did not include an evaluation of postal reforms in these countries. Comparisons are difficult to make given the greater size of the U.S. Postal Service. Nevertheless, as we previously testified, postal reforms in other countries do hold some relevance for the United States. A variety of conditions led to the postal reform in other countries. A key reason was increased competition in the delivery and communications markets. In response, governments in most of the eight countries have granted postal administrations greater commercial freedom to meet growing competition. Some foreign postal administrations have taken a range of actions to become more competitive, such as downsizing the work force; increasing productivity; making changes to the postal retail network; and pursuing initiatives to compete in electronic mail, facsimile, electronic bill payment, and other electronic communications services. 
As competitive pressures increase, some other countries are contemplating further postal reforms, including additional steps to narrow or eliminate postal monopolies. Some of the countries we reviewed have redefined and limited their letter mail monopolies, and Sweden has eliminated the postal monopoly altogether. A common practice was to define the scope of the postal monopoly according to price, weight, urgency, or a combination of these factors. For example, the British postal administration limits the monopoly to letter mail with postage up to £1. This is in contrast to the definition of a letter in this country, where no measurable characteristics are used except for extremely urgent letters, for which the Service has suspended the Statutes. In 1979, the Service used a price limit as part of the criteria for suspending extremely urgent letters from the Statutes. Specifically, this limit provides that private firms may deliver letters if the price charged is at least twice the Service's First-Class postage rate or $3.00, whichever is greater. The Service believes this price limit—called the double-postage rule—is necessary to clearly distinguish those letters subject to the Statutes. None of the eight countries had laws that give their postal administrations exclusive access to private mailboxes. However, practical limitations to mailbox access exist in some countries, such as post office boxes and locked mailboxes accessible only to the customer and the postal administration. (See vol. II, ch. 5.) As it now operates, the Service has assumed two distinct roles as (1) a competitor with private delivery firms and (2) a federal entity established to provide universal mail service. Difficult policy issues arise out of these potentially conflicting roles, including (1) the extent to which the Private Express Statutes should restrict competition and (2) whether the Service could continue to provide universal service if the Statutes were relaxed. The Service's principal response to the growth in private mail delivery has been to avoid mail volume losses by trying to compete more aggressively. It cannot, however, compete head-to-head because it is charged by law with fulfilling a public service mission. Many of the 1970 Act's provisions, such as those for setting postage rates and resolving collective bargaining disputes, are designed to enable the Service to accomplish its mission as a regulated entity that operates in a noncompetitive environment. As the Service prepares to compete more effectively and pushes for greater freedom to compete, and as private capacity to deliver letters grows, the Statutes are likely to become more controversial. Calls for Congress to consider whether to modify the Statutes, or eventually eliminate them altogether, and make other changes in the 1970 Act are likely to become more intense. A number of considerations are particularly relevant to proposals to change the Statutes. These include (1) their underlying purpose, (2) their relationship to other provisions of the 1970 Act, and (3) the possible consequences for various stakeholders. The Statutes' purpose is clear. They exist to help ensure that the Postal Service has enough revenue to provide universal letter mail service to the American people. The Service supports the view that it should continue to be the sole provider of letter mail services.
However, the validity of this view has been questioned repeatedly since 1973 as the private mail delivery industry emerged and flourished, as economic theory and research evolved concerning the conditions under which monopoly services are supportable, as knowledge about the Service's operating costs increased, and as postal reform experience in other countries evolved. Consequently, it is not clear whether the underlying economic basis for the Statutes cited by the Postal Service in 1973 and on later occasions remains valid today. Nor is it clear that economic theory alone should guide policy decisions on what roles the Postal Service and private firms should play. Insufficient data exist to measure and evaluate the total cost of mail services to the American public and how such costs might differ if multiple firms provided such services. The Statutes are but one of many interrelated provisions of the 1970 Act and other federal laws that, together, are intended to help ensure that the Service remains a viable entity for providing mail services to all communities and that the public interest is served. Other provisions require that the Service's costs and rates be reviewed by the Postal Rate Commission. Proposed changes in postal rates can be contentious and can take up to 10 months to resolve. As we have reported, two studies completed in fiscal year 1992 showed that this process could possibly be shortened and streamlined to be more responsive to the Service's current needs. However, the criteria and process for setting postage rates prescribed in the 1970 Act are relevant and important to achieving other objectives of the act. Two such objectives, which are fundamental to the Service's public service mission, are (1) that each class of mail bear only the direct and indirect postal costs attributable to that class and not others, and (2) that parties affected by changes in postage rates be given an opportunity to comment on such changes. It is impossible to predict with certainty what the consequences might be should the Statutes be relaxed. The best available data indicate that a sweeping change of the Statutes that opens the Service's First-Class mail services to competition could affect postage revenue and rates severely. This prognosis, however, is subject to some critical assumptions, such as (1) the Service would not improve its service performance sufficiently to avoid large losses of mail volume to private firms and (2) firms now competing or those deciding to compete in the future would offer better prices or services than the Service. A further consideration is that the consequences of a change in the Statutes would flow not only to the Service but to all stakeholders, including the American public, mailers, and competitors. Assuming a continued commitment to providing traditional mail service in all communities and to providing a reasonable and uniform First-Class postage rate, a key question is whether these goals are advanced by protecting the Service's mail volumes by law. If the Service could meet these public service obligations and if increased competition were permitted, the public would be free to choose less costly or higher quality services. It is important to consider how incremental changes might affect all stakeholders in determining whether to relax the Statutes. There has been a general pattern among the other countries we reviewed of continuing to require universal service but also allowing greater competition for letter mail delivery.
The scope of the monopoly as it relates to the definition of a letter is more clearly delineated in some other countries than in the United States. As we have stated, most other countries employ some combination of price and weight to define monopoly-protected letters, whereas the Service's definition relies mainly on content, except for extremely urgent letters. The Service's 1979 regulations allow extremely urgent letters to be delivered by private firms if the price charged is above a minimum dollar threshold. According to the Service, this threshold is necessary to provide clarity and thereby facilitate compliance. Neither the Statutes nor the Service's regulations restricting private delivery of other letters include similar dollar or weight characteristics. Our review focused primarily on events and developments surrounding the Statutes during the approximately 25 years since the Postal Reorganization Act of 1970, which set up the U.S. Postal Service. We reviewed the legislative history of the Statutes and related laws and their implementation through Postal Service regulations; interviewed Postal Service officials at headquarters offices and field locations in Atlanta, GA; Chicago, IL; Dallas, TX; San Francisco, CA; and Jacksonville, FL; and reviewed relevant Postal Service data and reports. We examined records summarizing Postal Inspection Service audits of mailers' compliance with the Statutes and interviewed representatives of selected companies in Georgia and Alabama audited by the Postal Inspection Service. Our work on the development of the private sector message and package delivery industry included interviews with representatives of private delivery firms, major trade associations and mailer groups, knowledgeable industry observers, and Postal Service and other government officials. We also reviewed available literature and analyzed relevant postal service and industry data. To analyze the possible financial effects on the Service's revenue, costs, and postage rates if the Statutes are relaxed, we estimated the relative risk of the Service's letter mail stream, by class and subclass, primarily on the basis of interviews with current Service mailers and competitors. We also estimated the degree to which the Service's revenue and postage rates might have been affected if its estimated fiscal year 1995 letter mail volumes, by class and subclass, had been reduced by various percentages. For these estimates, we used Postal Service data that it had provided to the Commission in early 1994 to request new postage rates, including the 32-cent basic letter rate that became effective in January 1995. We also arranged with the Postal Service and its management consulting firm, Price Waterhouse, to develop estimates for us of possible changes in postage rates assuming that the Service's letter mail volumes were to be reduced by various percentages in future years. We obtained information on postal administrations in other countries from several reports, including a February 1995 report prepared by Price Waterhouse, and interviewed officials of several other postal administrations; visited the Canada Post Corporation; and reviewed annual reports and various other documents provided by foreign postal administrations. Our review was conducted primarily between May 1994 and February 1996 in accordance with generally accepted government auditing standards. (See vol. II, ch. 1.) We requested comments on a draft of volumes I and II of this report from the U.S.
Postal Service and the Postal Rate Commission. The Postal Service responded in a letter, with enclosure, dated August 29, 1996. The letter is reprinted in appendix I of this volume, and our comments on the letter itself are provided below. Because the enclosure to the letter raised technical matters related to the content of volume II, the letter with the enclosure is also reprinted in appendix II of volume II, and our detailed comments on those technical matters are provided in chapter 6 of volume II. The Commission did not provide written comments. However, Commission officials suggested several changes to volumes I and II of the draft to improve technical accuracy and completeness of the report. We incorporated those changes where appropriate. The Postal Service said that our report presents credible information on the purpose and application of the Private Express Statutes and related regulations. However, the Service expressed concern that we had ventured into speculating about the possible financial effects of eliminating or substantially relaxing the Statutes. The Service believed that in so doing we seriously underestimated the magnitude of revenue losses that would occur across all mail classes if Congress removes the Statutes. The Service said that such losses could harm the Service’s financial health and potentially undermine the historic postal reform legislation currently being considered by Congress. The Service said that it is difficult to forecast the Service’s financial situation 5 or 10 years into the future and that using different assumptions produces different results. We did not attempt to make long-range forecasts and predict future financial effects of changing the Statutes. Rather, our purpose was to show the sensitivity of the Service’s revenue, costs, and postage rates to various “what if” assumptions about changes in mail volume by class and subclass. Our principal method of examining these possible effects was to arrange with the Postal Rate Commission to use the same baseline data previously used by the Service and the Commission for estimating revenue, costs, and postage rates for planning and ratemaking purposes. At our request, the Commission developed a broad range of estimated effects on the Service’s revenue, costs, and postage rates, using various assumptions we provided about changes in mail volumes for letter mail classes and subclasses. It is important to note that all other assumptions and data used in examining these possible effects were those used by the Service and the Commission for establishing the postage rates that became effective in January 1995. As such, these possible effects are based on the Service’s official volume, revenue, and cost estimates for 1 year (fiscal year 1995) that it presented to the Commission in March 1994. To supplement the results of the Commission’s work, we arranged for additional estimates to be provided by Price Waterhouse, using a financial model that it had developed for the Postal Service. This model included the Service’s baseline estimates of such variables as mail volumes and revenue for 10 future years, 1996 to 2005. Using its model, Price Waterhouse showed how the Service’s baseline estimates might change each year if the Service were to lose specified percentages of its letter mail volume in each letter mail class and subclass. Thus, we did not attempt to predict what the Postal Service’s financial condition would be in 5 or 10 years if the Private Express Statutes were removed or relaxed. 
Given the Postal Service’s comments, however, we have further explained in our report the assumptions we identified for the analysis, how the estimates were derived, and how we intended for them to be used. We agree that as the Postal Service noted, other assumptions could lead to different results. Additionally, the Postal Service expressed concern about how removing the Statutes might affect universal delivery service at uniform rates. The Service said that alternate delivery firms are interested in serving the most profitable areas and not expensive-to-serve areas. The Service also said that (1) the Statutes provide the financial underpinning for universal service, (2) removing the Statutes could unintentionally result in the end of universal and affordable mail service as the American people have known it, and (3) Congress should proceed “with great caution” when considering changes to the Statutes. We agree that the Statues have provided the financial underpinning for universal service. We identified the longstanding public policy of providing universal mail service at uniform rates for some letter mail as one of several key issues Congress needs to consider in assessing the desirability of changing the Statutes. However, we also point out and discuss many other issues that are also relevant to postal reform. For example, the Service’s net revenue and postage rates and, in turn, universal service could be affected by many factors—such as how Congress might change the Statutes, how the alternate delivery firms and other competitors might respond, what mail volume the Service might lose, and whether the Service can improve service quality and control operating costs. In summary, it is unclear as to exactly how removing or relaxing the Statutes might affect private mail delivery. However, we believe that our report and the Service’s comments provide Congress with much useful information for assessing the desirability of changing the Statutes, including assessing the changes proposed in the Postal Reform Act of 1996. As agreed with the Subcommittee and unless you announce its contents sooner, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Postmaster General, to other Postal oversight committees in Congress, and to other interested parties. Copies will also be made available to others upon request. The results of our review are presented in greater detail in volume II of this report. A list of major contributors is included in appendix IV of volume II. If you or your staff have any questions about this report, please contact me on (202) 512-8387 or James T. Campbell, Assistant Director, on (202) 512-5972. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
There are over 3,400 state and local pension systems in the United States, according to the most recent Census Bureau Survey of State and Local Public-Employee Retirement Systems. Most large plans are state plans, and more state and local employees are covered by state-administered plans than by locally-administered plans (about 24 million members and beneficiaries compared with about 3 million). However, there are more local government employees than state government employees (about 14 million compared with about 5 million), and while local governments sometimes participate in plans administered by states, the local governments generally retain responsibility for contributing the employer's share of funding to the plans for their employees. As a result, local governments contribute more to pension plans each state fiscal year, overall, than do state governments (see fig. 1). Pension plans are generally characterized as either defined benefit or defined contribution plans. Unlike in the private sector, defined benefit plans provide primary pension benefits for most state and local government workers. About 78 percent of state and local employees participated in defined benefit plans in 2011, compared with only 18 percent of private sector employees. In a defined benefit plan, the amount of the benefit payment is determined by a formula (in the public sector, the formula is typically based on the retiree's years of service and final average salary, and is most often provided as a lifetime annuity). However, unlike private sector employees with defined benefit plans, state and local government employees generally contribute to their defined benefit plans. A few states offer defined contribution or other types of retirement plans as the primary retirement plan. In a defined contribution plan, the key determinants of the benefit amount are the member's and employer's contribution rates, and the rate of return achieved on the investments in an individual's account over time. Alternatively, some states have adopted hybrid approaches that combine components of both defined benefit and defined contribution plans. Also unlike in the private sector, many state and local employees are not covered by Social Security. About 6.4 million, or over one-fourth, of state and local government employees are not eligible to receive Social Security benefits based on their government earnings and do not pay Social Security taxes on earnings from their government occupations. As a result, employer-provided pension benefits for non-covered employees are generally higher than for employees covered by Social Security, and employee and employer contributions are higher as well. The federal government has not imposed the same funding and reporting requirements on state and local pensions as it has on private sector pension plans. State and local government pension plans are not covered by most of the substantive requirements under the Employee Retirement Income Security Act of 1974 (ERISA)—requirements which apply to most private employer benefit plans. Nor are they insured by the Pension Benefit Guaranty Corporation as private plans are. Federal law generally does not require state and local governments to prefund or report on the funded status of pension plans. However, in order for participants to receive preferential tax treatment (that is, for employee contributions and investment earnings to be tax-deferred), state and local pensions must comply with certain requirements of the Internal Revenue Code.
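As a rough illustration of the two benefit determinations described above, the sketch below compares a formula-based defined benefit with a contribution-and-returns-based defined contribution balance. Every input (salary path, multiplier, contribution rates, investment return) is a hypothetical assumption for the example, not any plan's actual terms.

```python
# Hypothetical comparison of the two plan types described above.
# All inputs are assumptions for illustration only.

YEARS_OF_SERVICE = 30
FINAL_AVG_SALARY = 60_000   # assumed average of the final years' salaries
MULTIPLIER = 0.02           # assumed 2% of final average salary per year of service

# Defined benefit: the formula alone fixes the annuity.
db_annuity = YEARS_OF_SERVICE * MULTIPLIER * FINAL_AVG_SALARY
print(f"defined benefit: ${db_annuity:,.0f} per year for life")

# Defined contribution: the balance depends on contributions and returns.
salary, raise_rate = 30_000, 0.03          # assumed starting salary and raises
member_rate, employer_rate = 0.05, 0.05    # assumed contribution rates
annual_return = 0.06                       # assumed; realized returns vary
balance = 0.0
for _ in range(YEARS_OF_SERVICE):
    balance = balance * (1 + annual_return) + salary * (member_rate + employer_rate)
    salary *= 1 + raise_rate
print(f"defined contribution balance at retirement: ${balance:,.0f}")
```

The contrast is structural: once service and salary are known, the defined benefit amount is fixed by the formula, while the defined contribution balance moves with the assumed rate of return.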
State and local governments also follow different standards than the private sector for accounting and financial reporting. The Governmental Accounting Standards Board (GASB), an independent organization, has been recognized by governments, the accounting industry, and the capital markets as the official source of generally accepted accounting principles (GAAP) for U.S. state and local governments. GASB's standards are not federal laws or regulations and GASB does not have enforcement authority. However, compliance with its standards is enforced through laws of some individual states and the audit process, whereby auditors render opinions on the fair presentation of state and local governments' financial statements in conformity with GAAP. GASB's standards require reporting financial information on pensions, such as the annual pension cost, contributions actually made to the plan, and the ratio of assets to liabilities. In addition, actuarial standards of practice are promulgated by the Actuarial Standards Board. These standards are designed to provide practicing actuaries with a basis for assuring that their work will conform to appropriate practices and to assure the public that actuaries are professionally accountable (see app. II for information on recently proposed changes to GASB and Actuarial Standards Board standards). Some municipal bond analysts have reported concerns about state and local governments' creditworthiness in light of the recent economic downturn and continuing pension obligations. In 2008 and 2010, respectively, the Securities and Exchange Commission took enforcement actions against the city of San Diego and the state of New Jersey for misrepresenting the financial condition of their pension funds in information provided to investors. Although pension plans suffered significant investment losses from the recent economic downturn, which was the most serious since the Great Depression, most state and local government plans currently have assets sufficient to cover their benefit commitments for a decade or more. Nevertheless, most plans have experienced a growing gap between actuarial assets and liabilities over the past decade, meaning that higher contributions from government sponsors are needed to maintain funds on an actuarially based path toward sustainability. In spite of budget pressures through the recession, most plans continued to receive prerecession contribution levels on an actuarial basis, with most sponsors contributing the full actuarial level. However, there were some notable exceptions, and these plans continued to receive lower contribution payments. State and local governments experienced declining revenues and growing expenses on other fronts, and growing budget pressures will continue to challenge their ability to provide adequate contributions to help sustain their pension funds. The recent economic downturn resulted in state and local pension plans suffering significant investment losses. Positive investment returns are an important source of funds for pension plans, and have historically generated more than half of state and local pension fund increases. However, rather than adding to plans' assets, investments lost more than $672 billion during fiscal years 2008 and 2009, based upon Census Bureau figures for the sector (see fig. 2). Since 2009, improvements in investment earnings have helped plans recover some of these losses, as evidenced by more recent Census Bureau data on large plans.
More importantly, however, public pension plans have built up assets over many years through prefunding (that is, employer and member contributions) and through the accumulation of associated investment returns. Assessing the financial condition across all plans using actuarially determined figures (such as a plan's funded ratio) is challenging, in part, because of the various methods and assumptions used by these plans (see app. II). One alternative measure of financial condition across pension plans, although not optimal when assessing the financial health of a single plan, is the ratio of fund assets to annual expenditures. Fund assets represent the dollar amount a plan has built up, while annual expenditures ultimately determine how quickly assets are spent down. Alternatively, when assessing the financial condition of an individual defined benefit plan, various approaches are used, and looking at multiple factors is especially useful in providing a more complete picture of a plan's financial condition. In addition to the level of funding (level of plan assets relative to plan liabilities), assessments of a plan's financial viability by rating agencies and others may take into consideration the influence of the plan sponsor, the plan's underlying methods and assumptions, and efforts to manage risk (see table 2). As illustrated in figure 3, an analysis of historical Census Bureau data on state and local government pensions shows that the ratio of fund assets to annual expenditures fell during the stock market downturn related to the oil crisis of the early 1970s, but eventually recovered and reached its peak in 2000, driven by strong investment results throughout the 1990s. Since that peak, both the market downturn in the early 2000s and sustained economic weakness beginning in 2008 drove the ratio of sector-wide assets relative to expenditures lower. Overall, these data show that the aggregate ratio of fund assets to annual expenditures, as of 2009, is lower than at its peak but in line with historical norms dating back to 1957. At the same time, data on individual plans indicate that this measure can vary considerably across plans. As illustrated in figure 4, data on large plans for fiscal year 2009 show that their fund assets relative to annual expenditures varied widely, with ratios ranging from less than 5 to greater than 20. From the early years of prefunding of pension plans, sector-wide plan contributions outpaced plan expenditures, but by the early 1990s, expenditures began outpacing contributions. This trend was predictable. As public plans matured, they began to have greater proportions of retirees to active workers. As such, payments to retirees increased relative to plan contributions and, as a result, in more recent years, sector-wide expenditures have outpaced contributions. Nevertheless, given the asset levels of most state and local government plans and the pace of expenditures relative to contributions, most plans can be expected to cover their commitments for the near future with their existing assets. For example, even if these plans received no more contributions or investment returns, most large plans would not exhaust their assets for a decade or longer, since they hold assets at least 10 times their annual expenditures.
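Both measures discussed above reduce to simple ratios. A minimal sketch with hypothetical plan figures (not data for any actual plan):

```python
# The two financial-condition measures discussed above, computed from
# hypothetical figures; these are not data for any actual plan.

fund_assets = 12_000.0          # market value of fund assets, $ millions
annual_expenditures = 1_000.0   # benefits and expenses paid per year, $ millions

# Assets-to-expenditures ratio: a rough gauge of how many years current
# assets could cover spending with no further contributions or returns.
print(f"assets / annual expenditures: {fund_assets / annual_expenditures:.1f}")

actuarial_assets = 11_500.0     # smoothed asset value used by actuaries
actuarial_liability = 15_000.0  # actuarial accrued liability
print(f"funded ratio: {actuarial_assets / actuarial_liability:.1%}")
```

A plan with the first ratio at 12 could, in the crudest sense, pay current-level benefits for about 12 years even with no new money, which is the logic behind the observation that most large plans could cover a decade or more of commitments.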
The funded ratio is important since, on a plan-by-plan basis, it shows the plan's funding progress and is part of the basis for determining contribution levels necessary for fund sustainability. As a result of recent market declines and other reasons—such as sponsors' failure to keep pace with their actuarially required contributions and benefit increases during the early 2000s—funded ratios have trended lower. Data compiled on large plans indicate that the funded ratios for these plans, in aggregate, have fallen over the past decade from over 100 percent in fiscal year 2001 to 75.6 percent in fiscal year 2010. (See fig. 5.) Several factors have contributed to the growing gap between plans' actuarial assets and liabilities. For example, large pension funds generally assumed investment returns ranging from 6 to 9 percent throughout the 2000s, with an average assumed return of approximately 8 percent in 2009, despite the declines in the stock market during this time. Pension portfolios maintain other assets besides equities; however, gains in these other asset classes did not make up the amounts lost by negative equity performance over this period. It is important to note that the period from 2008 to 2009 was an extraordinarily low period for returns on investments in the financial history of the United States. Benefit increases were another important reason for the growing gap between assets and liabilities over the past decade. These increases were enacted early in the decade when the funded status of plans was strong. For example, 11 states increased pension benefits in 2001, according to reports from the National Conference of State Legislatures (NCSL). Among the sites included in our review, Pennsylvania enacted legislation in 2001 that increased the pension benefit multiplier from 2 to 2.5 percent—an increase of 25 percent. This higher benefit formula applied to both new and currently employed pension plan members (covering state employees and local public school employees). This was also the case in California and Colorado, where pension benefit increases in the late 1990s and early 2000s helped drive liabilities higher. Lower funded ratios generally mean higher annual contribution rates are necessary to help sustain pension plans. Thus, as funded ratios trended lower over the past decade, sponsor contribution rates trended higher. For example, from 2002 to 2009, the median government sponsor contribution rates among large plans rose as a percentage of payroll, while employee contribution levels remained the same through this same period (see table 3). In spite of budget pressures through the 2007-2009 recession, most government sponsors of large plans continued to contribute about the same percentage of their annual required contribution (ARC), the level determined to be needed to help sustain their fund assets. From 2005 until 2009, just under two-thirds of large plan sponsors continued to pay at least 90 percent of their ARC payments. However, the gap between what large plans actually received and what they would have received, in aggregate, if sponsors had made their full ARC payments is significant. For example, in 2009, sponsors of large plans contributed approximately $63.9 billion in aggregate, $10.7 billion less than if they had made their full ARC payments.
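The aggregate shortfall just cited works out as follows; this is simple arithmetic on the figures reported above, not additional data.

```python
# Arithmetic on the aggregate 2009 figures cited above.
contributed = 63.9                 # $ billions actually contributed
shortfall = 10.7                   # $ billions short of the full ARC
full_arc = contributed + shortfall
print(f"full ARC: ${full_arc:.1f} billion")
print(f"share of the ARC actually paid: {contributed / full_arc:.1%}")
# full ARC: $74.6 billion; share paid: 85.7%
```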
In addition, the distribution of plan sponsor contribution levels in 2010, illustrated in figure 6, shows that about half the sponsors of large plans contributed 100 percent or more of their ARC payments, while others contributed much less. Going forward, among the eight selected states and eight selected local jurisdictions we reviewed, several officials told us that they expected significant increases in their employer contribution rates as a percentage of payroll. For instance, officials from the Employees' Retirement System of Georgia expect their contribution rates to nearly double over the next 5 years (from 10.5 to 20 percent of payroll) to help maintain a sustainable path for their defined benefit plans. Officials from the Utah Retirement Systems expect rates to increase from approximately 13 to 20 percent of payroll. Fiscal pressures on state and local governments' budgets add to the challenges faced by plan sponsors and their ability to make adequate contributions to their pension plans. The economic downturn and slow recovery led to budget shortfalls in the state and local sectors because of declining tax revenues and increased spending on economic safety net programs such as health care and social services. According to survey data from the National Association of State Budget Officers (NASBO), from fiscal years 2009 through 2011, states reported solving nearly $230 billion in gaps between projected spending and revenue levels. Local governments have also struggled with their budgets. For example, the National League of Cities reported that if all city budgets were totaled together, they would likely face a combined estimated shortfall of anywhere from $56 billion to $83 billion from 2010 to 2012. As a result, higher pension contributions have been needed at the same time state and local governments have faced added pressures to balance their budgets. Even in normal economic times, state and local governments seek consistency in program spending areas, meaning that large year-to-year increases in pension contribution levels can strain budgets. Since some of these governments are subject to balanced budget requirements, annual pension contributions, which averaged around 4 percent of state and local budgets in fiscal year 2008, must compete with other pressing needs, even though pension costs are obligations that governments must eventually pay. Although tax revenues are slowly recovering to pre-2008 levels, going forward, long-term budget issues will likely continue to stress state and local governments and their ability to fund their pension programs. GAO has reported that state and local governments face fiscal challenges that will grow over time, and with current policies in place, the sector's fiscal health is projected to decline steadily through 2060. A key factor driving this decline is the projected growth in health-related costs. For example, GAO simulations show that the sector's health-related costs will be about 3.7 percent of gross domestic product in 2010, but grow to 8.3 percent by 2060 (GAO, State and Local Governments' Fiscal Outlook: April 2011 Update, GAO-11-495SP (Washington, D.C.: Apr. 6, 2011)). These growing costs, along with rising pension contribution rates, have spurred many states and localities to take action to reduce pension costs and improve their plans' sustainability long term (see fig. 7).
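The projected rise in the health-cost share cited above implies a steady compound growth rate in that share, computed below from the two endpoint figures; the 50-year horizon is taken from the 2010 and 2060 dates in the simulation.

```python
# Implied compound annual growth of the health-cost share of GDP,
# using only the 2010 and 2060 figures cited above.
share_2010, share_2060, years = 3.7, 8.3, 2060 - 2010
cagr = (share_2060 / share_2010) ** (1 / years) - 1
print(f"implied growth of the share: {cagr:.2%} per year")  # about 1.63%
```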
Based on our tabulation of state legislative changes reported annually by NCSL, we found that the majority of states have modified their existing defined benefit systems to reduce member benefits, lowering future liabilities. Half of states have increased required member (that is, employee) contributions, shifting costs to employees. Only a few states have adopted primary plans with defined contribution components, which reduce plan sponsors' investment risk by shifting it to employees. Some states and localities have also taken action to lower pension contributions in the short term by changing actuarial methods, and a few have issued pension bonds to finance their contributions or to lower their costs by reducing the gap between plan assets and liabilities. In general, we found that states and localities often package several of these different pension changes together. These packaged changes can have varying effects on employer contributions, plan sustainability, and employees' retirement security. Benefit reductions have generally applied to new or current employees rather than to current retirees. In the case of Colorado, however, the state recently reduced postretirement COLAs for future, current, and retired members. According to plan documents, most plan members, who are not covered by Social Security, had previously been guaranteed an annual postretirement COLA of 3.5 percent, but the recent legislation eliminated the COLA for 2010 and capped future COLAs at 2 percent. As discussed later, Illinois took the more unusual step of taking advance credit for benefit reductions that apply only to new employees. Benefit reductions for new employees reduce pension liabilities and consequently lower actuarially required sponsor contributions. From the employee perspective, these changes can mean that those in the new tier or plan will realize lower future benefits than their coworkers who continue to participate in the old plan. This could affect employee recruitment and retention over the long term, but some pension officials we spoke with expected any short-term impacts to be minimal. Among the pension plans included in our review, we found that six states and two localities had reduced the benefits in some of their largest defined benefit plans. For example, in 2011, Denver, Colorado, reduced retirement benefits for new members of the Denver Employees Retirement Plan hired after July 1, 2011. Denver reversed previous benefit enhancements enacted over prior decades by increasing the period used for calculating final average salary (the basis for benefit calculations; expanding this period generally reduces benefits by averaging in lower, earlier salaries) and raising the minimum retirement age from 55 to 60, among other changes. Over the next 30 years, these changes are expected to reduce the city's pension contributions by 1.65 percent of payroll. According to plan documents, the changes enacted are expected to reduce pension benefits for new employees and will require some members to work longer to receive full pension benefits. Nevertheless, city officials do not expect any of the recent changes to significantly affect employee recruitment and retention. Twenty-five states have taken action since 2008 to increase member contributions, shifting pension costs to employees, according to NCSL reports. States generally have more leeway to adjust member contribution rates as compared with pension benefits for existing members. As a result, more states have increased contributions for some active employees rather than limiting the increases to future employees.
Some states are also requiring members to contribute to their pensions for the first time. Among the states we reviewed, Virginia and Missouri recently required some new plan members to contribute to the retirement plan (5 percent in Virginia and 4 percent in Missouri), whereas members did not previously contribute. Increases in member contributions reduce the actuarially required amounts plan sponsors need to contribute to their pension systems. As a result, these changes often do not affect the amount of revenue flowing into pension systems, but rather represent a shifting of pension cost from employers to plan members. Member contributions are a relatively stable source of pension revenue, since they are less susceptible to market conditions than investment returns, and less susceptible to budgetary and political pressures than employer contributions. However, member contributions are susceptible to declines in the size of the workforce and are often refunded to employees if they separate from their employer before becoming eligible to receive benefits. Among the jurisdictions included in our review, we found that four states and one locality had increased the member contributions in some of their largest defined benefit plans. For example, in the case of Norfolk, Virginia, the city began requiring new members to contribute 5 percent to the Employees' Retirement System in 2010, whereas current members do not contribute. As a result of this change, the city's employer contributions will decline as more contributing members join the system. City officials said that new employees had already contributed over $140,000 to the system in the first year. This increase in member contributions will reduce employee compensation and could affect recruitment and retention, particularly since the change will be immediately reflected in lower paychecks. However, city officials did not expect the changes to have a significant impact on employee recruitment and retention, since the Virginia Retirement System had recently implemented similar changes for state employees. Although a majority of states have continued to use traditional defined benefit plans as their primary pension system, our analysis of NCSL annual reports on recent pension legislation found that, since 2008, three states—Georgia, Michigan, and Utah—have implemented hybrid approaches as primary plans for large groups of employees, shifting some investment risk to new employees. Two of the eight localities we reviewed have also switched to hybrid approaches since 2008: Cobb County, Georgia, and Bountiful, Utah (which participates in Utah's state-administered retirement system). Unlike in a defined benefit plan, which provides benefits based on a set formula, in the defined contribution component of a hybrid approach the key determinants of the benefit amount are the employee's and employer's contribution rates, and the rate of return achieved on the amounts contributed to an individual's account over time. Defined contribution and hybrid approaches reduce the impact of market volatility on plan funding and employer contributions, but are riskier for plan members. Whereas under a defined benefit system, employer contributions generally rise and fall depending in part on investment returns, plan sponsors of a defined contribution system contribute a set amount regardless of investment returns. This reduces the risk facing the pension system as well as the state or locality sponsoring the plan.
However, switching to a defined contribution plan can involve additional short-term costs for plan sponsors, since contributions from new employees go toward their own private accounts rather than paying off existing unfunded liabilities of the defined benefit plan once it is closed to new employees. From the member's perspective, building up retirement savings in defined contribution plans rests on factors that are, to some degree, outside of the control of the individual worker. Most notable among these is the market return on plan assets, which, among other factors, determines future retirement benefits. On the one hand, this exposure to market risk increases members' financial uncertainty, since retirement benefits rise and fall based on investment returns. On the other hand, defined contribution plans are often viewed as more portable than defined benefit plans, as employees own their accounts individually and can generally take their balances with them—including both member and employer contributions—when they leave government employment, as long as they are vested. In contrast, employees in defined benefit plans can generally take their member contributions, if any, with them if they leave government employment, but not the employer's contributions. (A 401(k) plan is a type of defined contribution plan that permits employees to defer a portion of their pay to a qualified tax-deferred plan. State and local government defined contribution plans are typically 457(b) plans; the Tax Reform Act of 1986 prohibited state and local governments from establishing any new 401(k) plans after May 6, 1986, but existing plans were allowed to continue. Pub. L. No. 99-514, § 1116(b)(3), 100 Stat. 2085, 2455.) In one hybrid plan we reviewed, most members in the 401(k) component of the hybrid approach were contributing only the default 1 percent, according to plan officials. At this level, employees may struggle to build adequate retirement savings. Plan officials said they have tried to encourage members to contribute more to their 401(k) plans, but these efforts have not been successful. To address rising actuarially required pension contribution levels and budget pressures, some states and localities have taken actions to limit employer contributions in the short term or refinance their contributions. These strategies included changing actuarial methods or issuing pension bonds to supplement other sources of financing for pension plans. Such strategies help plan sponsors manage their contributions in the near term, but may increase their future costs. Nationwide data on the use of these strategies are limited; however, we were able to document their use across several of our selected pension plans. Some state and local governments have limited or deferred their pension contributions in the short term by making actuarial changes. It is difficult to determine the recent prevalence of these changes nationwide; however, five of the eight states and one of the localities we reviewed had implemented actuarial changes to reduce their pension contributions since 2008. The changes included expanding amortization periods (the number of years allotted to pay off unfunded liabilities) and adjusting smoothing techniques (methods for reducing the effect of market volatility on pension contributions by averaging asset values over multiple years). For example, Utah reported that it increased the amortization period for the state's retirement system from 20 years to 25 years to extend the length of time for paying down unfunded pension liabilities.
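Where Utah's change works through the amortization schedule, the other lever noted above, asset smoothing, can be sketched as follows. The market values are hypothetical, and real actuarial methods differ in detail (for example, phased recognition of gains and losses rather than a plain average).

```python
# Minimal sketch of 5-year asset smoothing: the actuarial asset value
# averages recent market values, so a one-year crash feeds into the
# contribution calculation gradually. Figures are hypothetical.

market_values = [100, 104, 108, 112, 70]   # $ billions; sharp loss in year 5

smoothed = sum(market_values[-5:]) / 5
print(f"market value now: ${market_values[-1]} billion")
print(f"5-year smoothed actuarial value: ${smoothed:.1f} billion")
# market value: $70 billion; smoothed value: $98.8 billion
```

Because the contribution calculation runs off the $98.8 billion smoothed value rather than the $70 billion market value, the sponsor's required payment rises far less in the loss year, with the remainder of the loss recognized in later valuations.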
Alternatively, Illinois reported that it recently required all Illinois state retirement systems to switch from a market valuation with no smoothing to a 5-year smoothing method for calculating actuarial assets and employer contributions, to lessen the immediate impact of fiscal year 2009 investment losses on contributions. Some state and local governments, while not formally changing their underlying actuarial methods, have simply deferred or capped their pension contributions. Two states and one locality we reviewed limited contributions in the short term by capping increases in employer contributions or by simply postponing otherwise scheduled contributions. Capping increases in contributions allowed these states and this locality to temporarily suppress the increases that would otherwise have been required given 2008 investment losses and other factors. In the case of Pennsylvania, the state addressed an expected 19 percent increase in actuarially required contributions to the State Employees' Retirement System by capping annual increases at 3 percent for 2012, 3.5 percent for 2013, and 4.5 percent thereafter. Similarly, the Illinois Municipal Retirement Fund allowed local plan sponsors to cap contribution increases at 10 percent starting in 2010. Although adjusting plan funding produced some short-term savings for state and local budgets, it also increased the unfunded liabilities of the pension system and will necessitate larger contributions in the future. In the case of Philadelphia, the city used its authority under state law to partially defer pension payments by $150 million in fiscal year 2010 and $90 million in 2011. While these deferrals helped the city reduce its contributions in the short term, state law requires that the money be repaid with interest by fiscal year 2014. The city has adopted a temporary 1 percent increase in the sales tax to help cover these future costs. Issuing pension obligation bonds (POB) is another funding strategy, although relatively few states and localities have used it, as it can expose plan sponsors to additional market risk. POBs are taxable general obligation bonds that provide a one-time cash infusion into the pension system. They convert a current pension obligation into a long-term, fixed obligation of the government issuing the bond. POBs are generally issued for one of two purposes: either to provide temporary budget relief by financing a plan sponsor's actuarially required contribution for a single year, or as part of a longer-term strategy for paying off a plan's unfunded liability. Using POBs to pay off all or a portion of a plan's unfunded liability potentially reduces future actuarially required pension contributions, but requires plan sponsors to make annual debt service payments on the POBs instead. We analyzed data on state and local government bond issuances nationwide and found that other than the states of Illinois and Connecticut, and the Chicago Transit Authority, most state and local governments have not issued sizable POBs over the past 6 years (see fig. 8). This type of pension funding has been limited, with only 25 or fewer POB issuances in each of the last 6 years. The total amount of POBs issued in a single year has not exceeded 1 percent of total assets in state and local pension plans. These transactions involve significant risks for government entities because investment returns on the bond proceeds can be volatile and lower than the interest rate on the bonds.
In these cases, POBs can leave plan sponsors worse off than they were before, juggling debt service payments on the POBs in addition to their annual pension contributions. In a recent brief, the Center for State and Local Government Excellence reported that by mid-2009, most POBs issued since 1992 were a net drain on government revenues. In light of these concerns, officials in Pennsylvania noted that the state had enacted legislation in 2010 prohibiting the use of POBs. Two of the pension systems included in our review—Illinois and Sonoma County, California—have issued POBs since 2008. Illinois, which is discussed at length below, has been the largest single issuer in recent years, issuing over $7 billion in POBs since 2010. In the case of Sonoma County, California, the county issued $289 million of POBs in 2010 with maturities ranging up to 19 years. County officials explained that the POBs were financially advantageous because they had an average interest rate of just under 6 percent, which is lower than the 8 percent expected return on the pension fund investments at the time the bonds were issued. The difference between the POB interest rates and the assumed rate of return is projected to save the county $93 million in contributions over the life of the bonds. However, the POBs could increase the county's future expenses significantly if actual investment returns fall below 6 percent. Over the prior 10-year period ending in 2010, the retirement system's average investment rate of return was 4.1 percent, but returns over the prior 20-year period have been significantly higher, at 8.4 percent. (The county pension system subsequently lowered its assumed rate of return to 7.75 percent; this action, along with any future actuarial changes, would affect the expected savings from the POBs.) States and localities often packaged multiple pension changes together. For example, our analysis of the NCSL reports revealed that 23 states have both increased employee contributions and reduced member benefits. Each change made, and the interplay among the changes, contributes to various impacts on plan sponsors, pension sustainability, and plan members. The following examples demonstrate some of the ways states have packaged these changes, and the varying impacts that are expected as a result. Missouri is an example of a state that packaged increases in member contributions with reductions in benefits to narrow the gap between plan assets and liabilities. For new general members of the Missouri State Employees Retirement System and the Missouri Department of Transportation and Highway Patrol Employees' Retirement System, the state increased the normal retirement age from 62 to 67, expanded the vesting period from 5 to 10 years, and required members to contribute 4 percent of pay to the pension system, although current members do not contribute. These changes are expected to lower the state's contributions to the system over the long run by more than 5 percent of payroll, but the initial savings are much smaller. In fiscal year 2012, the benefit and contribution changes are expected to reduce the state's contribution to its largest plan by less than 1 percent of payroll, since there will be only a small number of newly hired members in the system. However, by fiscal year 2018, employees covered under the reduced benefit structure are expected to account for over half of payroll, further reducing the state's annual contributions.
Missouri plan officials said these changes could pose issues for recruitment and retention, although the influence of retirement plan details will vary based on individual circumstances. They also noted that the changes could affect employee morale, since new employees will have to work longer to qualify for benefits and the required pension contributions will reduce their compensation.

In the case of Pennsylvania, the state passed a package of pension changes in 2010 that offset a short-term funding cap with long-term benefit reductions to limit the impact on the plan’s funded status. For the State Employees’ Retirement System, the most significant funding change was a statutory cap on employer contribution rate increases. The legislation addressed an expected 19 percent increase in actuarially required contributions by capping any increases at 3 percent for fiscal year 2011/2012, 3.5 percent for fiscal year 2012/2013, and 4.5 percent thereafter. In the short term, the caps effectively reduced the state’s expected contributions over the next 4 years by $2.5 billion. But in the long term, the caps, along with other actuarial changes, are expected to increase the state’s pension contributions to the system by $7 billion over the next 32 years.

To help offset the additional long-term costs, Pennsylvania enacted pension legislation calling for various benefit reductions for future employees. For example, the state reduced the benefit multiplier for future employees from 2.5 to 2 percent (with an option for members to maintain the 2.5 multiplier by paying a higher member contribution rate); increased the normal retirement age from 60 to 65; and expanded the vesting period from 5 to 10 years. These benefit reductions will reduce future liabilities and are expected to lower the state’s pension costs by almost $8.5 billion over the next 32 years, for an estimated net savings of $1.5 billion over the cost of the caps and other funding adjustments. Both pension and budget officials said these changes will help the state better manage rising pension contributions in the short term, but the overall savings from the legislative package are relatively modest over the long term. Meanwhile, the changes will require new employees to work longer for lower benefits and will leave more employees with no benefit at all. Plan officials said it is too early to tell if this will affect employee recruitment and retention.

In the case of Illinois, the state combined use of POBs, actuarial changes, and benefit reductions to manage the state’s pension costs. The state issued $3.5 billion of POBs in 2010 and $3.7 billion in 2011 with maturities up to 8 years and used the proceeds to fund the state’s annual contributions to various pension systems. An Illinois budget official explained that issuing the POBs helped the state avoid making additional spending cuts to other portions of the state’s budget. Alternatively, given the state’s budgetary challenges, some pension officials said that if the state had not issued the POBs, it is more likely that it would not have paid its full required pension contributions. Use of POBs will be costly to Illinois, since the state will face annual debt service payments of about $1 billion over the next 9 years. However, the state increased individual and corporate taxes in 2010 and state budget officials told us the state plans to use the additional revenue to fund these debt service payments as well as other budgetary priorities.
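The scale of those debt service payments is consistent with a standard level-payment calculation, sketched below. The 5 percent interest rate is an assumption chosen only for illustration; Illinois’s actual POBs carry a mix of rates and maturities, so this is a magnitude check rather than a reconstruction of the state’s debt schedule.

```python
# Level annual debt service on a loan: payment = P * r / (1 - (1 + r)**-n).
# Hypothetical check that roughly $7.2 billion of POBs (the 2010 and 2011
# issuances combined) repaid over 9 years implies annual payments near
# $1 billion; the 5 percent rate is assumed.

def level_payment(principal, rate, years):
    return principal * rate / (1 - (1 + rate) ** -years)

payment = level_payment(principal=7.2e9, rate=0.05, years=9)
print(f"annual debt service: ${payment / 1e9:.2f} billion")  # about $1.01 billion
```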
Whether the state’s statutorily required contributions are funded through POBs or general revenue does not directly affect the financial condition of the pension system. However, some pension officials were concerned that the debt service payments on the POBs would reduce available funding for future pension contributions.

Illinois has also lowered employer contributions to the state’s pension systems in the short term by adjusting actuarial methods. In 2009, the state required its pension systems to switch from a market value (no smoothing) to a 5-year smoothing method for calculating actuarial assets and employer contributions. Plan officials explained that the change was intended to reduce the state’s contributions and dampen the impact of fiscal year 2009 market losses for the short term. As a result of the change, the state’s actuarially calculated contribution to the State Employees’ Retirement System of Illinois was reduced by $100 million in the first year, according to plan officials. However, plan actuaries noted that this strategy only defers contributions when plan assets experience a loss, as they did in fiscal year 2009. Future contributions will be higher than they would have been previously once the fiscal year 2009 market losses are fully recognized.

In addition to the use of POBs and actuarial changes, Illinois also reduced benefits for new employees and applied the future savings to reduce employer contributions in the short term. For example, the state raised new employees’ normal retirement age to 67, capped final average salaries used for pension purposes, and reduced annual COLAs. According to plan officials, these changes are expected to reduce the State Employees’ Retirement System’s future liabilities by a third. State budget officials said the projected total estimated savings for the state over the next 35 years will be about $220 billion. Since the changes apply only to new employees, the savings will slowly accrue over the next 35 years. Nevertheless, the state took advance credit for these future benefit reductions, further reducing contributions in the short term. According to plan actuaries, by taking this advance credit, the state also increased unfunded liabilities in the short term, adversely affecting its retirement systems.

State and local governments continue to experience the lingering effects of investment losses and budget pressures in the wake of the recent economic downturn. Although most large state and local government pension plans still maintain substantial assets, sufficient to cover their pension obligations for a decade or more, heightened concerns over the long-term sustainability of the plans have spurred many states and localities to implement a variety of reforms, including reductions in benefits and increases in member contributions. Despite these efforts, continued vigilance is needed to help ensure that states and localities can continue to meet their pension obligations. Several factors will ultimately affect the sustainability of state and local pension plans over the long term. Important among them are whether government sponsors maintain adequate contributions toward these plans, and whether investment returns meet sponsors’ long-term assumptions. Going forward, growing budget pressures will continue to challenge state and local governments’ abilities to provide adequate contributions to help sustain their pension plans and ensure a secure retirement for current and future employees.
We provided officials from the Internal Revenue Service and the Social Security Administration with a draft of this report. They provided technical comments that we incorporated, as appropriate. In addition, we provided officials from the states and cities we reviewed with portions of the draft report that addressed aspects of the pension funds in their jurisdictions. We incorporated their technical comments, as appropriate, as well.

We are sending copies of this report to relevant congressional committees, the Commissioners of the Internal Revenue Service and the Social Security Administration, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara D. Bovbjerg at (202) 512-7215 or Stanley J. Czerwinski at (202) 512-6806. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Over 27 million employees and beneficiaries are covered by state and local government pension plans. However, the recent economic downturn and associated budget challenges confronting state and local governments pose some questions as to the sustainability of these plans, and what changes, if any, state and local governments are making to strengthen the financial condition of their pension plans. GAO was asked to examine (1) recent trends in the financial condition of state and local government pension plans and (2) strategies state and local governments are using to manage pension costs and the impacts of these strategies on plans, sponsors, employees, and retirees. To address these topics, GAO analyzed various measures of sector-wide financial condition based on national-level data on pension funding from the U.S. Census Bureau and others, and reviewed information on recent state legislative changes affecting government pensions from annual reports prepared by the National Conference of State Legislatures (NCSL). GAO did not assess the soundness of individual plans, but did obtain documents and conduct interviews with pension and budget officials in eight states and eight localities, selected to illustrate the range of strategies being implemented to meet current and future pension funding requirements. The Internal Revenue Service and Social Security Administration provided technical comments, which were incorporated, as appropriate.

Despite the recent economic downturn, most large state and local government pension plans have assets sufficient to cover benefit payments to retirees for a decade or more. However, pension plans still face challenges over the long term due to the gap between assets and liabilities. In the past, some plan sponsors have not made adequate plan contributions or have granted unfunded benefit increases, and many suffered from investment losses during the economic downturn. The resulting gap between asset values and projected liabilities has led to steady increases in the actuarially required contribution levels needed to help sustain pension plans at the same time state and local governments face other fiscal pressures. Since 2008, the combination of fiscal pressures and increasing contribution requirements has spurred many states and localities to take action to strengthen the financial condition of their plans for the long term, often packaging multiple changes together.
GAO’s tabulation of recent state legislative changes reported by NCSL and review of reforms in selected sites revealed the following:

Reducing benefits: 35 states have reduced pension benefits, mostly for future employees due to legal provisions protecting benefits for current employees and retirees. A few states, like Colorado, have reduced postretirement benefit increases for all members and beneficiaries of their pension plans.

Increasing member contributions: Half of the states have increased member contributions, thereby shifting a larger share of pension costs to employees.

Switching to a hybrid approach: Georgia, Michigan, and Utah recently implemented hybrid approaches, which incorporate a defined contribution plan component, shifting some investment risk to employees.

At the same time, some states and localities have also adjusted their funding practices to help manage pension contribution requirements in the short term by changing actuarial methods, deferring contributions, or issuing bonds, actions that may increase future pension costs. Going forward, growing budget pressures will continue to challenge state and local governments’ abilities to provide adequate contributions to help sustain their pension plans.
Before 1996, Medicare program integrity activities were subsumed under Medicare’s general administrative budget and performed, along with general claims processing functions, by insurance companies under contract with CMS, an arrangement that created certain funding problems. The level of funding available for program integrity activities was constrained, not only by the need to fund ongoing Medicare program operations, such as the costs of processing medical claims, but also by budget procedures imposed under the Budget Enforcement Act of 1990. In the early and mid-1990s, we reported that such funding constraints had reduced Medicare contractors’ ability to conduct audits and review medical claims. HHS advocated for a dedicated and stable amount of program integrity funding outside of the annual appropriations process, so that CMS and its contractors could plan and manage the function on a multiyear basis. HHS also asserted that past fluctuations in funding had made it difficult for contractors to retain experienced staff who understood the complexities of, and could protect, the financial integrity of Medicare program spending.

Beginning in fiscal year 1997, HIPAA established MIP and provided CMS with dedicated funding to conduct program integrity activities. HIPAA stipulated a range of funds available for these activities from the Medicare trust funds each year. For example, for fiscal year 1997, the law stipulated that at least $430 million and not more than $440 million should be used. The maximum amount of MIP funds rose from $440 million in fiscal year 1997 to $720 million in fiscal year 2003. For fiscal year 2003, and every year thereafter, the maximum amount that HIPAA stipulated for MIP was $720 million. (See app. II, table 2, for additional information on the MIP funding ranges.) As a result of the increases stipulated in HIPAA, from fiscal years 1997 through 2005, total MIP expenditures increased about 63 percent—from about $438 million to $714 million, as figure 1 shows.

HIPAA authorized MIP funds to be used to enter into contracts to “promote the integrity of the Medicare program.” The statute also listed the various program integrity activities to be conducted by contractors. CMS allocates MIP funds primarily to support its contractors’ program integrity efforts for the traditional Medicare program, known as fee-for-service Medicare. Among these contractors are fiscal intermediaries (intermediaries), carriers, PSCs, and Medicare administrative contractors (MAC). MACs are a new type of contractor that will replace all intermediaries and carriers by October 2011, as required by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA). MMA required CMS to conduct full and open competition to select MACs. CMS refers to this change as contracting reform.

CMS has contracted with intermediaries, carriers, and MACs to perform two types of activities—claims processing and program integrity. Their claims processing activities include receiving and paying claims. These activities are classified as program management and are funded through a program management budget. In addition, intermediaries and carriers have been charged with conducting some program integrity activities under MIP, including performing medical review of claims. The four MACs selected in January 2006 will not conduct medical review activities. CMS plans to assign responsibility for medical review of claims to the MAC selected in July 2006 and to the other MAC contracts to be awarded in the future.
MIP provides funds to support these program safeguard efforts. In addition, CMS uses MIP funds to support the activities of PSCs, which perform medical review of claims and identify and investigate potential fraud cases; a coordination of benefits (COB) contractor, which determines whether Medicare or other insurance has primary responsibility for paying a beneficiary’s health care costs; the National Supplier Clearinghouse (NSC), which screens and enrolls suppliers in the Medicare program; and the data analysis and coding (DAC) contractor, which maintains and analyzes Medicare claims data for durable medical equipment (DME), prosthetics, orthotics, and supplies.

Contractors receive MIP funds to perform one or more of the following five program integrity activities:

Audits involve the review of cost reports from institutions, such as hospitals, nursing homes, and home health agencies. Cost reports play a role in determining the amount of providers’ Medicare reimbursement.

Medical review includes both automated and manual prepayment and postpayment reviews of Medicare claims and is intended to identify claims for noncovered or medically unnecessary services.

The secondary payer activity seeks to identify primary sources of payment—such as employer-sponsored health insurance, automobile liability insurance, and workers’ compensation insurance—that should be paying claims mistakenly billed to Medicare. Secondary payer activities also include recouping Medicare payments made for claims not first identified as the responsibility of other insurers.

Benefit integrity involves efforts to identify, investigate, and refer potential cases of fraud or abuse to law enforcement agencies that prosecute fraud cases.

Provider education communicates information related to Medicare coverage policies, billing practices, and issues related to fraud and abuse both to providers identified as having submitted claims that were improper, and to the general provider population.

CMS also uses MIP to fund support for the five activities, such as certain information technology systems, fees for consultants, storage of CMS records, and postage and printing. The agency allocates the cost of this support to the five activities, depending on which of the activities is receiving support. Table 1 provides information on specific MIP activities performed by the contractors. Appendix III provides examples of key tasks performed by each of these contractors.

For fiscal years 1997 through 2005, CMS generally increased the amount of funding for each of its five program integrity activities, but the amount of the funding provided and the percentage increase have varied among the activities. Provider education received the largest percentage increase in funds, while audit and medical review received the largest amount of funds overall. (See fig. 2.) CMS increased its allocation for provider education by about 590 percent from fiscal year 1997 through fiscal year 2005. This increase was due, in part, to CMS’s decision in fiscal year 2002 to use MIP funds for outreach activities to groups of like providers, which had not previously been funded through MIP. CMS will be able to further increase expenditures for program integrity in fiscal year 2006. In addition to the maximum of $720 million originally appropriated under HIPAA for fiscal year 2006, DRA increased the maximum by an additional $112 million, for a total of $832 million.
CMS plans to use some of the $112 million to address potential fraud, waste, and abuse in the new Medicare prescription drug benefit.

In each year from fiscal year 1997 through fiscal year 2005, CMS generally increased the amount of MIP funds spent for each of its five program integrity activities, as figure 2 shows. In addition to the increase in the amount of funding for provider education, the expenditures for audit increased 45 percent during the same period. As figure 3 shows, expenditures for medical review increased from fiscal year 1997 to fiscal year 2001 to almost $215 million—about 81 percent—and, since fiscal year 2001, decreased to about $166 million, or about 23 percent. Overall, expenditures for medical review increased 40 percent from fiscal year 1997 to fiscal year 2005. During this period, expenditures for secondary payer increased 49 percent, and for benefit integrity, expenditures increased 89 percent. (See fig. 3 for the amount of expenditures by activity in fiscal years 1997, 2001, and 2005 and app. II, table 3, for more detailed information on the amount of expenditures for each activity in each year.)

Increased spending for provider education stemmed, in part, from provider concerns about an increased burden on them in the medical review process. In 2001, we reported that as CMS increasingly focused on ensuring program integrity, providers were concerned about what they considered to be inappropriate targeting of their claims for review. Further, providers asserted that they may have billed incorrectly because of their confusion about Medicare’s program rules. To address these concerns, CMS developed a more data-driven approach for conducting medical review and also increased its emphasis on provider education. CMS officials explained that medical review would help identify providers that were billing inappropriately, and provider education would focus on individuals’ specific billing errors to eliminate or prevent recurrence of the problems.

In addition, beginning in fiscal year 2002, spending for the provider education activity increased significantly because CMS began to use MIP funds for what the agency called provider outreach. Provider outreach focuses on communicating with groups of providers about Medicare policies, initiatives, and significant programmatic changes that could affect their billing. This information is conveyed through seminars, workshops, articles, and Web site publications. Previously, provider outreach had been funded outside of MIP, as part of CMS’s program management budget. Provider education spending increased from $17 million in fiscal year 2001—before provider outreach was added to the provider education activity—to $53.5 million in fiscal year 2002. In fiscal year 2005, funding for the provider education activity reached $70 million.

In comparing the share of funds spent on each program integrity activity, from fiscal year 1997 through fiscal year 2005, we found that CMS generally spent the largest share on audit, averaging about 31 percent, and on medical review, averaging about 27 percent. CMS spent less on secondary payer, averaging 21 percent, and benefit integrity, averaging 15 percent. In contrast, during this period, CMS spent the smallest percentage on provider education, which averaged about 6 percent of MIP expenditures. See figure 4 for information on the percentage of funds allocated to each activity. (For more detail, see table 4 in app. II.)
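As a quick consistency check on the medical review figures above, the reported percentage changes compose multiplicatively, as the short sketch below shows. The implied fiscal year 1997 spending level is back-calculated from the reported changes, not taken from the report.

```python
# The reported changes in medical review spending compose multiplicatively:
# an 81 percent rise (fiscal years 1997-2001) followed by a 23 percent fall
# (2001-2005) yields roughly the 40 percent overall increase cited above.

fy2001 = 215e6           # reported peak, "almost $215 million"
fy1997 = fy2001 / 1.81   # implied by the 81 percent increase (inferred)
fy2005 = 166e6           # reported fiscal year 2005 level

print(f"implied FY1997 spending: ${fy1997 / 1e6:.0f} million")   # ~$119 million
print(f"overall FY1997-FY2005 change: {fy2005 / fy1997 - 1:.0%}")  # ~+40%
```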
CMS officials told us that they had allocated MIP funds to the five activities based predominantly on historical funding, but sometimes considered high-level priorities. However, this approach does not take into account data or information on the effectiveness of one activity over another in ensuring the integrity of Medicare or allow CMS to determine if activities are yielding benefits that are commensurate with the amounts spent. For example, while CMS has noted that benefit integrity and provider education activities have intangible value, the agency has not routinely collected information to evaluate their comparative effectiveness. Furthermore, CMS has not fully assessed whether MIP funds are appropriately allocated within the audit, medical review, benefit integrity, and provider education activities. For example, audit’s role has changed as Medicare’s payment methods have changed in the last decade, but it continues to have the largest share of MIP funding.

According to agency officials, CMS allocates funds for the five activities based primarily on an analysis of previous years’ spending and may also consider other information when developing the MIP budget, such as current expenditures by individual contractors. CMS officials told us that they may also consider the agency’s high-level priorities. For example, in fiscal year 2004, CMS began to increase funds to expand the scope of its annual study to estimate Medicare improper payment rates, and in fiscal year 2002, it increased its MIP allocation for provider education.

CMS does not have a means to compare quantitative data or qualitative information on the relative effectiveness of MIP activities that it could use in allocating funds. Instead, it calculates the quantitative benefits for two, and assesses the qualitative benefits—which are not objectively measured—for the other three. In fiscal year 2005, for its medical review and secondary payer activities, CMS tracked dollars saved in relation to dollars spent—a quantitative measure that the agency calls a return on investment (ROI). Having an ROI figure is useful because it measures the effectiveness of an individual activity so that its value can be compared with that of another activity. As of fiscal year 2005, secondary payer had an ROI of $37 for every dollar spent on the activity, and medical review had an ROI of $21 for every dollar spent.

CMS tracked the ROI for audit, but by fiscal year 2002, audit’s reported contribution to ROI fell to almost zero. (See fig. 5 and app. II, table 5, for additional ROI details.) CMS officials told us that the decrease in the ROI for audit was due to the implementation of prospective payment systems (PPS), under which Medicare pays institutional providers fixed, predetermined amounts that vary according to patients’ need for care. Until fiscal year 2001, audits had achieved an ROI that was generally $9 or more for every dollar spent conducting them, by disallowing payment for individual costs that should not have been paid by Medicare under the previous payment method. Under PPS, CMS’s methods for paying providers changed. However, the information system that had been used to track ROI began to incorrectly calculate the savings from audit because it had not been adjusted for the new payment method. According to agency officials, CMS is implementing a different way to track audit savings and an overall ROI for the activity.
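A minimal sketch of the ROI measure defined above follows. The $5 billion in secondary payer savings for fiscal year 2005 is reported later in this report; the spending figure of roughly $135 million is inferred from the reported $37-per-dollar ratio and is shown only for illustration.

```python
# Return on investment (ROI) as CMS uses the term: dollars saved in relation
# to dollars spent on an activity. Savings of just over $5 billion and an ROI
# of $37 per dollar (both reported for secondary payer in fiscal year 2005)
# jointly imply spending of roughly $135 million; that spending figure is
# inferred here, not reported.

def roi(savings, spending):
    return savings / spending

savings = 5.0e9   # reported secondary payer savings, fiscal year 2005
spending = 135e6  # inferred, for illustration only
print(f"ROI: ${roi(savings, spending):.0f} saved per dollar spent")  # ~$37
```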
The revised approach will focus on the savings from disallowing items that directly affect an individual provider’s payment under a PPS, such as bad debts and the number of low-income patients hospitals serve. It will track the amounts related to these add-on payments actually paid by Medicare to, or recouped from, the provider after an audit. The difference between the amount paid prior to the audit and the amount paid after the audit (assuming there has been an adjustment) would be the savings.

However, not all audit functions result in measurable savings. For example, in its written comments on a draft of this report, CMS noted that many audit functions funded by MIP do not have an ROI. CMS stated that these include processing cost reports for data collection purposes, correcting omissions on providers’ cost reports, implementing court decisions, and issuing notifications concerning Medicare payments. In addition, CMS stated that some of these activities are mandated by law, while others have significant value to the Medicare Payment Advisory Commission (MedPAC), which is an independent federal commission; providers; provider associations; and actuaries.

From fiscal year 1997 through fiscal year 2005, CMS developed qualitative assessments of the impact of benefit integrity and provider education. According to CMS, the agency develops such assessments when the savings generated by MIP activities are impossible or difficult to identify. Nevertheless, CMS officials told us that these activities provide value to the program in helping to ensure proper Medicare payments. For example, CMS officials said that benefit integrity contributes to the work of federal law enforcement agencies, which investigate and prosecute Medicare fraud and abuse. CMS officials also noted that they consider benefit integrity to have a sentinel effect in discouraging entities that may be considering defrauding the Medicare program, but this effect is impossible to measure.

CMS indicated that trying to measure the results of the contractors’ benefit integrity activities could create incentives that undermine the value of their work. For example, counting the number of cases referred to law enforcement for further investigation could lead the contractors to refer more cases that were less fully developed. However, other agencies that investigate or prosecute fraud, such as HHS and the Department of Justice, keep track of their successful cases, recoveries, and fines to demonstrate their results. Similarly, CMS could assess the degree to which each of its contractors had contributed to HHS’s and the Department of Justice’s successful investigations and prosecutions.

In regard to educating providers on appropriate billing practices, CMS may be missing opportunities to evaluate its contractors’ performance. Provider education can help reduce billing errors, according to CMS. However, according to an OIG report, CMS has not evaluated the strategies used to modify the behavior of providers through education to determine if these strategies are achieving desired results. CMS has noted the intangible value inherent in benefit integrity and provider education activities, but the agency has not routinely collected information to evaluate their comparative effectiveness in ensuring program integrity. Further, as discussed earlier, correct information on audit’s effectiveness, based on an ROI, has not been available for the last several years.
Consequently, CMS is not able to determine if some of the funds spent for benefit integrity, provider education, and audit—about $396 million, or 56 percent of MIP funds in fiscal year 2005—could be better directed to secondary payer or medical review. Nevertheless, CMS officials told us that they plan to decrease the allocation to medical review and increase the allocation to provider education.

CMS officials stated that they are developing two initiatives that will give the agency objective measures of the results of the audit and provider education activities. As discussed earlier, CMS is implementing a revised methodology for calculating the ROI for audit. In addition, it is trying to develop information on the effectiveness of provider education. A CMS official explained that the agency is adding a provider education component to its program integrity management reporting system. This component will potentially allow CMS to develop an ROI figure for provider education by correlating educational efforts to a decrease in claim denials and provide a measure of the quantitative benefits of this activity. This component is scheduled to begin operating in the summer of 2006.

After CMS has allocated funds to each of the five MIP activities, it must decide how to further distribute those funds to pay contractors that carry out each one. For example, in fiscal year 2004, after CMS allocated about $135 million for medical review to be conducted by intermediaries and carriers, it then distributed those funds to pay the 28 intermediaries and 24 carriers that were conducting medical review at that time. However, CMS has not always taken steps to ensure that funds are allocated in an optimal way within its activities, given vulnerabilities to improper payment, contractor workload, and the relative effectiveness of the activities performed. Nevertheless, CMS has used information on relative savings to decide on funding allocations within the secondary payer activity.

Medical review, provider education, and benefit integrity are activities for which the allocation of MIP funds may not be optimal, because our analysis suggests that CMS has not allocated funds within these activities based on information concerning contractor vulnerabilities. Such vulnerabilities include the potential for fraudulent billing in different locations and the amount of potential benefit payments at risk in the contractor’s jurisdiction. For example, CMS estimated that the contractor that handled claims for DME, orthotics, prosthetics, and supplies in a jurisdiction that included Texas and Florida—two states experiencing high levels of fraudulent Medicare billing—improperly paid 11.5 percent of its 2004 claims—or $474.9 million—which was a higher improper payment rate than that of other contractors paying these types of claims. As we previously reported, our analysis indicated this contractor received almost a third less in medical review funds per $100 in submitted claims in fiscal year 2003 than the amount given to contractors in other regions with less risk of fraudulent billing. Our most recent analysis indicated that the imbalance in fund allocation did not change in fiscal years 2004 and 2005. We could not determine the rationale for this allocation beyond what was historically budgeted for this contractor.

The amount of medical review funds allocated to individual contractors is not directly tied to the amount of benefits that they pay, which is a key measure of potential risk.
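One way to compare allocations against this risk measure is funds received per $100 of benefits paid. The sketch below back-calculates the dollar allocations implied by the reported rates for the two contractors discussed next; those dollar amounts are inferred for illustration and are not figures reported by CMS.

```python
# Medical review funding intensity: cents of MIP funds per $100 of benefits
# paid. The benefits and cents-per-$100 figures come from the report; the
# implied dollar allocations are back-calculated for illustration only.

def cents_per_100(allocation, benefits_paid):
    # dollars of MIP funds per $100 of benefits, expressed in cents
    return allocation / benefits_paid * 100 * 100

for name, benefits, cents in (
    ("smaller contractor", 66e6, 28),
    ("larger contractor", 5e9, 7),
):
    implied = cents / 100 / 100 * benefits  # back out the dollar allocation
    print(f"{name}: about ${implied / 1e6:.2f} million implied "
          f"({cents_per_100(implied, benefits):.0f} cents per $100)")
```

The comparison makes the imbalance plain: the contractor paying out roughly 75 times more in benefits received only about a quarter of the funding intensity of the smaller contractor.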
For example, in fiscal year 2004, one contractor paid out $66 million in benefits and received about 28 cents in medical review funds for each $100 in benefits paid. In contrast, another contractor paid out considerably more in benefits—about $5 billion in fiscal year 2004—and received about 7 cents in medical review funds for each $100 in benefits paid. Further, CMS has not adjusted the amount of funding for individual contractors to educate providers based on their relative risks. A CMS official told us that the amount of provider education funding is generally aligned with the amount allocated for medical review, regardless of the value of the benefits that the contractor pays.

Similarly, the amount of MIP funds provided to PSCs is not directly tied to the amount of benefits paid in jurisdictions for which they have responsibility for benefit integrity. For example, CMS spent about $75 million for work performed by PSCs under 13 benefit integrity task orders. The PSCs averaged about 3 cents for each $100 in paid claims in the jurisdictions for which they conducted benefit integrity tasks. However, the amount of MIP funding paid to the PSCs to conduct benefit integrity activities varied from about 1 cent to about 7 cents for each $100 in claims paid. Further, our analysis showed no clear relationship between funds provided to PSCs and their responsibilities for conducting benefit integrity activities in jurisdictions with high incidences of fraudulent Medicare billing. For example, one PSC received about 4 cents for each $100 in paid claims for benefit integrity work in a jurisdiction that included Florida, which is at high risk for fraudulent billing. In contrast, PSCs received the same level of funding to conduct benefit integrity work in states at lower risk for fraudulent billing, including Iowa, Montana, Pennsylvania, and Wyoming.

During the last decade, Medicare has significantly changed how it pays institutional providers—such as hospitals and nursing homes—that it audits. To align with the payment method changes, CMS has modified its audit focus to items in the cost report that can affect payments under a PPS. However, these audits can affect a much smaller proportion of Medicare’s payments under a PPS than audits of costs under the previous payment method. Given the magnitude of the payment method change, CMS has not evaluated whether funds within the audit activity should be further reallocated to potentially generate greater savings to the Medicare program by addressing the accuracy of reported costs that may be used to determine payment increases.

CMS distributes funds to its contractors to conduct certain tasks, such as inputting data from, reviewing, and, if needed, auditing cost reports submitted by its institutional providers in order to settle, or agree upon, the reported costs. CMS’s audit contractors are also required to conduct wage index reviews and assist with intermediary hearings and appeals of settled cost reports. For several years, CMS has had a backlog of cost reports to settle, and the agency has made a priority of reducing the backlog. Other priorities include more closely scrutinizing those providers that are still paid based on their costs—such as critical access hospitals—and conducting required audits. For providers paid under a PPS, CMS has shifted its audit focus to the few items that could affect a provider’s payments if disallowed.
These include bad debt, payments for graduate medical training, and the number of low-income patients that hospitals serve. CMS has also shifted more audit resources to hospitals because more items on their cost reports can affect calculations of a provider’s add-on payments.

CMS does not know the amount of MIP funds that are associated with audits of different types of providers or specific issues, such as bad debt. However, in fiscal year 2004, CMS began to separately track some audit costs, such as those for desk reviews, audits, and wage index reviews. This provided some information on how audit funds were being spent. According to CMS officials, tracking the costs of individual audits at a provider or issue level would be difficult and costly because multiple issues are audited at the same time and the complexity of individual audits varies for the same provider type. Nevertheless, more detailed information on audit costs—such as at the provider level—than CMS currently tracks could provide it with a better understanding of the value of its current mix of tasks, particularly if it could associate the costs with the savings from the audits. This could provide CMS with information on whether it needs to change the balance of funding for those tasks—for example, whether it should focus more attention on bad debt or other areas of the cost report for specific types of providers.

Further, CMS’s audit function continues to focus on verifying specific aspects of the provider’s cost report that affect its individual payment. This type of audit generally addresses a small portion of providers’ Medicare payments, while under a PPS, a much greater portion of the payments are based on overall industry costs. Each year, MedPAC advises the Congress on whether the Medicare PPS rates for institutional providers should increase, decrease, or remain constant. However, MedPAC generally does not have a set of audited cost reports that validate the information it uses in its assessments of providers, such as hospitals’ allocations of their costs. According to MedPAC, the current audit process reveals little about the accuracy of the Medicare cost information. For example, while CMS audits individual providers through full or partial audits, it does not allocate funds to audit a panel of providers, such as hospitals, which could provide a means to highlight areas where cost reporting accuracy is problematic. Without accurate information, CMS cannot ensure that payments to hospitals properly reflect their costs and provide reliable information that can be a factor in determining whether rates should change or remain constant.

CMS might find it cost-effective to gather additional information because audits have the potential to give the Congress better information on hospitals’ costs. For example, by law, CMS is required to periodically conduct audits of end-stage renal disease (ESRD) facilities, which care for patients who must rely on dialysis treatments to compensate for kidney failure. CMS broadened its audit plan for these facilities to include not only a review of bad debts, but also validation of the costs of a selected number of items that are paid through PPS. CMS officials indicated that their audits of these facilities generated only limited savings, usually related to bad debts, so they did not consider these audits very valuable.
However, as a result of these audits, MedPAC officials stated in 2005 that these facilities had a greater margin—or ratio of Medicare payments to costs—than their cost reports suggested. This information was factored into MedPAC’s recommendation about the amount of payment increase needed in calendar year 2007. Setting appropriate payment increases for hospitals is potentially more important to Medicare than for ESRD facilities because payments to participating inpatient hospitals represented about $116 billion, or about 40 percent of Medicare’s benefit payments, in fiscal year 2004. CMS officials agreed that gathering this information might be valuable, but indicated that they did not currently have sufficient funding to conduct this data validation in addition to their current efforts funded as part of audit.

In contrast to provider education and audit, CMS collects information on the relative savings from specific secondary payer functions and has used this information to decide on funding allocations within the secondary payer activity. CMS allocates funds to, and calculates savings for, about 16 secondary payer functions. Among these functions are (1) a data match that helps identify instances when a Medicare beneficiary was covered by other insurance and (2) the initial enrollment questionnaire, which gathers insurance information on beneficiaries before they become eligible for Medicare. Within secondary payer, for fiscal year 2005, savings for the 16 functions ranged from less than 1 percent to 49 percent of the more than $5 billion in total savings for all of the functions.

CMS officials told us that they have used relative savings information for secondary payer functions as one factor in determining whether to increase, decrease, or terminate funding for the functions within this activity. For example, according to CMS officials, in fiscal year 2005, savings for one secondary payer function—voluntary reporting of primary payer information to CMS by health insurance companies—increased by about 65 percent over fiscal year 2004. Further, savings from this effort continue to increase, and CMS is planning to maintain or expand funding for it. However, CMS officials said that after confirming their relatively low savings, they had terminated certain other efforts to identify secondary payer claims. The terminated efforts included (1) a second questionnaire sent as follow-up to determine whether a beneficiary who is claiming Medicare benefits for the first time has other health insurance that would be responsible for paying the claim and (2) an effort to determine whether certain trauma codes contained in a claim could indicate that another insurer, such as workers’ compensation, could be the primary payer.

The Medicare program is undergoing significant changes for which there is little precedent. These include the addition of the new Part D prescription drug benefit and the reform of Medicare contracting. Both will require CMS to make new choices in how it should allocate its MIP funds to best address its program integrity challenges. CMS’s current allocation approach—which agency officials characterized as primarily relying on previous fiscal year funding allocations for each activity, and to each contractor, to determine current allocations—will not be adequate to address emerging program integrity risks and ongoing programmatic changes. In addition, as contracting reform proceeds, CMS intends to increase its use of MIP funds to reward contractors to encourage superior performance.
However, the usefulness of award payments as a tool to encourage contractors to perform MIP tasks effectively depends on how well CMS can develop, and consistently apply, performance measures to gauge differences in the quality of performance.

CMS’s current allocation approach will not be adequate to address Medicare’s emerging program integrity risks related to the prescription drug benefit. Over the next 10 years, total expenditures for the prescription drug benefit, which was implemented in January 2006, are projected to be about $978 billion, while total expenditures for the Medicare program are projected to be about $6.1 trillion. CMS and others have stated that the prescription drug benefit is at risk for significant fraud and abuse. In December 2005, an assistant U.S. attorney noted that the Medicare prescription drug benefit would be vulnerable to a host of fraud and abuse schemes unless better detection systems are developed. According to CMS, the prescription drug benefit may be vulnerable to fraud and abuse in particular areas, including beneficiary eligibility, fraud by pharmacies, and kickbacks designed to encourage certain drugs to be included by the plans administering the benefit. To respond to these challenges, CMS has selected eight private organizations, called Medicare prescription drug integrity contractors (MEDIC), to support CMS’s benefit integrity and audit efforts.

Because the Medicare prescription drug benefit is in the early stages of implementation, CMS does not yet have data to estimate the level of improper payments or information to determine the level of program integrity funds needed to address emerging vulnerabilities. As a result, it is not clear whether, in the future, CMS will need to shift funds from program integrity activities for Parts A and B to protect the Part D drug benefit from potential fraud and abuse. For fiscal year 2006, $112 million beyond the HIPAA limit of $720 million has been appropriated for CMS to support program integrity activities. The President’s Budget for fiscal year 2007 has also proposed additional funds for fiscal year 2007 and fiscal year 2008. CMS plans to use some of the additional funding provided under DRA for fiscal year 2006 to support Part D program integrity efforts. For example, CMS plans to spend $14 million over the next fiscal year to fund efforts by MEDICs to protect the prescription drug benefit by performing selected tasks, such as analyzing data to identify instances of potential fraud and abuse. In addition, CMS plans to spend about $33 million on Part D information technology systems to track data related to beneficiary eligibility and to collect, maintain, and process information on Medicare-covered and noncovered drugs for Medicare beneficiaries participating in Part D. See appendix IV for more information.

Another significant programmatic change that will affect future MIP funding allocations is Medicare contracting reform. MMA required CMS to transfer all claims administration work, which includes selected program integrity activities, to MACs by October 2011. CMS plans to transfer all work to the MACs by July 2009—about 2 years ahead of MMA’s specified time frame. Contracting reform will affect MIP funding allocations because of (1) changes in contractors’ responsibilities for program integrity activities and their jurisdictions, (2) the potential for operational efficiencies, and (3) increasing use of MIP funds for contractor award payments.
The transition to MACs will change some contractors’ program integrity responsibilities and require reallocation of MIP funds among them. The new MACs will be responsible for paying claims that were previously processed by intermediaries and carriers, but CMS has decided that MACs will not be performing all of the MIP activities that the intermediaries and carriers previously conducted. For example, PSCs performed medical reviews of claims in some contractors’ jurisdictions, but this activity will be performed by almost all of the MACs in the future. Further, contractors’ jurisdictions will change as 23 MACs assume the work previously performed by a total of 51 Medicare intermediaries and carriers, within the confines of 15 newly designated geographic jurisdictions. The PSCs conducting benefit integrity work will be aligned with the MACs in the 15 jurisdictions. In some cases, one PSC may be aligned with more than one MAC jurisdiction.

According to CMS officials, Medicare contracting reform will lead to operational efficiencies and savings that would mostly be due to more effective medical review. For example, CMS anticipates that greater incentives for MACs to operate efficiently and adopt industry innovations in the automated medical review of claims will result in total estimated trust fund savings of $650 million for Medicare from fiscal year 2006 to fiscal year 2011. Having program integrity activities operate more effectively could give CMS additional flexibility to reallocate some funding while achieving reductions in improperly paid claims. However, we have not validated CMS’s estimate, and in our August 2005 report on CMS’s plan for implementing Medicare contracting reform, we raised concerns about the uncertainty of savings estimates, which were based on future developments that are difficult to predict.

As part of contracting reform, CMS plans to increase its allocation of MIP funds that are used as award payments to encourage superior performance of program integrity activities by contractors. Award payments that are tied to appropriate performance measures could encourage contractors to conduct MIP activities effectively and introduce innovations, such as developing new analytical approaches to enhance the medical review process. Intermediaries and carriers, both of which conduct some program integrity activities, are currently paid on the basis of their costs, generally without financial incentives to encourage superior performance. In contrast, CMS currently offers award payments to other types of contractors that conduct program integrity activities, including the four MACs that were selected in January 2006, PSCs, the COB contractor, NSC, and the DAC contractor. As early as 2009, or whenever all administrative work has been transferred to MACs, CMS will offer all contractors that conduct program integrity activities the opportunity to be selected for award payments.

The usefulness of MIP funding for award payments to encourage contractors to conduct program integrity tasks effectively depends on how well CMS can develop, and consistently apply, performance measures to gauge differences in the quality of performance. In 2004, CMS conducted a study to evaluate whether the agency could reduce improper payments by using award payments for contractors to lower their paid claims error rates, which represent the amount of claims contractors paid in error compared with their total fee-for-service payments.
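The paid claims error rate just described is a simple ratio; a minimal sketch follows, with entirely hypothetical claim amounts.

```python
# Paid claims error rate: the amount of claims a contractor paid in error
# as a share of its total fee-for-service payments. All dollar figures
# below are hypothetical.

def paid_claims_error_rate(paid_in_error, total_ffs_payments):
    return paid_in_error / total_ffs_payments

rate = paid_claims_error_rate(paid_in_error=45e6, total_ffs_payments=1.2e9)
print(f"paid claims error rate: {rate:.1%}")  # 3.8% (hypothetical)
```

Tying award payments to reductions in a measure like this one depends, as the report notes, on being able to compute it consistently for each contractor.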
According to CMS, the outcome of that 2004 pilot was positive, and CMS plans to use award payments in the future as part of its strategy for reducing improper payments. However, as we reported in March 2006, CMS will need to refine its measure of contractor-specific improper payments, which would enhance its ability to evaluate contractors’ performance of medical review and provider education activities. Further, even when CMS has developed measures to assess the performance of contractors that conduct MIP activities, it has not always applied them effectively or consistently. For example, the OIG recently reviewed the extent and type of information provided in evaluation reports on PSCs’ performance in detecting and deterring fraud and abuse. The OIG found that although the evaluation reports were used as a basis to assess contractors’ overall performance, they did not consistently include quantitative information on the activities contractors performed or their effectiveness.

We designated the Medicare program as high risk for fraud, waste, abuse, and mismanagement in 1990, and the program remains so today. To address this ongoing risk and reduce the program’s billions of dollars in improper payments, CMS must use Medicare’s program integrity funding as effectively as possible. Further, Medicare’s susceptibility to fraud is growing as it addresses the challenges of adding a prescription drug benefit to the program. Despite Medicare’s increasing vulnerability, CMS has generally not changed its allocation approach for MIP funding. In 2006, a decade after MIP was established to support Medicare program integrity activities, CMS officials state that the primary basis for their allocation of funds is how they have been allocated in the past. However, programmatic changes for Medicare’s contractors and emerging risks for the Part D prescription drug benefit suggest that CMS needs to modify its approach for deciding on funding allocations for—and within—the five program integrity activities.

Also supporting the need for CMS to assess its current allocation approach is that the agency’s funding decisions do not routinely take into account quantitative data or qualitative information on the relative effectiveness of its five program integrity activities or contractors’ vulnerabilities. Without considering such information or data, CMS cannot judge whether funds are being spent as effectively as possible or if they should be reallocated. CMS is developing two new measures that may help the agency evaluate the relative effectiveness of the provider education and audit activities. Better information about MIP activities’ effectiveness should assist CMS in making more prudent management and funding allocation decisions.

To better ensure that MIP funds are appropriately allocated among and within the five program integrity activities, we recommend that CMS develop a method of allocating funds based on the effectiveness of its program integrity activities, the contractors’ workloads, and risk.

In its written comments on a draft of this report, CMS stated that it generally agreed with our recommendation to develop a method of allocating MIP funds based on the effectiveness of the agency’s program integrity activities, Medicare contractors’ workloads, and risk. However, the agency expressed concern that the report appeared to emphasize the use of ROI, a quantitative measure that tracks dollars saved in relation to dollars spent, as a way to allocate funds.
CMS stated that this quantitative measure can be an indicator of effectiveness, but noted that such a measure cannot serve as the sole basis for informing funding decisions. The agency stated that some of its MIP activities had benefits that could not be easily quantified. CMS agreed on the value of allocating funds based on risk and provided information on programmatic changes that would help it do so. The agency also noted the efforts it had recently made to strengthen program integrity.

CMS expressed concern about our discussion in the draft report concerning the use of ROI as a way to quantitatively measure effectiveness and to allocate MIP funds. CMS stated that the agency cannot provide funding based exclusively on an ROI because some activities, including benefit integrity, do not lend themselves to an ROI measurement and others, such as audit, are governed by statutory requirements. CMS also stated that in allocating MIP funds, it is critical that it consider factors other than ROI, including historical funding, because MIP funding has not increased since 2003. Our report indicates that an ROI is an important factor that should be considered in allocating funds, but cannot be the sole consideration. Our conclusions reflect our support of an approach that takes into account the qualitative benefits of program integrity activities. Our report discusses agency officials’ views on the difficulty of developing quantitative measures for the benefit integrity activity. We also provide information on CMS officials’ qualitative assessments of the positive impact of benefit integrity and provider education. For example, our report notes that according to CMS officials, these benefits include discouraging entities that may be considering defrauding the Medicare program and helping to ensure proper Medicare payments. Both quantitative and qualitative assessments of effectiveness—to the extent they can be developed—could help CMS determine whether MIP funds are being wisely invested or if they should be reallocated.

CMS also commented on the allocation of MIP funds to Medicare contractors based on workload and risk. CMS noted that contracting reform and the introduction of MACs will result in contractors’ workloads being more evenly distributed. In addition, CMS noted that it is developing award fee measures for contractors’ medical review activities, including establishing performance goals for the contractor-specific error rates produced by the Comprehensive Error Rate Testing program. CMS agreed with us that risk is a factor that should be considered in allocating funds.

CMS stated that it is committed to identifying and investigating better approaches to allocate resources to support critical agency functions, including using its new contracting authority to introduce incentives for Medicare fee-for-service claims processing contracts and consolidating Medicare secondary payer activities. CMS also noted that it is using state-of-the-art systems and expertise to aggressively fight waste and abuse in the program, continues to work closely with its contractors to help ensure that providers receive appropriate education and guidance in areas where billing problems have been identified, and has expanded oversight of the new Medicare Part D prescription drug benefit.
In addition, CMS discussed recent program integrity efforts and successes, including reducing the number of improper fee-for-service Medicare payments and addressing fraud across all provider types by coordinating the activities of CMS, law enforcement, and Medicare contractors in Los Angeles, California, and Miami, Florida. We have reprinted CMS’s letter in appendix V. CMS also provided us with technical comments, which we incorporated in the report where appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Secretary of HHS, the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are Sheila K. Avruch, Assistant Director; Hazel Bailey; Krister Friday; Sandra D. Gove; and Craig Winslow.

To provide information on the amount of funds allocated to the five Medicare Integrity Program (MIP) activities over time, we interviewed officials from the Centers for Medicare & Medicaid Services (CMS). We obtained information concerning MIP funding allocations for audit, medical review, secondary payer, benefit integrity, and provider education for fiscal years 1997 through 2005. We also analyzed allocations within these activities. Further, we obtained and analyzed related financial information, including CMS’s planned and actual expenditures, savings, and return on investment (ROI) calculations for fiscal year 1997 through fiscal year 2005; CMS financial reports; and presidential and Department of Health and Human Services (HHS) budget proposals for fiscal years 2006 and 2007. Because most MIP expenditures are for activities related to the Medicare fee-for-service plan, our analyses focused on those expenditures. We reviewed relevant legislation, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA); the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA); and the Deficit Reduction Act of 2005 (DRA). We reviewed pertinent reports and congressional testimony, including our own and those of CMS and the HHS Office of Inspector General (OIG), related to program integrity requirements.

To examine the approach that CMS uses to allocate MIP funds, we interviewed CMS officials regarding factors they consider when allocating MIP funds. We reviewed related documentation provided to us by CMS, including budget development guidelines; manuals, such as the Financial Management Manual; operating plans; and selected workload data. We also reviewed information on individual projects, such as information technology systems. We also reviewed pertinent GAO reports and testimony and Medicare Payment Advisory Commission reports. We did not independently examine the internal and automated data processing controls for CMS systems from which we obtained data used in our analyses.
CMS subjects its data to limited reviews and periodic examinations and relies on the data obtained from these systems as evidence of Medicare expenditures and to support CMS’s management and budgetary decisions. Therefore, we considered these data to be reliable for the purposes of our review. In addition, we interviewed CMS officials regarding changes in the Medicare program that may affect MIP funding allocations, including CMS’s plans to support activities to detect fraud and improper billing for the new Part D prescription drug benefit and MIP activities to be performed by contractors in the future. We also interviewed CMS officials concerning performance measures and evaluations of contractors. We reviewed related documentation, including the statement of work for the Medicare prescription drug integrity contractors; plans for Medicare contracting reform; policies and procedures associated with CMS’s measurement of contractor performance; standards and performance measures, such as the Comprehensive Error Rate Testing program; various manuals, including the Medicare Program Integrity Manual; and an OIG report on performance evaluations of program safeguard contractors (PSC). We also reviewed CMS’s evaluations of contractor performance. We performed our work from August 2005 through August 2006 in accordance with generally accepted government auditing standards.

The following tables contain details on MIP funding, expenditures, allocations, and ROI. Table 2 shows MIP funding ranges under HIPAA. Table 3 shows the amounts of MIP expenditures allocated to each of the program integrity activities. Table 4 shows the percentage of MIP funds allocated to the program integrity activities. Table 5 shows the ROI for three of the program integrity activities.

Hospitals, nursing homes, home health agencies, and other institutional providers that are—or have been—paid on a cost reimbursement basis submit cost reports to CMS. Cost reports provide a detailed accounting of what costs have been incurred, what costs the provider is charging to the Medicare program, and how such costs are accounted for by the provider. Contractors review all or part of the cost report to assess whether costs have been properly allocated and charged to the Medicare program. Contractors determine if the cost report is acceptable or if it needs further review. In some instances, contractors may conduct on-site cost report audits, which include the review of financial records and related documentation supporting costs and charges.

Contractors identify billing errors made by providers through analysis of claims data; take action to prevent errors, address identified errors, or both; and publish local coverage policies to provide guidance to the public and medical community concerning items and services that are eligible for Medicare payment. Most medical reviews do not require a manual review of medical records. Often contractors conduct medical reviews simply by examining the claim itself, usually using automated methods.

Secondary payer activities are carried out by the coordination of benefits (COB) contractor, intermediaries and carriers, and Medicare administrative contractors (MACs). The COB contractor collects, manages, and maintains information regarding health insurance coverage for Medicare beneficiaries. To gather information to properly adjudicate submitted claims, the COB contractor sends questionnaires to newly enrolled Medicare beneficiaries and employers to solicit information about beneficiaries’ health insurance coverage.
The COB contractor also collects secondary payer data from providers, insurers, attorneys, and some state agencies. The COB contractor uses data match programs to identify claims that should have been paid by another insurer. When information indicates that a beneficiary has other health insurance, the COB contractor initiates a secondary payer claims investigation. Intermediaries and carriers also conduct secondary payer operations, including prepayment activities in conjunction with the COB contractor, and they recover erroneous secondary payer payments.

Contractors are tasked with preventing, detecting, and deterring Medicare fraud. PSCs conduct medical reviews to support fraud investigations, analyze data to support medical reviews, process fraud complaints, develop fraud cases, conduct provider education related to fraud activities, and support law enforcement entities. Once a case is developed, PSCs refer it to the OIG or to law enforcement for prosecution. The National Supplier Clearinghouse (NSC) reviews and processes applications from organizations and individuals seeking to become suppliers of medical equipment and supplies in the Medicare program. NSC verifies suppliers’ application information; conducts on-site visits to the prospective suppliers; issues supplier authorization numbers, which allow suppliers to bill Medicare; and maintains a central data repository of information concerning suppliers. NSC also periodically reenrolls active suppliers and uses data to assist with fraud and abuse research. The DAC contractor conducts ongoing data analysis and reporting of trends related to supplier billing for medical equipment and supplies and provides ongoing feedback to the PSCs.

When billing problems are identified through medical reviews, contractors take a variety of steps to educate providers about Medicare coverage policies, billing practices, and issues related to fraud and abuse. Contractors may conduct group training sessions, including seminars and workshops; send informational letters to providers; arrange for teleconferences; conduct site visits; and provide information on their Web sites.

For fiscal year 2006, DRA provided $112 million in MIP funds beyond the annual HIPAA limit of $720 million. Of this amount, DRA specified that $12 million was for the Medi-Medi program and $100 million was for MIP in general. Table 6 provides information on CMS’s planned spending of $100 million in general MIP funds provided by DRA, including spending related to the Part D prescription drug benefit.

Since 1990, GAO has considered Medicare at high risk for fraud, waste, abuse, and mismanagement. The Medicare Integrity Program (MIP) provides funds to the Centers for Medicare & Medicaid Services (CMS)—the agency that administers Medicare—to safeguard over $300 billion in program payments made on behalf of its beneficiaries. CMS conducts five program integrity activities: audits; medical reviews of claims; determinations of whether Medicare or other insurance sources have primary responsibility for payment, called secondary payer; benefit integrity to address potential fraud cases; and provider education. In this report, GAO determined (1) the amount of MIP funds that CMS has allocated to the five program integrity activities over time, (2) the approach that CMS uses to allocate MIP funds, and (3) how major changes in the Medicare program may affect MIP funding allocations.
For fiscal years 1997 through 2005, CMS’s MIP expenditures generally increased for each of the five program integrity activities, but the amount of the increase differed by activity. Since fiscal year 1997, provider education has had the largest percentage increase in funding—about 590 percent—while audit and medical review had the largest amounts of funding allocated. In fiscal year 2006, funding for MIP will increase further to $832 million, which includes $112 million in funds that CMS plans to use, in part, to address potential fraud and abuse in the new Medicare prescription drug benefit. CMS officials told us that they have allocated MIP funds to the five program integrity activities based primarily on past allocation levels. Although CMS has quantitative measures of effectiveness for two of its activities—the savings that medical review and secondary payer generate compared to their costs—it does not have a means to determine the effectiveness of each of the five activities relative to the others to aid it in allocating funds. Further, CMS has generally not assessed whether MIP funds are distributed to the contractors conducting each program integrity activity to provide the greatest benefit to Medicare. Because of significant programmatic changes, such as the implementation of the Medicare prescription drug benefit and competitive selection of contractors responsible for claims administration and program integrity activities, the agency’s current approach will not be adequate for making future allocation decisions. For example, CMS will need to allocate funds for program integrity activities to address emerging vulnerabilities that could affect the Medicare prescription drug benefit. Further, through contracting reform, CMS will task new contractors with performing a different mix of program integrity activities. However, the agency’s funding approach is not geared to target MIP resources to the activities with the greatest impact on the program and to ensure that the contractors have funding commensurate with their relative workloads and risk of making improper payments.
Under DOD’s supply chain materiel management policy, the secondary item inventory is to be sized to minimize DOD’s investment while providing the inventory needed to support both peacetime and wartime requirements. Management and oversight of Army inventory is a responsibility shared between the Offices of the Secretary of Defense and the Secretary of the Army. The Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for the uniform implementation of DOD inventory management policies throughout the department, while the Secretary of the Army is responsible for implementing DOD inventory policies and procedures. Army inventory management is primarily the responsibility of the Army Materiel Command, and inventory management functions are performed at subordinate commands, namely the Tank-automotive and Armaments Command (TACOM), the Aviation and Missile Command (AMCOM), and the Communications-Electronics Command (CECOM). The Army prescribes guidance and procedural instructions for computing requirements for its secondary inventory. Army managers are responsible for developing inventory management plans for their assigned items, to include coordinating all purchase and repair decisions.

DOD annual stratification reports show that for the 4 years covered in our review, the value of the Army’s secondary inventory increased both in total dollars and as a percentage of DOD’s overall secondary inventory (see table 1). While the total reported value of DOD’s secondary inventory decreased by almost $2 billion from fiscal year 2004 to fiscal year 2007, the reported value of the Army’s inventory increased by more than $5 billion. Based on our analysis of AMCOM and TACOM inventories from fiscal year 2004 through fiscal year 2007, the Army’s on-hand inventory increased by about $4 billion, while the Army’s on-order inventory decreased by $1 billion (see table 2). The number of unique items managed by AMCOM and TACOM also increased over that time period, from 59,443 unique items in fiscal year 2004 to 63,504 items in fiscal year 2007.

The Army uses a process called requirements determination to calculate the amount of inventory that is needed to be held in storage (on hand) and the amount that should be purchased (on order). This information is used to develop the Army’s budget stratification report showing the amount of inventory allocated to meet specific requirements, including operating and acquisition lead time requirements. Operating requirements include the war reserves authorized for purchase; customer-requisitioned materiel that has not yet been shipped (also known as due-outs); a safety level of reserve to be kept on hand in case of minor interruptions in the resupply process or unpredictable fluctuations in demand; minimum quantities of essential items for which demand cannot normally be predicted (also referred to as numeric stockage objective or insurance items); and an inventory reserve sufficient to satisfy demand while broken items are being repaired (also referred to as repair cycle stock). Acquisition lead time requirements include administrative lead time requirements, which refer to inventory reserves sufficient to satisfy demand from the time that the need for replenishment of an item is identified to the time when a contract is awarded for its purchase or an order is placed; and production lead time requirements, which refer to inventory reserves sufficient to satisfy demand from the time when a contract is let or an order is placed for inventory to the time when the item is received.
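To make the relationships among these requirement categories concrete, the sketch below sums the named components into a single requirements objective for one item. This is a simplified illustration in Python with hypothetical field names; the Army’s actual computation is governed by its procedural guidance and involves considerably more detail.

```python
from dataclasses import dataclass

@dataclass
class ItemRequirements:
    """Requirement categories for one secondary item, in units.
    Field names are illustrative, not actual Army data elements."""
    war_reserve: int             # war reserves authorized for purchase
    due_outs: int                # customer-requisitioned materiel not yet shipped
    safety_level: int            # buffer for resupply interruptions and demand swings
    insurance_stock: int         # numeric stockage objective / insurance items
    repair_cycle_stock: int      # demand cover while broken items are repaired
    admin_lead_time_demand: int  # demand from need identified to contract award
    prod_lead_time_demand: int   # demand from contract award to receipt

    def operating_requirement(self) -> int:
        return (self.war_reserve + self.due_outs + self.safety_level
                + self.insurance_stock + self.repair_cycle_stock)

    def acquisition_lead_time_requirement(self) -> int:
        return self.admin_lead_time_demand + self.prod_lead_time_demand

    def requirements_objective(self) -> int:
        """The total that inventory is allocated against in stratification."""
        return (self.operating_requirement()
                + self.acquisition_lead_time_requirement())
```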
When the combined total of on-hand and on-order inventory for an item drops to a threshold level—called the reorder point—the item manager may place an order for additional inventory of that item, to avoid the risk of the item going out of stock in the Army’s inventory. The reorder point includes both operating requirements and acquisition lead time requirements. An economic order quantity—the amount of inventory that will result in the lowest total costs for ordering and holding inventory—is automatically calculated by a computer program and is added to the order. The reorder point factors in both the demand for inventory items during the reordering period, so that Army managers can replace items before they go out of stock, and a safety level, to ensure a supply of stock during interruptions in production or repair. A purchase request can be terminated or modified if requirements change. These requirements collectively constitute the requirements objective, which we refer to as the Army’s current requirements in this report. An assessment of the Army’s requirements or requirements determination process falls outside the scope of our review.

In accounting for its inventory, the Army uses the stratification process to allocate, or apply, inventory to each requirement category. On-hand inventory in serviceable condition is applied first, followed by on-hand inventory in unserviceable condition. On-order inventory is applied when on-hand inventory is unavailable to be applied to requirements. We refer to situations in which on-hand and on-order inventory are insufficient to satisfy current requirements as inventory deficits.

Our analysis of Army secondary inventory data for the 4-year period we examined showed that about $3.6 billion (22 percent) of the average annual total inventory value of $16.3 billion was not needed to meet current requirements. During this time period, the value of on-hand inventory exceeding current requirements increased, whereas the value of on-order inventory that exceeded requirements decreased. During this same time period, the value of Army inventory deficits decreased but remained substantial—an average value of $3.5 billion over the 4-year period.

Our analysis of Army secondary inventory data showed that, on average, about $12.7 billion (78 percent) of the total annual inventory value was needed to meet current requirements, whereas $3.6 billion (22 percent) exceeded current requirements. Measured by number of parts, these percentages were similar: 81 percent of the parts applied to current requirements on average each year, and the remaining 19 percent exceeded current requirements. The value of the inventory that exceeded current requirements increased over the period of our review, from $2.9 billion in fiscal year 2004 to $4.4 billion in fiscal year 2007, as did the number of parts that exceeded current requirements, from 5.2 million parts to 10.2 million parts (see table 3). The Army’s total inventory levels increased from fiscal year 2004 to fiscal year 2007, with the greatest increase occurring from fiscal year 2004 to fiscal year 2005. Additionally, the overall proportion of inventory exceeding requirements increased when compared with inventory meeting current requirements (see fig. 1). Both the total value of the Army’s on-hand inventory and the total value of on-hand inventory exceeding current requirements increased.
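The reorder-point and order-quantity logic described at the start of this section can be sketched as follows. The classic Wilson economic order quantity formula is shown as an illustrative stand-in; the report does not specify the exact model the Army’s computer program uses.

```python
import math

def economic_order_quantity(annual_demand: float, order_cost: float,
                            holding_cost_per_unit: float) -> float:
    """Textbook Wilson EOQ: the order size that minimizes the combined
    cost of placing orders and holding stock under steady demand."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def needs_reorder(on_hand: int, on_order: int, reorder_point: int) -> bool:
    """An item manager may order more once on-hand plus on-order
    inventory drops to the reorder point (operating requirements
    plus acquisition lead time requirements)."""
    return on_hand + on_order <= reorder_point

# Hypothetical item: 1,200 units/year demand, $150 per order placed,
# $8 per unit per year to hold.
print(round(economic_order_quantity(1200, 150, 8)))               # -> 212
print(needs_reorder(on_hand=90, on_order=60, reorder_point=160))  # -> True
```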
Over the 4-year period, the value of the Army’s on-hand inventory exceeding current requirements averaged $3.5 billion, or 31 percent of total on-hand inventory (see table 4). The Army’s forecasts for items with a recurring demand in fiscal years 2005 through 2007 showed that supplies for some of the on-hand inventory that exceeded current requirements were sufficient to meet many years and sometimes decades of demand. In addition, a substantial amount of the Army’s on-hand inventory showed no projected demand. The results of this analysis are shown in figure 2. As shown in figure 2, about $900 million (22 percent) of the on-hand inventory exceeding current requirements in fiscal year 2007 would be sufficient to satisfy up to 2 years of demand, $1.1 billion (26 percent) would be sufficient to meet demands for 2 to 10 years, $750 million (18 percent) would be sufficient to meet demands for 10 to 50 years, and $600 million (14 percent) would be sufficient to meet demands for 50 years or more. In addition, the Army in fiscal year 2007 had nearly $900 million (20 percent) of on-hand inventory exceeding current requirements for which there were no forecasted demands.

For the 4-year period we reviewed, the value of the Army’s on-order inventory that exceeded current requirements decreased from $150 million in fiscal year 2004 to $110 million in fiscal year 2007. However, because the value of the Army’s on-order inventory also decreased from $5.3 billion in fiscal year 2004 to $4.2 billion in fiscal year 2007, the proportion of Army on-order inventory that exceeded current requirements remained relatively constant (see table 5). For all 4 years, the Army also had some on-order inventory that was designated as potential excess for disposal or reutilization. For example, according to the Army’s fiscal year 2007 stratification report, about $56 million of on-order inventory items were designated as potential excess, meaning that they could be disposed of or reutilized as soon as they were delivered (see table 6).

The Army had substantial inventory deficits for some items—that is, an insufficient level of inventory on hand or on order to meet the current requirements. For the 4-year period we reviewed, the Army’s inventory deficits had an average value of $3.5 billion. However, the value of the deficits decreased by 17 percent from $4.1 billion in fiscal year 2004 to approximately $3.4 billion in fiscal year 2007 (see table 7). Although inventory deficits exist, they do not always translate directly into an operational impact. Army officials told us that, in the past, inventories have fallen below current requirements because of unforeseen demands. In those cases, managers were able to use parts that were designated for safety-level requirements in order to minimize the operational impact of the inventory deficit. However, we could not determine the criticality of the Army’s inventory deficits because this information is not available in stratification reporting.

Our review of the Army’s secondary inventory identified two factors contributing to the consistent misalignment between inventory levels and current requirements. First, while the Army strives to provide effective supply support to the warfighter and uses metrics such as supply availability to measure performance, it lacks corresponding metrics and goals for assessing and tracking the cost efficiency of its inventory management practices.
Inaccurate demand forecasting for spare parts also contributed to the Army having inventory that was in excess of current requirements as well as having inventory deficits. After evaluating its demand forecasting procedures, the Army has issued guidance that the Army expects will improve the accuracy of its forecasts. Because the guidance was issued as we were completing our audit work, we were unable to assess whether the changes to forecasting procedures would be sufficient to address deficiencies. However, these actions are consistent with some of our past recommendations related to inventory management. In addition, we noted during our review that the Army has an opportunity to enhance oversight of inventory management as it develops the roles and responsibilities for the newly designated chief management officer.

Although the Army uses a number of methods to manage its secondary inventory, it lacks metrics and goals for assessing and tracking the cost efficiency of its inventory management practices. DOD’s supply chain management regulation requires the military services to take a number of steps to provide for effective and efficient end-to-end materiel support. The regulation also sets out a number of management goals, including sizing secondary item inventories to minimize the DOD investment while providing the inventory needed; considering all costs associated with materiel management in making best-value logistics decisions; balancing the use of all available logistics resources to accomplish timely and quality delivery at the lowest cost; and measuring total supply chain performance based on timely and cost-effective delivery. To ensure efficient and effective supply chain management, the regulation also calls for the use of metrics to evaluate the performance and cost of supply chain operations. These metrics should, among other things, monitor the efficient use of DOD resources and provide a means to assess costs versus benefits of supply chain operations. However, the regulation does not prescribe specific cost metrics and goals that the services should or must use to track and assess the efficiency of their inventory management practices.

According to Army officials, the Army has processes and controls for efficiently managing secondary inventory and fulfilling the DOD regulation. First, Army officials stated that they use a number of metrics to determine whether the Army provides the inventory needed, including customer wait time, back orders, stock availability, and the not-mission-capable supply rate, which counts the number of vehicles or aircraft that cannot perform the Army’s mission due to a lack of parts. Second, the Army uses a cost differential model to determine the appropriate level of inventory to maintain in order to achieve a desired performance goal. The model is based on a number of variables, including procurement costs, holding costs, frequency of demand, implied stockage cost, and the probability of future demand. Army officials also stated that cost minimization is integral in the formulas used to compute requirements. Third, the Army assesses the effectiveness of inventory by evaluating the Army Working Capital Fund. Specifically, if sales from the fund to customers match the values of inventory purchased, then inventory purchases have been cost effective. While these methods may be effective management tools, we found that the Army has not established metrics and goals for measuring the cost efficiency of its inventory management.
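One candidate metric of the kind discussed here, offered only as an illustrative sketch rather than an established Army or DOD measure, is the dollars of inventory held per dollar of current requirements, tracked over time against a stated goal:

```python
def inventory_to_requirements_ratio(inventory_value: float,
                                    requirements_value: float) -> float:
    """Dollars of inventory held per dollar of current requirements.
    A ratio that stays well above an agreed-upon goal signals that
    holdings exceed what the requirements objective supports."""
    return inventory_value / requirements_value

# Using this report's rounded averages for fiscal years 2004-2007:
# about $16.3 billion in inventory against about $12.7 billion applied
# to current requirements (the unrounded data yield the $1.29 cited below).
print(round(inventory_to_requirements_ratio(16.3e9, 12.7e9), 2))  # -> 1.28
```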
In the absence of such metrics and goals, Army officials lack an effective means for assessing whether inventory is being managed as efficiently as possible and for tracking trends and the impact of any corrective actions. As discussed in this report, we determined that the Army had substantial amounts of inventory that exceeded requirements for all 4 years of our review. However, the consistent misalignment between inventory levels and current requirements is not readily revealed by the Army’s current methods for measuring inventory management. The overall secondary inventory data we analyzed show that the Army carried about $1.29 in inventory for every $1 in requirements to meet its goals during the 4-year period of fiscal years 2004 through 2007. Such a metric, in combination with other cost metrics and established goals, could provide the Army with a capability to track trends and assess progress toward achieving greater cost efficiency.

Our review showed that demand forecasting for spare parts has been inaccurate. According to the Army regulation on centralized management of the Army supply system, the Army uses a computer model to forecast its spare parts requirements. The model uses the average monthly demand over the previous 24 months as a baseline, and it allows the demand forecast to be modified to account for expected future usage. Army officials stated that when demand data do not accurately reflect usage or forecasts for future usage are incorrect, the result is a misalignment between inventory and current requirements. For example, Army officials stated that at the beginning of the global war on terrorism, the average monthly demand was based on a peacetime operations tempo, which did not accurately reflect a wartime usage of items. They also stated that they did not always have complete or accurate information on the amounts or types of weapon systems to be used in the global war on terrorism, so they modified the demand forecast to account for expected future usage based on speculation. As a result, inventory did not always align with requirements.

Army managers who responded to our survey most frequently cited changes in demand as the reason inventory did not align with current requirements. Demand may decrease, fluctuate, or not materialize at all, resulting in inventory exceeding current requirements; conversely, it may increase, resulting in inventory deficits. Table 8 shows the results of our representative survey of items with inventory excesses (160 items), and table 9 shows the results of our survey for items with inventory deficits (56 items). Responses categorized as “other” varied but included issues related to lack of data, obsolescence, or other explanations of demand changes. For example, Army managers stated that the 2005 Base Realignment and Closure (BRAC) Commission recommended a supply transfer of consumable items from the Army to the Defense Logistics Agency (DLA) that was under way during the time of our review. Army managers who participated in the survey could not provide information on some of these items because prior data were not retained.

Our discussions with Army managers provided examples that illustrate the challenges they face in predicting demands for items due to changes in plans, policy, or repair schedules: In anticipation of higher usage, the Army purchased an additional 95 parts of a calibration tool that supports the UH-60 Black Hawk Helicopter.
However, because the increased usage did not occur, in fiscal year 2007, the Army had 130 parts that exceeded current requirements, valued at $7.4 million. Conversely, an unanticipated increase in operational demand led to an inventory deficit of an item that supports the OH-58D Kiowa Warrior helicopter. This helicopter had higher-than-expected usage, which increased the need for repairs and replacements through procurement. In fiscal year 2007, the Army had an inventory deficit of 128 parts, valued at $1.2 million. A change in an overhaul repair program for a shipping and storage container used to store and transport the drive shaft for the M1 Abrams Tank resulted in excess inventory. As stated by an Army manager with whom we spoke and according to Army records, in fiscal year 2007, the Army had 272 on-hand units, valued at over $0.4 million, that exceeded current requirements because the Army’s delay of the overhaul repair program for the Abrams Tank caused demands not to materialize. Having identified a defect in some of the batteries used on the Patriot Missile System, the Army procured 350 new batteries. While awaiting production, however, the Army developed a repair for the defective batteries. The Army could not cancel the procurement order, resulting in an on-hand excess of 619 items, valued at about $0.6 million. Another example of multiple supply sources resulting in excess inventory concerns the corner actuator used to support the hydraulic suspension and steering for the M9 Armored Combat Earthmover vehicle. The Army made an emergency purchase from a sole-source contractor to ensure that sufficient parts would be available while it concurrently developed a repair program. The purchases and repaired assets increased on-hand inventory beyond current requirements, resulting in an excess quantity of 836 parts, valued at $7.7 million.

Army officials stated that forecasts rely heavily on accurate demand rates and relatively stable demand data. They stated in June that, since demand rates had achieved some stability, forecasts had improved. In the future, however—particularly as operations in Southwest Asia decrease—they indicated that they expect to see more difficulties in accurately forecasting future demands for parts.

The Army has taken steps designed to improve its inventory management. In January 2008, the Army began an evaluation of its secondary inventory management processes. Army officials stated that the impetus for the review was the need to manage the effects of the Army’s increased operations tempo, which had resulted in higher usage of secondary inventory. However, because the duration of the heightened operations tempo was unknown, the Army wanted to improve its forecasting processes to better account for a changing operational environment. As part of its supply planning assumptions for fiscal year 2009, the Army shortened the forecast period used by managers to determine procurement decisions. The Army issued guidance in October 2008 directing inventory managers to set a forecast period using the previous 6 months for missiles and the previous 12 months for all other secondary items. Army officials stated that, based on their evaluation, shortening the forecast period from the previous 24 months would provide managers the ability to better capture changing demand patterns, allowing them to adjust their purchase decisions to accommodate new force patterns.
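A minimal sketch of the forecasting change described above, assuming a plain trailing moving average; the Army’s model also permits manual adjustment for expected future usage, which is omitted here. The years-of-supply measure used elsewhere in this report is included for comparison.

```python
def average_monthly_demand(monthly_demands: list[float], window: int) -> float:
    """Baseline forecast: mean demand over the trailing `window` months.
    The October 2008 guidance shortened the window from 24 months to 12
    (6 for missiles) so forecasts catch demand shifts sooner."""
    recent = monthly_demands[-window:]
    return sum(recent) / len(recent)

def years_of_supply(on_hand: int, forecast_monthly_demand: float) -> float:
    """How long current stock would last at the forecast demand rate."""
    return on_hand / (12 * forecast_monthly_demand)

# Hypothetical item whose demand fell from 100 to 40 units/month a year ago:
history = [100.0] * 12 + [40.0] * 12
print(average_monthly_demand(history, 24))  # -> 70.0 (stale baseline)
print(average_monthly_demand(history, 12))  # -> 40.0 (tracks the drop)
print(years_of_supply(on_hand=2400, forecast_monthly_demand=40.0))  # -> 5.0
```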
Army officials believe that shortening the forecast period should help capture changes to demand in a more real-time fashion. The Army’s guidance also directs managers to update forecast models based on the readiness portion of the Army Operations Update to match actual quantities of weapon systems being used in Southwest Asia. According to Army officials, previous models were updated based on estimates that were not always timely or accurate. Army officials stated that the readiness portion of the Army Operations Update reflects the actual quantities of weapons systems as reported by commanders in Southwest Asia. Army officials believe that these changes should provide more accurate and timely information to item managers, allowing for better purchase decisions. The Army guidance was issued as we were completing our audit work. Therefore, we were unable to assess whether these changes to the forecasting model will be sufficient to address this long-standing problem. Since early 1990, when we began reporting on this issue, inaccurate demand forecasts have consistently been identified as a key cause for DOD’s inventory not aligning with requirements. The actions directed by the Army could address some of these challenges, and they are consistent with recommendations we made in our prior work.

In our report on the Air Force’s management of spare parts, we recommended that the Air Force evaluate reasons for decreases in demand and determine actions needed to address these decreases. The Army’s evaluation of decreases in demand has identified the 24-month forecast period as a contributing factor, and its new guidance constitutes a step toward addressing the issue. We also recommended in a previous report on critical parts shortages that the Army should provide item managers with operational information in a timely manner so managers can adjust their requirements forecasting. The Army’s guidance directing managers to use actual quantities of weapon systems as reported in the readiness portion of the Army Operations Update constitutes another step toward addressing this issue. Army officials stated that the primary purpose of the guidance was to improve the performance of inventory rather than to reduce the amount of inventory that exceeds requirements. While Army officials expect that improved forecasting could result in reductions in excess inventory, the Army has yet to develop processes to measure the effectiveness of these actions on reducing excess inventory.

The Army has an opportunity to increase its ability to provide oversight of inventory management. Recently, the Army established a chief management officer for business transformation. However, it has not defined whether and how the chief management officer will have a role overseeing inventory management improvement. The costs of DOD’s business operations have been of continuing concern. In April 2008, for example, the Defense Business Board noted that DOD had not aggressively reduced the overhead costs related to supporting the warfighter, which accounted for about 42 percent of DOD’s total spending each year. The Defense Business Board recommended that DOD align strategies to focus on reducing overhead while supporting the warfighter. In May 2007, DOD established a chief management officer position with responsibility for ensuring that business transformation policies and programs are designed and managed to improve performance standards, economy, and efficiency.
In 2008, the Army designated the Under Secretary of the Army as its chief management officer responsible for business transformation. Although the role of the Army’s chief management officer is still being developed, according to existing Army guidance, one of the Under Secretary of the Army’s roles was to provide oversight of policy, planning, coordination, and execution of matters related to logistics. However, it is unclear whether inventory management was included as part of this existing oversight. The substantial value of the Army’s inventory and the systemic challenges that we have identified since the early 1990s suggest that inventory management can be improved. Accordingly, the new designation of the chief management officer provides the Army an opportunity to enhance oversight of inventory management, as well as gauge the effectiveness of inventory management improvement efforts.

The Army accumulates high levels of secondary inventory each year that exceed current requirements, without demonstrating that these inventory levels are sized to minimize DOD’s investment. When the Army invests in the purchase of inventory items that become excess to its requirements, these funds are not available to meet other military needs. Taking steps to reduce the high levels of inventory exceeding requirements could help to ensure that DOD is meeting supply performance goals at least cost. Among other things, cost-efficiency metrics and goals that reveal the existence of inventory excesses and deficits could provide a basis for effective management and oversight of inventory reduction efforts. Much of the inventory that exceeded current requirements, as well as many of the inventory deficits, resulted from inaccurate demand forecasts. To its credit, the Army has evaluated the unpredictability of demand and has taken steps that it believes will enhance flexibility in adapting to fluctuations in demand. Implementation of the plan, evaluation of the results, and continued monitoring could also assist in addressing this long-standing problem. Finally, since inventory management is part of the Army’s broader business operations and transformation, it is reasonable to expect the newly established chief management officer to exercise some level of oversight of inventory management improvement efforts taken by the Army. Strengthening the Army’s inventory management—while maintaining high levels of supply availability and meeting warfighter needs—could reduce support costs and free up funds for other needs.

To improve the management of the Army’s secondary inventory, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: (1) establish metrics and goals for tracking and assessing the cost efficiency of inventory management and incorporate these into existing management and oversight processes; (2) evaluate the effectiveness of changes to demand forecasting procedures that were set forth in the Army’s October 2008 guidance, including measuring the impact on reducing inventory that exceeds requirements, and based on that evaluation, take additional actions as appropriate to identify and correct systemic weaknesses in forecasting procedures; and (3) monitor the effectiveness of providing item managers with operational information in a timely manner so they can adjust modeled requirements as necessary.
We also recommend that the Secretary of the Army direct the Army’s Chief Management Officer to exercise oversight of Army inventory management improvements to align improvement efforts with overall business transformation and to reduce support costs. This oversight role should not replace or eliminate existing operational oversight responsibilities for inventory management that are exercised by other Army offices, but should ensure that the Army maintains a long-term focus for making systemic improvements where needed and for strategically aligning such changes with overall transformation efforts.

In its written comments on a draft of this report, DOD agreed with three of our recommendations and disagreed with one recommendation. On the basis of DOD’s comments, we have modified one of our recommendations. The department’s written comments are reprinted in appendix II. DOD agreed with our recommendation that the Army establish metrics and goals for tracking and assessing the cost efficiency of inventory management. However, DOD did not provide information on planned corrective actions. According to DOD, the Army has already established inventory metrics and readiness goals, which it evaluates during periodic reviews. DOD also stated that the Army’s primary inventory goals are to achieve high stock availability and low non-mission-capable supply rates for its warfighting systems and capabilities, and that the Army has current inventory metrics that mirror those in commercial inventory management. While the metrics cited by DOD in its response may be useful tools for assessing cost efficiency, we could not determine on the basis of our review that the Army was using these or other metrics to track and assess cost efficiency and to make management decisions aimed at improving cost efficiency. DOD, in its written comments, also did not provide information on how the Army may be using existing metrics to improve cost efficiency. Therefore, we continue to believe that the Army should place a greater emphasis on setting cost-efficiency goals, measuring progress, and establishing accountability for cost efficiency through its existing management and oversight processes.

DOD concurred with our recommendations that the Army evaluate the effectiveness of changes to demand forecasting procedures that were set forth in the Army’s October 2008 guidance and that the Army monitor the effectiveness of providing item managers with operational information in a timely manner. According to DOD, the Army will evaluate the effectiveness of its corrective actions beginning in August 2009, again in February 2010, and periodically thereafter during quarterly reviews. We believe this action is responsive to these recommendations.

DOD disagreed with our recommendation that the Secretary of the Army direct the Army’s Chief Management Officer to exercise oversight of Army inventory management improvements to align improvement efforts with overall business transformation and to reduce support costs. DOD stated that inventory oversight is the operational responsibility of the Army’s Life Cycle Management Commands and appropriately assigned under the combined oversight of the Army G-4, the Assistant Secretary of the Army (Financial Management and Comptroller), and the Army Materiel Command. DOD also stated that the Under Secretary of the Army, as the Chief Management Officer, synchronizes strategic systems and processes across the enterprise at the department level.
We do not dispute the need to maintain existing oversight responsibilities for Army inventory management, and we have modified our recommendation to make this clear. However, we disagree with DOD’s position that the Army’s Chief Management Officer should not have an oversight role. First, the existing combined oversight shared by Army staff and the Army Materiel Command may not be sufficient to ensure long-term change. As we stated previously, for the 4-year period of our review, the Army’s inventory exceeded current requirements by $3.6 billion. While we are encouraged that the Army has taken steps designed to improve inventory management, these steps have occurred only recently compared to the systemic challenges related to inventory management that we have reported on since the 1990s. Given the substantial value of the Army’s inventory, exercising oversight of inventory management is essential, and assigning additional oversight responsibility to a department-level official, such as the Chief Management Officer, could ensure that a continuous focus is maintained. Additionally, since the Army’s Chief Management Officer operates at the department level and is responsible for synchronizing strategic systems and processes across the enterprise, this individual would be uniquely suited to exercise oversight as part of the Army’s broader business transformation efforts. Finally, directing the Army’s Chief Management Officer to exercise oversight of Army inventory management improvement efforts could make oversight operations more uniform across the Department of Defense. In its written response to our review of the Navy’s inventory management, DOD stated that the Navy is developing a business transformation implementation strategy to align with Office of the Secretary of Defense actions in this area, and that the Navy will determine the appropriate role its Chief Management Officer should exercise in inventory management oversight. Accordingly, we continue to believe that our recommendation has merit.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Army; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions concerning this report, please contact me on (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To determine the extent to which the Army’s on-hand and on-order secondary inventory reflects the amount of inventory needed to support current requirements, we obtained the Central Secondary Item Stratification Budget Summary and item-specific reports for the Army’s Aviation and Missile Command (AMCOM) and the Tank-automotive and Armaments Command (TACOM), including summary reports and item-specific data as of September 30 for fiscal years 2004 through 2007. Our analysis did not include the Army’s Communications-Electronics Command (CECOM) because the information system used to manage secondary inventory was not able to provide item-specific data for the period of our review. Stratification reports serve as a budget request preparation tool and a mechanism for matching assets to requirements.
We based our analysis on the Army’s item stratifications within the opening position table of the Central Secondary Item Stratification Reports. To validate the data in the budget stratification reports, we generated summary reports using electronic data and verified our totals against the summary stratification reports obtained from the Army. The Army secondary inventory data are identified by unique stock numbers for each spare part, such as an engine for a particular vehicle, which we refer to as unique items. The Army may have in its inventory multiple quantities of each unique item, which we refer to as individual parts. We calculated the value of each unique item by multiplying the quantity of the item’s individual parts by the item’s unit price, which is the latest acquisition cost for the item. After discussing the results with Army officials, we determined that the data were sufficiently reliable for the purposes of our analysis and findings.

Upon completion of the data validation process, we revalued the Army’s secondary inventory items identified in its budget stratification summary reports because these reports value usable items and items in need of repair at the same rate and do not take into account the cost of repairing broken items. We computed the new value for items in need of repair by subtracting repair costs from the unit price for each item. We also removed overhead charges from the value of each item. In presenting the value of inventory in this report, we converted then-year dollars to constant fiscal year 2007 dollars using Department of Defense (DOD) Operations and Maintenance price deflators.

We consider the Army to have inventory exceeding current requirements if it has more inventory than is needed to satisfy its requirements based on the opening position table of the Army’s budget stratification report. Collectively, these requirements are referred to by DOD as the “requirements objective,” defined as the maximum authorized quantity of stock for an item. However, if the Army has more inventory on hand or on order than is needed to satisfy its requirements, it does not consider the inventory beyond the requirements to be unneeded. Instead, the Army uses the inventory that is beyond its requirements to satisfy future demands over a 2-year period, economic retention requirements, and contingency retention requirements. Only after applying inventory to satisfy these additional requirements would the Army consider that it has more inventory than is needed and would consider this inventory for potential reutilization or disposal. In commenting on our past reports, DOD and the other services have disagreed with our definition of inventory that was not needed to satisfy current operating requirements because it differed from the definition that is used for the inventory budget process. We do not agree with the Army’s practice of not identifying inventory used to satisfy these additional requirements as excess because it overstates the amount of inventory needed to be on hand or on order by billions of dollars. The Army’s requirements determination process does not consider these additional requirements when it calculates the amount of inventory needed to be on hand or on order, which means that if the Army did not have enough inventory on hand or on order to satisfy these additional requirements, the requirements determination process would not result in additional inventory being purchased to satisfy these requirements.
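A sketch of the revaluation and constant-dollar conversion steps described earlier in this appendix, with hypothetical numbers; actual DOD Operations and Maintenance deflators vary by year, and the removal of overhead charges is omitted for brevity.

```python
def revalue_item(unit_price: float, repair_cost: float,
                 serviceable: bool) -> float:
    """Value an unserviceable item at unit price less the cost of
    repairing it, rather than at the full acquisition price."""
    return unit_price if serviceable else unit_price - repair_cost

def to_fy2007_dollars(then_year_value: float, deflator: float) -> float:
    """Convert a then-year value to constant fiscal year 2007 dollars;
    `deflator` is the year's price index relative to fiscal year 2007
    (the 0.92 below is illustrative, not an actual DOD deflator)."""
    return then_year_value / deflator

# A broken part bought in fiscal year 2004: $10,000 unit price,
# $3,500 to repair, assumed fiscal year 2004 deflator of 0.92.
value = revalue_item(10_000, 3_500, serviceable=False)  # -> 6500
print(round(to_fy2007_dollars(value, 0.92)))            # -> 7065
```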
We consider the Army to have inventory deficits if levels of on-hand and on-order inventory are insufficient to meet the requirements objective. To determine the extent to which the Army’s on-order and on-hand secondary inventory reflects the amount of inventory needed to support requirements, we reviewed DOD and Army inventory management guidance, past GAO products on DOD and Army inventory management practices for secondary inventory items, and other related documentation. We also created a database that compared the Army’s current inventory to its current requirements and computed the amount and value of secondary inventory exceeding or not meeting current requirements. Additionally, to understand whether the inventory not needed to support requirements had improved in relation to its years of supply, we calculated the number of supply years a given item would have based on its quantity and demand at the time of stratification in September 2005, September 2006, and September 2007.

We developed a survey to estimate the frequency of reasons why the Army maintained items in inventory that were not needed to support requirements or that did not meet requirements. The survey asked general questions about the higher assembly (component parts) and/or weapon systems that the items support, and the date of the last purchase. In addition, we asked survey respondents to identify the reason(s) for having inventory that exceeded current requirements or had an inventory deficit. We provided potential reasons as responses from which they could select based on reasons identified in some of our prior work. Since the list was not exhaustive, we provided an open-ended response option to allow other reasons to be provided. In addition to expert technical review of the questionnaire by an independent methodologist, we conducted pretests with Army managers from TACOM and AMCOM prior to sending out the final survey instrument. We revised the survey instrument accordingly based on findings from the pretests. We sent this questionnaire electronically to specific Army managers in charge of sampled unique items at two of the Army’s inventory control point locations in Huntsville, Alabama, and Warren, Michigan.

To estimate the frequency of reasons for inventory not needed to meet requirements and inventory deficits, we drew a stratified random probability sample of 220 unique items—153 unique secondary inventory items not needed to support requirements and 67 with inventory deficits—from a study population of 45,007 items (30,222 with inventory not needed to meet requirements and 14,785 with inventory deficits). Based on our analysis of the Army stratification data, for fiscal year 2007, there were 26,535 unique items with on-hand inventory not needed to meet requirements, and 3,687 unique items with on-order inventory not needed to meet requirements. These categories identified a combined value of $4.4 billion of inventory not needed to meet requirements. All of these items met our criteria to be included in our study population of items not needed to meet requirements. Additionally, based on our analysis of stratification data, all of the 14,785 unique items with inventory deficits, valued at $3.4 billion, met our criteria to be included in our deficit study population. We sent 216 electronic questionnaires—one questionnaire for each item in the sample—to the 131 Army managers identified as being responsible for these items.
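A sketch of how responses from a stratified random sample of this kind are weighted back to the study population. The strata counts below are hypothetical; table 10 of the report gives the actual divisions.

```python
def stratum_weight(population_in_stratum: int,
                   respondents_in_stratum: int) -> float:
    """Each respondent stands in for population/respondents items."""
    return population_in_stratum / respondents_in_stratum

def weighted_population_estimate(strata: list[tuple[int, int, int]]) -> float:
    """Estimate how many items in the population share an attribute,
    given (population, respondents, respondents_with_attribute) per stratum."""
    return sum(stratum_weight(pop, resp) * hits for pop, resp, hits in strata)

# Hypothetical three-stratum example (not the report's actual counts):
strata = [
    (20_000, 60, 30),  # low years-of-supply items
    (8_000, 70, 42),   # medium years-of-supply items
    (2_000, 57, 40),   # high years-of-supply items
]
print(round(weighted_population_estimate(strata)))  # -> 16204 items
```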
Four of the items in our sample were determined to be out of scope: three items did not have item managers and had low associated quantities and values, and one item was randomly selected at two commands, so we removed it from one command and retained it at the command with the higher quantity. Table 10 divides TACOM’s and AMCOM’s on-hand excess, on-order excess, and deficit inventory into three substrata, each by the amount of supply for fiscal year 2007. The divisions of the population, sample, and respondents across the strata are also shown in table 10. We received 187 responses to the questionnaire. Each sampled item was subsequently weighted in the final analysis to represent all the members of the target in-scope population.

At the time of this review, the Army was undergoing secondary inventory supply transfer actions as a part of a larger 2005 Base Realignment and Closure (BRAC) recommendation. In our survey of 216 items, we identified 38 items that were a part of this supply transfer to the Defense Logistics Agency (DLA). Most item managers overseeing these previously Army-managed items stated that they no longer retained the data to complete our survey; therefore, these DLA-transferred items are reflected in the “other” category of our sample results in tables 8 and 9.

Because we followed a probability procedure based on random selections, our sample of unique items is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results in 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population.

In addition to sampling errors, the practical difficulties of conducting any questionnaire may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed can introduce unwanted variability into the questionnaire results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. We reviewed each questionnaire to identify unusual, incomplete, or inconsistent responses and followed up with Army item managers by telephone and e-mail to clarify those responses. In addition, we performed computer analyses to identify inconsistencies and other indicators of errors and had a second independent reviewer for the data analysis to further minimize such error.

To determine reasons for the types of answers given in the questionnaires, we held 30 face-to-face discussions with Army inventory managers, 14 of which concerned items in our sample. We judgmentally selected some TACOM and AMCOM items that had unusual or high on-hand, on-order, and deficit inventory. During these discussions we obtained additional detailed comments and documentation related to demand, demand forecasting, acquisitions, retention, and disposal actions. We conducted this performance audit from February 2008 to January 2009 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. On the basis of information obtained from the Army on the reliability of its inventory management systems’ data, the survey results, and our follow-up analysis, we believe that the data used in this report were sufficiently reliable for reporting purposes. In addition to the contact named above, Thomas Gosling, Assistant Director; Carl Barden; Aisha Cabrer; Jim Melton; Steve Pruitt; Carl Ramirez; Minette Richardson; and Cheryl Weissman made key contributions to this report.

Since 1990, GAO has designated the Department of Defense's (DOD) inventory management as a high-risk area. It is critical that the military services effectively and efficiently manage DOD's secondary inventory to ensure that the warfighter is supplied with the right items at the right time and to maintain good stewardship over the billions of dollars invested in their inventory. GAO reviewed the Army's management of secondary inventory and determined (1) the extent to which on-hand and on-order secondary inventory reflected the amount needed to support current requirements and (2) causes for the Army having secondary inventory that exceeded current requirements or, conversely, for having inventory deficits. To address these objectives, GAO analyzed Army data on secondary inventory (spare parts such as aircraft and tank engines) from fiscal years 2004 through 2007.

For the 4-year period GAO examined, the Army had significantly more inventory than was needed to support current requirements. At the same time, the Army had substantial inventory deficits. GAO's analysis of Army data reflected an annual average of about $16.3 billion of secondary inventory for fiscal years 2004 to 2007, of which about $3.6 billion (22 percent) exceeded current requirements. On average, approximately 97 percent of the inventory value exceeding requirements was on hand and the remaining 3 percent was on order. Based on Army demand forecasts, inventory that exceeded current requirements had enough parts on hand for some items to satisfy several years, or even decades, of anticipated supply needs. Also, a large proportion of items that exceeded current requirements had no projected demand. The Army also had an annual average of about $3.5 billion of inventory deficits over this 4-year period.

Army inventory did not align with current requirements over this period because of (1) a lack of cost-efficiency metrics and goals and (2) inaccurate demand forecasting. DOD's supply chain management regulation requires the military services to take a number of steps to provide for effective and efficient end-to-end materiel support. For example, the regulation directs the components to size secondary inventory to minimize DOD's investment while providing the inventory needed. Although the Army has supply support performance measures for meeting warfighter needs, it has not established metrics and goals that can measure the cost efficiency of its inventory management practices. Furthermore, the Army's demand forecasts have frequently been inaccurate.
The Army uses a computer model to forecast its spare parts requirements, but when demand data are inaccurate or untimely, the result is a misalignment between inventory and current requirements. As a result, the Army has accumulated billions of dollars in excess inventory against current requirements for some items and substantial inventory deficits in other items. Without accurate and timely demand data, managers cannot ensure that their purchasing decisions will result in inventory levels that are sized to minimize DOD's investment needed to support requirements. The Army has acknowledged that challenges exist in its forecasting procedures and has begun to take steps to address shortcomings. In October 2008, the Army issued guidance directing managers to reduce the forecast period from 24 months to 12 months to better account for changes in the size of the force and the resulting changes in demands. The guidance also directs managers to update forecast models to match actual quantities of weapon systems being used in Southwest Asia; previous models were updated based on estimates that were not always timely or accurate. These two changes constitute steps toward improving the accuracy of demand forecasts, but GAO was unable to assess their effectiveness because this guidance was issued as GAO was completing its audit work. Also, the Army's recent designation of the Under Secretary of the Army as its chief management officer responsible for business transformation provides an opportunity for enhanced oversight of inventory management improvement efforts. Strengthening the Army's inventory management--while maintaining high levels of supply availability and meeting warfighter needs--could reduce support costs and free up funds for other needs. |
To obtain local views on the usefulness of federal assistance, we conducted structured interviews with 37 members of local law enforcement agencies that participated in the principal federal anti-violent crime task force for metropolitan Los Angeles. We interviewed 3 levels of employees within the local law enforcement agencies that participated in the task force: 24 participating line officers, 8 supervisory officers, and 5 agency heads or agency representatives. For reporting purposes, we combined the responses of the 8 supervisory officers and the 5 agency heads or representatives into 1 category of 13 responses, which we refer to in this report as responses from local officials. We also conducted structured interviews with representatives of nine local law enforcement agencies that did not participate in the task force to obtain their views on federal anti-gang assistance. In addition, federal and local law enforcement agencies provided statistics on the results of task force efforts. A detailed description of our objectives, scope, and methodology is contained in appendix I. We performed our work in Washington, D.C., and Los Angeles, CA, from March 1995 through April 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Attorney General. Responsible Department of Justice officials provided comments, which are discussed at the end of this letter. "Our next step in the fight against crime is to take on gangs the way we once took on the Mob. I am directing the FBI and other investigative agencies to target gangs that involve juveniles in violent crime and to seek authority to prosecute, as adults, teenagers who maim and kill like adults." "Not too long ago, the Federal Government believed that street crime was not its business, but today, we recognize that violent gang crime is a national problem and one that we must do our share to address." In this regard, Title XV of the Violent Crime Control and Law Enforcement Act of 1994, P.L. 103-322 (1994), strengthened federal laws dealing with criminal street gangs. Also, Congress has funded federal efforts to assist state and local law enforcement in fighting violent crime. According to DOJ, the nationwide growth in violent crime can be tied closely to the development of gangs. Although definitive statistics were not available, law enforcement professionals believed that gang violence was a factor—and perhaps the primary factor—in the increase in violent crime during the past decade. DOJ's 1995 report on its anti-violent crime initiative emphasized that violent gang members threatened the safety and stability of neighborhoods, inflicted fear and bodily harm on others through the commission of crime, and robbed residents of the ability to enjoy their streets and homes. Many jurisdictions had focused their efforts on dismantling violent criminal gangs. The Los Angeles District Attorney's Office estimated that in May 1992 there were 1,000 gangs with 150,000 members in Los Angeles County. The District Attorney also reported in 1992 that gangs had been responsible for virtually all growth in the number of homicides since 1984 and that half of all gang members participated in violence. In addition, Los Angeles-based gangs have migrated to other communities around the country, according to studies sponsored by the National Institute of Justice and the National Drug Intelligence Center.
The LA Task Force grew out of the Los Angeles riots of 1992 as federal and local law enforcement combined resources to address gang violence. According to the FBI and other sources, much of the damage caused in the riots could be attributed to acts instigated by specific gangs. Recognizing the seriousness of the problem, the FBI made the development of a joint federal, state, and local effort to fight gang violence a major emphasis of its anti-violent crime strategy for the Central District of California, which includes Los Angeles. This strategy, which was developed primarily by the FBI agent in charge of the LA Task Force, emphasized targeting violent gangs in neighborhoods with high rates of violent crime. The LA Task Force was formalized in October 1992 by written agreement between the FBI and participating local law enforcement agencies covering, among other things, roles and responsibilities. An FBI representative was to assume the role of program manager for all task force operations and was to receive input from leaders of the participating agencies. The FBI was to provide necessary resources for the task force, including vehicles, when requested and if possible. The original agreement included the FBI; the Bureau of Alcohol, Tobacco and Firearms (ATF); the Immigration and Naturalization Service; the Compton, Inglewood, Long Beach, and Los Angeles Police Departments; and the Los Angeles County Sheriff's Department. The original mission of the LA Task Force was to identify and prosecute those individuals responsible for committing violent crimes during the 1992 riots. There was particular emphasis on perpetrators associated with violence-prone street gangs, especially gang leaders and core members. After completing its efforts related to the riots, the LA Task Force's mission was broadened to include the identification and prosecution of the most criminally active and violent individuals and enterprises in the Los Angeles metropolitan area, but the emphasis on gang-related violence was maintained. The total authorized fiscal year 1996 budget for the LA Task Force was almost $394,000. This did not include undisclosed confidential expenditures for specific investigations. According to the various federal and local law enforcement personnel in the Los Angeles area whom we interviewed, federal law enforcement assistance targeted directly at gangs in the area consisted primarily of the use of federal laws and authority not otherwise available to local law enforcement agencies, as well as funds, equipment, and personnel. Such assistance was provided principally through task forces of federal and local law enforcement officers—mainly the FBI-led LA Task Force. This task force consisted of several squads, each of which targeted a specific crime problem, such as fugitives, or a specific crime, such as bank robbery. Within their areas, most squads targeted gangs that committed violent crimes, and each squad usually focused on a different gang. Five of the 47 law enforcement agencies we identified in the Los Angeles metropolitan area participated in the LA Task Force. Assistance provided through the LA Task Force included the use of federal laws and authority (including prosecutive, wiretap, and witness security assistance); overtime pay; office space; various types of equipment; personnel; and money for undercover drug/firearms purchases and informants.
For example, according to DOJ and FBI officials, FBI expenses approved (in September 1995) for fiscal year 1996 in support of state/local officers participating on the LA Task Force included the following: rental and maintenance expenses for 36 automobiles at a total cost of $298,350; rental expenses for 120 pagers at a total cost of $8,740; rental expenses for 48 cellular phones and associated airtime at a total cost of $77,760; and expenses of $9,052 for the operation of covert telephone lines and the maintenance of various task force equipment. In addition, the FBI reimbursed about $80,000 to state/local agencies in the Los Angeles area to provide for the payment of overtime to officers participating on the LA Task Force. The FBI agent in charge of the task force said that nonparticipating agencies generally did not receive the amount and types of FBI assistance available to the five agencies formally participating in the task force. However, according to other FBI officials, training, forensic services, fugitive apprehension, and various other specialized types of assistance could be made available to any local law enforcement agency through less formal, "as needed" arrangements. Besides the assistance provided directly through the task force, 7 of the 13 local law enforcement officials we interviewed said their agencies received other federal law enforcement assistance, such as training. Six of the seven officials indicated that obtaining such assistance was facilitated directly by their agency's participation in the task force. As previously noted, 5 of the 47 local law enforcement agencies in the Los Angeles metropolitan area that we identified participated in the LA Task Force. The FBI agent in charge of the LA Task Force stated that given its resource constraints, the FBI, in the wake of the Los Angeles riots of 1992, tried to target those localities that had the greatest gang problems and where it believed its resources could have the most impact. According to the agent in charge, the five participating agencies were selected on this basis. The majority of local law enforcement officials we contacted believed that the FBI had selected the appropriate targets and expressed no concerns about not having been invited, or not being able, to participate. We contacted representatives from nine agencies in the Central District whose jurisdictions had relatively high rates of violent crime but that were not participating in the task force. Seven of the nine agencies' representatives stated either that their gang problems did not warrant federal task force involvement or that their agencies did not have the resources to participate in a task force even if they had been invited, or had wanted, to participate. The remaining two agencies' representatives indicated that they had gang problems and expressed interest in participating in the task force, given the opportunity. Also, seven of the nine representatives expressed the belief that if their agencies needed federal assistance on a gang problem, it would be available from the local FBI office on an as-needed basis. The remaining two agency representatives had no opinion. Even those local agencies that were involved in the LA Task Force could not always participate fully because of resource constraints. For example, one local law enforcement agency said it had to reduce the personnel committed to the task force from 60 officers to 15 officers.
A representative of this agency stated that this reduction reflected tight budgetary conditions in the agency, not dissatisfaction with task force results, and that many of these officers had been reassigned to community policing efforts, which were a higher priority for the agency. Another local law enforcement agency intended to withdraw completely from the LA Task Force due to budget restrictions, but the FBI persuaded the agency to continue because the agency's participation was critical to completing an anti-gang effort. In addition to the FBI-led LA Task Force, ATF, according to agency officials, provided direct assistance, such as personnel and equipment, to local law enforcement agencies combating gangs and used federal firearms laws and other laws against gang members. ATF's efforts, according to the officials, were smaller than the FBI's and less formal in that they did not always involve formal task forces. In this regard, they said that ATF-led task forces, in contrast to the FBI's LA Task Force, usually targeted a specific local gang problem and consisted of one or two ATF agents working with local police. Fifteen of the 18 local line officers we interviewed who expressed an opinion felt that the LA Task Force met their overall needs to a great extent. We questioned them about the usefulness of specific categories of federal assistance provided through the task force. About three-fourths of the line officers indicated that the assistance was very useful in 8 of the 11 categories of assistance. (See app. II for the line officers' perceptions of the usefulness of the specific categories of assistance.) Of all the types of assistance received through the task force, the line officers were most satisfied with wiretap assistance, money received to pay informants, and funding for drug or gun purchases in undercover operations. The 16 who received wiretap assistance said they found it to be very useful. They cited the value of information gained through wiretaps and the difficulty of obtaining wiretaps at the state level. Some line officers stated that their investigations could not have been completed without wiretaps. The line officers we interviewed expressed some concerns about the personnel assistance and equipment they received, as well as about federal prosecution of targeted gang members, although more than half believed that such assistance was very useful. In regard to personnel assistance, 10 of the 23 line officers who received such assistance believed that the number of FBI agents assigned to their squads was insufficient, and 5 believed that turnover in FBI agents assigned to their squads hindered task force operations. For example, one line officer reported that some FBI agents were assigned to his squad for only 6 months, which was not long enough for an agent to gain an informant's trust and work effectively with him. Eight line officers also said FBI agents' lack of street experience hindered task force operations. Four line officers expressed concern that some of the FBI agents who participated in the task force were not interested in working gang cases. When we asked Los Angeles FBI officials about task force agents' interest in anti-violent crime work, they said the FBI tried to assign agents to areas that interested them, but it was not always possible to give them their first choice. They acknowledged that new agents may not always be suited to violent crime work and that the office was attempting to recruit agents who were interested in gang work from other FBI offices.
The FBI officials noted, however, that it is important for new agents to gain some task force experience so that they can effectively replace experienced task force agents who "burn out." The officials also told us that although new agents are limited in the tasks they can perform, they can contribute to task force operations by assisting in arrests or completing paperwork. Although, overall, line officers believed the equipment they received through the task force was very useful, several felt that some of the equipment did not fully meet their needs, in terms of either quantity or quality. For example, they believed that cellular phones—which were critical to their work because they provided a constant and reliable means of communication with informants and other task force members—were not available in sufficient quantity. In this regard, one local officer noted that task force members were on call 24 hours a day and that cellular phones allowed informants to call them at home during off hours without requiring task force members to give informants their home phone numbers, which might compromise the officers' personal safety. Another line officer told us that the lack of a cellular phone caused him to miss an opportunity to apprehend a murder suspect. Some line officers said that they had bought their own cellular phones or that phone costs often exceeded the FBI's reimbursable limit. FBI officials acknowledged that the lack of cellular phones was a serious safety issue, but they said that the FBI lacked the funds to equip every task force member with a cellular phone. One official stated that the FBI Los Angeles Office was following FBI guidelines, which called for providing one cellular phone for every three FBI agents and task force members. In fiscal year 1996, the Los Angeles Office was funding 48 cellular phones for use by 143 task force members—a ratio of about 1 phone for every 3 members. The official believed one phone for every two task force members would be a better ratio but said that, either way, more phones would be needed in the future due to an expected increase in the number of task force members. Eleven of the 19 line officers who worked with the U.S. Attorney's Office to prosecute gang cases in federal court said that federal prosecution was very useful. Five said that federal prosecution was important to them because federal sentences are much longer in actual time served than state sentences. However, some line officers were critical of the length of time the U.S. Attorney's Office took to prosecute cases, the amount of evidence it required, and the district's high prosecutive thresholds. In response, the Violent Crime Coordinator for the U.S. Attorney's Office in the Central District of California commented in September 1995 that local law enforcement officers were more familiar with the state prosecutive system and that, by comparison, federal prosecutions might seem overly slow and to require excessive evidence. He said that federal cases often required more preparation time and better evidence to meet federal court standards. Regarding the prosecutive thresholds, he said that the standards for accepting violent crime cases in Central California for federal prosecution generally had become less stringent during the previous year and a half and that the U.S. Attorney's Office was accepting more cases for prosecution.
Six of the 13 law enforcement officials we interviewed from agencies that participated in the LA Task Force said that joint federal and local task forces led to better relations and increased cooperation and coordination among law enforcement agencies in general. Eleven of the 13 officials we interviewed said that they had good relationships with the FBI. Many said that current relations with the FBI were the best they had ever been, partly as a result of the LA Task Force. With regard to the previously noted direct assistance provided to local law enforcement officials by ATF, the officials we interviewed who worked with ATF were generally satisfied with the assistance they received. Eight of the 13 local law enforcement officials we interviewed generally believed that LA Task Force efforts had reduced gang violence, while 5 believed it was too early to measure the impact. Of the eight who said LA Task Force efforts had reduced gang violence, six believed that task force efforts had had a significant or great impact on gang violence. One official said that his agency could not have achieved the same results without the assistance of the LA Task Force. Local law enforcement line officers who participated in the LA Task Force were also quite positive about current or future task force impact on gang violence. Sixteen of the 22 line officers who expressed an opinion spoke positively about current or future task force impact. Twelve of them believed that LA Task Force efforts had reduced violent gang crime to a great or very great extent. Six others said it was too early in their investigations to say what impact task force efforts would have on violent gang crime, but three of them expected positive results. All 21 of the local line officers who expressed an opinion on the question stated that their agencies could not obtain similar results without using federal task forces. Twenty-two officers mentioned long-term investigation as an element differentiating the federal task force approach to violent crime from local law enforcement's approach. Several line officers indicated that long-term investigations permitted local law enforcement to deal more effectively with violent criminal gangs. Federal and local officials also provided us with statistics on the results of task force efforts. These statistics focused on arrests, indictments, and convictions that officials attributed to the LA Task Force's efforts. FBI statistics showed that from February 1992 through September 1995, the LA Task Force was responsible for 2,086 arrests (918 of which were for violent crimes), 239 federal indictments (161 involving violent crime), and 156 convictions (116 involving violent crimes such as bank robbery). According to FBI statistics, the LA Task Force was also responsible for 119 state convictions, 25 of which involved narcotics violations such as the sale and transportation of cocaine and 94 of which involved violent crimes, such as robbery, murder, and assault with a deadly weapon. Overall, three-fourths of the federal and state convictions were on violent crime charges. The FBI also credited the LA Task Force with drug and firearm seizures and the recovery of assets. Some federal and local officials also credited the LA Task Force with reducing the crime rates in certain neighborhoods. Others credited the LA Task Force with making it safe for children to play outdoors again.
We also obtained examples of specific federal anti-gang investigations targeting Los Angeles-based gangs, including five LA Task Force investigations and one ATF investigation. The examples indicated that the LA Task Force had an impact on gangs in Los Angeles and in other communities to which Los Angeles-based gangs had migrated. The examples are described in appendix III. We requested comments on a draft of this report from the Attorney General. A representative of DOJ's Office of the Assistant Attorney General for Administration informed us at a meeting on July 24, 1996, that comments were requested from the Department's various headquarters and field office units with responsibility for its operations combating violent crime as described in this report. The representative and officials from DOJ's Criminal Division, the Executive Office of U.S. Attorneys, and the FBI said that the general consensus of the officials representing those units was that the report, by and large, accurately represented these operations. However, the officials provided additional information concerning DOJ's monetary commitment to support the LA Task Force during fiscal year 1996. We incorporated the information in this report, where appropriate. We are sending copies of this report to the chairmen and ranking minority members of the Senate Committee on the Judiciary and the Permanent Subcommittee on Investigations, Committee on Governmental Affairs; the chairman and ranking member of the House Committee on the Judiciary; the Secretary of the Treasury; the Director of ATF; the Director of the FBI; the Administrator of the Drug Enforcement Administration; and the heads of the local law enforcement agencies that participated in our study. We also will make copies available to others upon request. The major contributors to this report are listed in appendix IV. If you have any questions concerning this report, please call me on (202) 512-8777. The objective of this self-initiated review was to examine the Department of Justice's (DOJ) anti-violent crime initiative in the Central Judicial District of California, which covers the Los Angeles area, as it pertained to gang violence. We focused our review on the Los Angeles area because it was one of the areas that had the most gangs and gang members in the country. We focused on the Los Angeles Metropolitan Task Force on Violent Crime (LA Task Force) because it was the primary federal anti-gang effort in the Los Angeles area. Specifically, we wanted to determine and describe (1) how and what federal law enforcement assistance was provided to local law enforcement in the Los Angeles area to fight gang violence, (2) how useful Los Angeles area local law enforcement believed federal assistance was in fighting gang violence, and (3) what results Los Angeles area local law enforcement officials believed were achieved from joint efforts to fight gang violence. Our scope was limited to law enforcement assistance and did not address social programs aimed at preventing or reducing gang violence. To obtain general information on anti-violent crime efforts, we interviewed officials from DOJ headquarters offices, including the Criminal Division, Executive Office of the U.S. Attorneys, and the Federal Bureau of Investigation (FBI). We also interviewed representatives from the Department of the Treasury and the Bureau of Alcohol, Tobacco and Firearms (ATF) headquarters offices.
We reviewed DOJ and ATF policy statements on violent crime, including the Attorney General’s National Anti-Violent Crime Strategy and DOJ’s Report on First-Year Accomplishments: Anti-Violent Crime Initiative. We met with representatives from the Office of the U.S. Attorney for the Central District of California and the Los Angeles District Attorney’s Office to discuss federal and local investigative and prosecutive strategies for fighting gang violence. We discussed how federal investigative efforts were coordinated with local efforts, how prosecutive decisions were made on gang cases, and the differences between state and federal approaches to prosecuting gang cases. We also reviewed the Central District’s strategy for fighting violent crime, as directed by the Attorney General’s National Anti-Violent Crime Strategy. In addition, we interviewed the U.S. Attorney’s designated Violent Crime Coordinator to determine how state and local law enforcement agencies participated in the development of the strategies and to discuss the Office’s policy for accepting task force cases for prosecution. To understand how and what federal law enforcement assistance was provided to local law enforcement agencies in Los Angeles to fight gang violence and what results were obtained from joint federal and local efforts, we interviewed representatives from the FBI, the Drug Enforcement Administration, the Immigration and Naturalization Service, and ATF who oversaw federal task force efforts. Our review, however, focused primarily on the anti-gang efforts of the FBI, and to a lesser extent ATF, since federal law enforcement assistance for gang enforcement in Los Angeles at the time of our review came mainly through the FBI-led LA Task Force. We also reviewed data from the FBI and U.S. Attorney’s Office on LA Task Force operations during fiscal years 1992 through 1995, including the number of state and federal arrests, indictments, and prosecutions that resulted from task force operations. We did not independently verify these statistics and cannot attest to their validity. To obtain views of local law enforcement personnel on the usefulness of federal law enforcement assistance in fighting gang violence, we conducted structured interviews with 37 members of the 5 local law enforcement agencies that participated in the LA Task Force. Because local law enforcement personnel’s perceptions on the usefulness of federal assistance varied according to their position and relationship with the federal agencies, we interviewed 3 levels of employees within the local agencies that participated in the task force: 24 participating line officers, 8 supervisory officers, and 5 agency heads or agency representatives. For reporting purposes we combined the responses of the 8 supervisory officers and the 5 agency representatives into 1 category of 13 responses, which we referred to as responses from local officials. We identified a universe of 44 officers who participated in the LA Task Force at the time of our review. In doing so, we counted only those local task force members whose squads specifically targeted violent gangs. Although the squads that we excluded also investigated violent gang members, gangs were not the primary focus of their investigations. According to FBI officials, approximately 60 local law enforcement officers were participating in the LA Task Force at the time of our review. From the universe, we judgmentally selected 24 line officers. 
To do so, we interviewed all participating officers from three of the five police agencies: the Compton Police Department, the Inglewood Police Department, and the Long Beach Police Department. For the Los Angeles Police Department and the Los Angeles County Sheriff's Department, the two local agencies that dedicated the most personnel to the task force, our selection of officers was based on several factors, including geographic areas of interest, the gangs they targeted, and the officers' availability. We also reviewed local law enforcement records on crime rates and task force costs, procedures, and accomplishments during fiscal years 1992 through 1995. We did not independently verify these statistics and cannot attest to their validity. We identified nine local law enforcement agencies in the Central District of California—eight of whose jurisdictions had relatively high rates of violent crime—that did not participate on a federal task force. We conducted structured interviews with agency representatives to determine (1) what types of federal assistance, if any, they requested and received from the federal investigative agencies; (2) how satisfied they were with that assistance; and (3) why their agencies did not participate on a federal task force. We conducted structured interviews with 24 local law enforcement line officers who participated in the LA Task Force. We interviewed at least two officers from each of the five local agencies participating in the task force, except the Inglewood Police Department, which had only one officer on the task force. As shown in table II.1, most officers were quite positive about the assistance they received from the LA Task Force. Table II.1 (Summary of Local Los Angeles Area Law Enforcement Officers' Views of Federal Assistance) indicates, for each category of assistance, whether officers received it and, if so, how useful they found it; one of the categories covers coordination and cooperation on criminal investigations between federal and local agencies and/or between local agencies. FBI and ATF officials provided us with information on several federal anti-gang efforts. The following briefly describes five LA Task Force efforts and one ATF investigation. Some of these efforts targeted specific factions or "sets" within a gang or were part of larger investigations directed at a gang over time. This operation was part of a long-term investigation of a Los Angeles gang that figured significantly in the 1992 riots. The FBI began investigating the gang in 1989 and established a joint investigation with a local law enforcement agency in 1992, after the riots. After over 2 years of joint investigation, the FBI and the local agency initiated a widely publicized 1-day anti-gang operation involving about 800 FBI agents and local law enforcement officers, covering a 30-by-30-block neighborhood in South Central Los Angeles. According to the local agency, the gang faction targeted in the effort accounted for less than 1 percent of the community population but was responsible for over 80 percent of the community's violent crime. The 1-day operation resulted in four federal indictments on charges such as felon in possession of a firearm and possession with intent to distribute. Task force members also seized 67 firearms, about 2,000 rounds of ammunition, and 2 kilos of methamphetamine. Local law enforcement officials also credited the operation with reducing violent crime in the targeted area by 57 percent in the 2 months following the effort.
According to police statistics, violent crime (including robbery, attempted murder, rape, kidnapping, aggravated assault, and assault with a deadly weapon) dropped from 262 crimes in the same 2-month period of the preceding year down to 112 crimes. The operation received widespread media attention, with some community residents quoted as being pleased with the Task Force’s efforts and others as being upset with them. FBI officials believed that these efforts were successful, citing, as an example, that gang members went into seclusion after the operation. The officials justified the large amount of personnel resources expended as necessary to ensure officer safety, protect evidence, and apprehend suspects. The local agency that took part in the operation felt that the results of the effort, in terms of the reduction of violent crime, were more significant than what could have been achieved by a local anti-gang squad in 6 months for the same amount of money. One LA Task Force investigation involving gang migration received the 1994 Attorney General’s award for excellence. The investigation involved a gang member who used his Hollywood music studio to facilitate an interstate drug trafficking network. Working with the Denver, CO, FBI office, the LA Task Force was able to wiretap the gang member’s home, business, and cellular phone. Through the wiretaps, the task force learned that the drug trafficking network extended to Milwaukee, WI; Cleveland, OH; Knoxville, TN; Atlanta, GA; Birmingham, AL; Denver; and Seattle, WA. The drug trafficking network was able to make substantial profits by selling its drugs in other cities. For example, rock cocaine that would sell for $20 to $25 in Los Angeles could be sold for $100 in Birmingham. An ounce of cocaine that would sell for $500 in Los Angeles would sell for $1,000 in Birmingham. The LA Task Force’s efforts led to the arrest of the ring’s associates in the cities in which they operated. Two of the gang’s ring leaders and at least four other gang members have been convicted of conspiracy to distribute drugs and of possession and distribution. All are awaiting sentencing. The ring leaders are likely to receive 20-year sentences, while the other four gang members face sentences ranging from 14 to 30 years. This investigation was one of several task force efforts directed against one of Los Angeles’ most notorious and violent street gangs. In this effort, the LA Task Force squad apprehended 2 gang members who led a ring responsible for more than 175 “takeover” bank robberies in the Los Angeles area. The two gang members used juvenile gang members to commit the robberies, supplied them with weapons and plans for carrying out the heists, and kept the bulk of the money for themselves. By showing that the two gang leaders had directed and organized the robberies, the U.S. Attorney’s Office was able to successfully prosecute both ring leaders on federal charges of carjacking, armed bank robbery, and conspiracy to commit armed bank robbery. Both members pled guilty to the charges; one received a 25-year sentence, and the other received a 30-year sentence. FBI officials told us this was an “enormously successful” case because it showed gang members that the federal government was serious about prosecuting gang cases. The number of takeover bank robberies in the Central District was increasing until these gang members were arrested in June 1993; over the next few years, the number of such robberies decreased approximately 57 percent. 
According to an FBI official, the apprehension of these two gang members was a major factor in the decrease in takeover bank robberies in the Los Angeles area. The fourth effort we reviewed focused on a prison-based gang that also had control over gang activities in local communities. This effort represented a combined federal/local effort to prevent a gang from consolidating and gaining more control over street narcotics sales in the Hispanic community. The effort reflects the federal task force’s proactive approach to gangs, that is, investigating a gang overall to help prevent crime from spreading rather than reacting to the crimes of individual gang members on a case-by-case basis. This effort not only led to federal indictments against 22 defendants but also, according to both FBI and local officials, led to the prevention of over 40 homicides. The U.S. Attorney is pursuing further indictments on the basis of organized criminal activity as well as individual criminal acts. Another investigation by the LA Task Force involved migration by gang members from Long Beach, near Los Angeles, to Spokane, WA. According to a task force member, Long Beach, Compton, and Los Angeles gangs had spread to Spokane, where they faced little or no competition and could make tremendous amounts of money from drug trafficking. When a detective with the Spokane Police Department saw an influx of gang members into Spokane, he accessed the Gang Reporting Evaluation and Tracking (GREAT) database and discovered that many of the gang members were from Long Beach. He contacted the Long Beach Police Department and was referred to the LA Task Force. Task force members arrived in Spokane within 3 or 4 days after being contacted. According to the Spokane Police Department detective, the LA Task Force’s assistance was invaluable. Task force members were very familiar with gang members from Long Beach and were able to provide information on these gang members, including photographs. An LA Task Force member said that task force efforts helped to indict 9 gang members in Spokane on federal charges, while the Spokane Police Department detective said 40 or more indictments were obtained on 7 to 9 gang members, with most indictments being handled at the state level. According to another Spokane law enforcement officer, gang members were given sentences of up to 20 years. According to the Spokane Police Department detective, after the federal indictments, many of the Long Beach gang members fled and gang activity in Spokane dramatically decreased. However, gang activity has gradually increased since then as the LA Task Force squad targeting the Long Beach gang was temporarily discontinued and as gang members adjusted their strategies. An LA Task Force member reported that since termination of the task force squad, the Long Beach gang was suspected of once again sending major amounts of cocaine to Spokane. In another effort, ATF agents worked with local police to target one of the most violent and criminally active street gangs in Los Angeles. This gang distributed phencyclidine (PCP) in California and other states. ATF initiated the investigation by making drug buys from lower level gang members. ATF was able to gain the cooperation of those who had sold them drugs and others charged with firearms violations in targeting higher level gang members. Ultimately, ATF was able to target not only the gang but also the organization that manufactured the PCP. 
During the 3-year investigation, law enforcement personnel seized 44 firearms, $120 million (street value) worth of PCP, the largest PCP lab site ever seized by law enforcement in the United States, and other assets. Charges against eight defendants, who were gang members or affiliates, included running a continuing criminal enterprise, conspiracy to manufacture a controlled substance, aiding and abetting the manufacture of PCP, and distribution/possession of a controlled substance. The defendants pled guilty or went to trial and were convicted. Sentences ranged from 17-1/2 to 45 years. Two of the three defendants who had not yet been sentenced were scheduled for sentencing in April 1996 and were expected to receive life sentences. Richard R. Griswold, Project Manager; Barbara A. Guffy, Site Senior; and James R. Russell, Evaluator. | GAO reviewed how the Federal Bureau of Investigation (FBI) and other federal agencies worked with local law enforcement agencies to target gangs in the Los Angeles metropolitan area. GAO found that: (1) FBI provided assistance to local law enforcement in the Los Angeles area through the Los Angeles Metropolitan Task Force on Violent Crime; (2) federal assistance provided through the task force included the use of federal laws and authority not otherwise available to local law enforcement, as well as personnel, overtime pay, office space, various types of equipment, and funding for law enforcement activities; (3) local law enforcement officials believed that the task force enhanced their ability to conduct long-term, proactive investigations into entire gangs rather than short-term, reactive investigations; (4) local law enforcement officers believed that, overall, federal assistance helped to reduce gang violence; and (5) local law enforcement officers believed that the number of FBI agents assigned to the task force was insufficient and that agent turnover and a lack of cellular telephones hindered operations.
Each year, we issue well over 1,000 audit and evaluation products to assist the Congress in its decision making and oversight responsibilities. As one indicator of the degree to which the Congress relies on us for information and analysis, GAO officials were called to testify 151 times before committees of the Congress in fiscal year 2001. Our audit and evaluation products issued in fiscal year 2001 contained over 1,560 new recommendations targeting improvements in the economy, efficiency, and effectiveness of federal operations and programs that could yield significant financial and other benefits in the future. History tells us that many of these recommendations will contribute to important improvements. At the end of fiscal year 2001, 79 percent of the recommendations we made 4 years ago had been implemented. We use a 4-year interval because our historical data show that agencies often need this length of time to complete action on our recommendations. Actions on the recommendations in our products have a demonstrable effect on the workings of the federal government. During fiscal year 2001, we recorded hundreds of accomplishments providing financial and other benefits that were achieved based on actions taken by the Congress and federal agencies, and we made numerous other contributions that provided information or recommendations aiding congressional decision making or informing the public debate to a significant extent. For example, our findings and recommendations to improve government operations and reduce costs contributed to legislative and executive actions that yielded over $26.4 billion in measurable financial benefits. We achieve financial benefits when our findings and recommendations are used to make government services more efficient, improve the budgeting and spending of tax dollars, or strengthen the management of federal resources. Not all actions on our findings and recommendations produce measurable financial benefits. We recorded 799 actions that the Congress or executive agencies had taken based on our recommendations to improve the government’s accountability, operations, or services. The actions reported for fiscal year 2001 include actions to combat terrorism, strengthen public safety and consumer protection, improve computer security controls, and establish more effective and efficient government operations. In 1990, we began an effort to identify for the Congress those federal programs, functions, and operations that are most at risk for waste, fraud, abuse, and mismanagement. Every 2 years since 1993, with the beginning of each new Congress, we have published a summary assessment of those high-risk programs, functions, and operations. In 1999, we added the Performance and Accountability Series to identify the major performance and management issues confronting the primary executive branch agencies. In our January 2001 Performance and Accountability Series and High-Risk Update, we identified 97 major management challenges and program risks at 21 federal agencies as well as 22 high-risk areas and the actions needed to address these serious problems. Figure 1 shows the list, as of May 2002, of high-risk issues including the Postal Service’s transformational efforts and long-term outlook, which we added to the high-risk list in April 2001. 
Congressional leaders, who have historically referred extensively to these series in framing oversight hearing agendas, have strongly urged the administration and individual agencies to develop specific performance goals to address these pervasive problems. In addition, the President's recently issued management agenda for reforming the federal government mirrors many of the issues that GAO has identified and reported on in these series, including a governmentwide initiative to focus on strategic management of human capital. We will be issuing a new Performance and Accountability Series and High-Risk Update at the start of the new Congress this coming January. The Government Management Reform Act of 1994 requires (1) GAO to annually audit the federal government's consolidated financial statements and (2) the inspectors general of the 24 major federal agencies to annually audit the agencywide financial statements prepared by those agencies. Consistent with our approach on a full range of management and program issues, our work on the consolidated audit is done in coordination and cooperation with the inspectors general. The Comptroller General reported on March 29, 2002, on the U.S. government's consolidated financial statements for fiscal years 2001 and 2000. As in the previous 4 fiscal years, we were unable to express an opinion on the consolidated financial statements because of certain material weaknesses in internal control and in accounting and reporting. These conditions prevented us from being able to provide the Congress and American citizens an opinion as to whether the consolidated financial statements are fairly stated in conformity with U.S. generally accepted accounting principles. While significant and important progress is being made in addressing the impediments to an opinion on the U.S. government's consolidated financial statements, fundamental problems continue to (1) hamper the government's ability to accurately report a significant portion of its assets, liabilities, and costs, (2) affect the government's ability to accurately measure the full costs and financial performance of certain programs and effectively manage related operations, and (3) significantly impair the government's ability to adequately safeguard certain significant assets and properly record various transactions. In August 2001, the principals of the Joint Financial Management Improvement Program (JFMIP)—Secretary of the Treasury O'Neill, Office of Management and Budget Director Daniels, Office of Personnel Management Director James, and Comptroller General Walker, head of GAO and chair of the group—began a series of periodic meetings that have resulted in unprecedented substantive deliberations and agreements focused on key financial management reform issues, such as better defining measures for financial management success. These measures include being able to routinely provide timely, accurate, and useful financial information and having no material internal control weaknesses or material noncompliance with applicable laws, regulations, and requirements. In addition, the JFMIP principals have agreed to (1) significantly accelerate financial statement reporting so that the government's financial statements are more timely and (2) discourage costly efforts designed to obtain unqualified opinions on financial statements without addressing underlying systems challenges. For fiscal year 2004, audited agency financial statements are to be issued no later than November 15, with the U.S.
government’s audited consolidated financial statement becoming due by December 15. GAO also issues a wide range of standards, guidance, and management tools intended to assist the Congress and agencies in putting in place the structures, processes, and procedures needed to help avoid problems before they occur or develop into full-blown crises. For example, the Federal Managers’ Financial Integrity Act of 1982 (FMFIA) requires GAO to issue standards for internal control in government. Internal control is an integral part of an organization’s management that provides reasonable assurance that the following objectives are being achieved: effectiveness and efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. As such, the internal control standards that GAO issues provide an overall framework for establishing and maintaining internal control, and identifying and addressing major performance and management challenges and areas at greatest risk to waste, fraud, abuse, and mismanagement. A positive control environment is the foundation for the standards. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. One factor is the integrity and ethical values maintained and demonstrated by management and staff. Agency management plays a key role in providing leadership in this area, especially setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate. In addition to setting standards for internal control, GAO participates in the setting of the federal government’s accounting standards and is responsible for setting the generally accepted government auditing standards for auditors of federal programs and assistance. GAO also assists congressional and executive branch decision makers by issuing guides and tools for effective public management. For example, in addition to setting standards for internal control, we have issued detailed guidance and management tools to assist agencies in maintaining or implementing effective internal control and, when needed, to help determine what, where, and how improvements can be made. We have also issued guidance for agencies to address the critical governmentwide high-risk challenge of computer security. This work draws on lessons from leading public and private organizations to show the Congress and federal agencies the steps that can be taken to protect the integrity, confidentiality, and availability of the government’s data and the systems it relies on. Similarly, we have published guidance for the Congress and managers on dealing with the other governmentwide high-risk issue— human capital. These guides on human capital are assisting managers in adopting a more strategic approach to the use of their organization’s most important asset—its people. Overall, GAO has undertaken a major effort to identify ways agencies can effectively implement the statutory framework that the Congress has put in place to create a more results-oriented and accountable federal government. GAO has an investigations unit that focuses on investigating and exposing potential criminal misconduct and serious wrongdoing in programs that receive federal funds. 
The primary mission of this unit is to conduct investigations of alleged violations of federal criminal law and serious wrongdoing and to review law enforcement programs and operations, as requested by the Congress and the Comptroller General. Through investigations, our special investigations team develops examples of misconduct and wrongdoing that illustrate program weaknesses, demonstrate potential for abuse, and provide supporting evidence for GAO recommendations and congressional action. Investigators often work directly with other GAO teams on collaborative efforts that enhance the agency's overall ability to identify and report on wrongdoing. Key issues in the investigations area are fraudulent activity and regulatory noncompliance in federal programs; unethical conduct by federal employees and government officials, as well as fraud and misconduct in grant, loan, and entitlement programs; the adequacy of federal agencies' security systems, controls, and property as tested through proactive special operations; and the integrity of federal law enforcement and investigative programs. One example of these collaborations between our investigations team and audit and evaluation teams is the use of forensic audit techniques to identify instances of fraud, waste, and abuse at various agencies. This approach combines financial auditor and special investigator skills with data mining and file comparison techniques to identify unusual trends and inconsistencies in agency records that may indicate fraudulent or improper activity. For example, by comparing a list of individuals who received government grants and loans to a list of people whose Social Security numbers indicate they have died, we identified people improperly receiving benefits; a simplified sketch of this kind of file comparison appears below. Data mining techniques have also been used to identify unusual government purchase card activity that, upon further investigation, was determined to involve abusive and improper purchases. Overall, in 2001 GAO referred 61 matters to the Department of Justice and other law enforcement and regulatory agencies for investigation, and its special investigations accounted for $1.8 billion in financial benefits. GAO also maintains a system for receiving reports from the public on waste, fraud, and abuse in federally funded programs. Known as GAO FraudNET, the system received more than 800 cases in 2001. Reports of alleged mismanagement and wrongdoing covered topics as varied as misappropriation of funds, security violations, and contractor fraud. Most of the matters reported to GAO were referred to inspectors general of the executive branch for further action or information. Other matters that indicate broader problems or systemic issues of congressional interest are referred to GAO's investigations unit or other GAO teams.
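As an illustration of the file-comparison technique described above, the following is a minimal, hypothetical Python sketch that matches benefit recipients against deceased-person records by Social Security number. The file names, column names, and the flagging logic are assumptions for the example; this is not GAO's actual tooling or data.

```python
import csv

def load_ssns(path, ssn_field="ssn"):
    """Return the set of Social Security numbers in one column of a CSV file."""
    with open(path, newline="") as f:
        return {row[ssn_field].strip() for row in csv.DictReader(f)}

def flag_improper_payments(recipients_path, deceased_path):
    """List recipient records whose SSN also appears in the deceased file."""
    deceased = load_ssns(deceased_path)
    flagged = []
    with open(recipients_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["ssn"].strip() in deceased:
                flagged.append(row)  # candidate for investigator follow-up
    return flagged

# Hypothetical usage:
# for match in flag_improper_payments("grant_recipients.csv", "death_records.csv"):
#     print(match["ssn"], match["name"])
```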
| The United States General Accounting Office (GAO) is an independent, professional, nonpartisan agency in the legislative branch that is commonly referred to as the investigative arm of Congress. Congress created GAO in the Budget and Accounting Act of 1921 to assist in the discharge of its core constitutional powers--the power to investigate and oversee the activities of the executive branch, the power to control the use of federal funds, and the power to make laws. All of GAO's efforts on behalf of Congress are guided by three core values: (1) Accountability--GAO helps Congress oversee federal programs and operations to ensure accountability to the American people; (2) Integrity--GAO sets high standards in the conduct of its work. GAO takes a professional, objective, fact-based, nonpartisan, nonideological, fair, and balanced approach to all activities; and (3) Reliability--GAO produces high-quality reports, testimonies, briefings, legal opinions, and other products and services that are timely, accurate, useful, clear, and candid. |
Argentine President Carlos Menem came into office in 1989 with the broad goal of restructuring the economy and reducing both annual fiscal deficits and the external public debt. The public sector was extensive at that time, and most public enterprises were money losers. Publicly owned enterprises had historically been one of the primary sources of chronic budget deficits in Argentina. In the 1980s, the national government owned 17 companies that produced minerals, petroleum, natural gas, and refined fuels, as well as those that were involved in the provision of public utility services, including telecommunications. The government also owned approximately 40 military-related enterprises, whose activities ranged from weapons to timber, petrochemicals, strategic minerals, and construction. It also owned 100 smaller enterprises, including radio and television stations, hotels, and several airlines; and owned and operated the national railroad, which included freight and passenger services. Privatization was an important part of the broader goal of restructuring the economy, but it also enabled the government to reduce what had become an unmanageable level of external public debt. The government used the sale of state enterprises to generate cash as well as to conduct what are called debt-equity swaps. In a debt-equity swap, bank debt is replaced with an equity investment. For example, stock in an entity that is being privatized is exchanged for external public debt owed to a foreign creditor bank. This type of transaction enabled the government to retire its external debt directly. Based on our calculations, the cumulative proceeds from privatization from 1990 through 1994, including cash and debt reduction, equaled approximately 9 percent of Argentina's average annual gross domestic product (GDP) during this period. This exceeded the level of cumulative proceeds realized by Mexico from 1989 through 1992, which was 6.3 percent of Mexico's average annual GDP. However, New Zealand remains the country in our study with the highest level of cumulative sales proceeds as a percent of average annual GDP—at 14.1 percent from 1987 through 1991. Table 1 provides additional comparative detail on all of the countries in our study. We obtained our information on the privatization process in Argentina through interviews with government officials directly involved with privatization, and through the use of academic and economic literature and official government material. We conducted this work in Washington, D.C., from January through March 1996 in accordance with generally accepted government auditing standards. World Bank experts on privatization and a privatization expert in Argentina reviewed this document, and we have incorporated their comments where appropriate. We did not verify the accuracy of all of the information provided to us, nor did we evaluate the relative success of the privatization program in achieving national goals. Four of the five countries we studied in our earlier report have parliamentary systems of government, but Argentina, like the United States, has a presidential system, with an executive branch, a judiciary, and a bicameral legislature. In Argentina, the executive branch had primary control over the privatization process, while the congress maintained an oversight role. Two laws were passed in 1989 that facilitated privatization: the State Reform Law and the Emergency Law.
According to the World Bank, the State Reform Law gave the executive branch sweeping powers to reform the state. The State Reform Law established objectives and procedures for privatization, and the Emergency Law suspended subsidies and removed barriers to foreign investment. We were told that the State Reform Law specified which enterprises were subject to privatization; additional privatizations required congressional approval. The State Reform Law also created a bicameral legislative oversight commission on reform and privatization, which was composed of members from the majority and opposition parties. The Argentine privatization process was less centralized and more flexible than in the other countries we studied. A separate privatization committee was created for each privatization, and the planning and implementation of the privatizations occurred primarily within these committees. A subsecretariat for privatization was formed within the Ministry of the Economy and Public Works and Services several years after the Menem privatization initiatives began, but an expert on privatization in Argentina stated that the unit was created primarily to gather and disseminate information about privatization and to keep foreign investors informed about the status of the privatization initiatives. Most of the state-owned companies in Argentina were located within the Ministry of the Economy and Public Works and Services or the Ministry of Defense, and the Ministers of these units were responsible for appointing the members of the committees within their respective ministries. The committees generally included representatives of the entity being privatized and staff from within either the Ministry of the Economy and Public Works and Services or the Ministry of Defense. The work of the committees was reviewed by the office of the auditor general, and the committees relied extensively on the expertise of consultants, private sector industry experts, and legal advisors to assist them with the sale preparations and transactions. The Argentine government implemented its privatization program quickly—in 3 years, it privatized almost all of its state-owned enterprises. It began with large, complex entities, such as the telecommunications company and the state airline. We were told that the less rigid structure of the privatization process in Argentina facilitated this speed. The Menem government used the successful completion of privatizations to develop credibility for its far-reaching program of economic change. The World Bank has reported that from 1990 through 1993, Argentina sold 34 enterprises and awarded concessions for 19 services. In Argentina, the government was required to estimate the worth of an entity prior to sale as well as determine what level of improvements and investment should be required from the purchaser once it acquired the entity. The government used this information to establish a minimum bid. In most cases, the purchasers of newly privatized firms were also required to invest a certain amount in the entity in addition to the purchase price, and each sale had to include specifications related to investment and improvements. We were told by a privatization expert in Argentina that the government used a variety of valuation techniques, including, in some cases, net present value analysis. We were also told that the government used a market-based discount rate for calculating the net present value of the entity.
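To illustrate the net present value technique mentioned above, the sketch below discounts a stream of expected annual cash flows at a market-based rate. The cash flows and the 12 percent rate are hypothetical, chosen only for illustration; they do not reflect any actual Argentine valuation.

    # Illustrative net present value (NPV) calculation; the cash flows
    # and discount rate are hypothetical, not actual valuation data.
    def net_present_value(cash_flows, discount_rate):
        # cash_flows[t] is the expected cash flow at the end of year t+1
        return sum(cf / (1 + discount_rate) ** (t + 1)
                   for t, cf in enumerate(cash_flows))

    # Five years of expected cash flows (in millions), discounted at a
    # market-based rate of 12 percent
    flows = [100.0, 110.0, 120.0, 125.0, 130.0]
    print(round(net_present_value(flows, 0.12), 1))  # prints 415.6

Under this approach, a higher discount rate (reflecting greater perceived risk) lowers the valuation, which is one reason the liabilities and contingencies discussed below reduced sale prices.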
The government generally retained the entities’ liabilities, including debt, but did not attempt to improve the entities’ efficiency in advance of their sale. The market price of an entity is reduced by the liabilities that come with it; the price may be reduced further by the risk premium associated with any contingencies. The Argentine government absorbed most of the known liabilities but let the market make decisions regarding the future efficiency of the firm. We were told by government officials that entities in poor condition offered the private sector an opportunity for improvement and profit, similar to “fixer-uppers,” where profits awaited those who could achieve efficiency improvements. Argentine government officials stated that the efficiency of the privatized firms has significantly improved. For example, we were told that freight productivity has increased and a greater annual volume is now shipped with fewer employees. According to a former government official, telephone lines of the former state telecommunications company have increased and waiting periods for repairs have decreased. The government generally broke up state monopolies and sold the components separately in order to promote competition. Public enterprise assets, such as telephone networks, gas transmission systems, and electricity generation plants, were either sold or awarded through concessions to private sector bidders. The new owners were then required to create private sector corporations to control the assets of the privatized entities. In our earlier study, the countries we examined generally either privatized entities that were already in a corporate form or converted agencies into a corporate form prior to privatization. Sometimes they did this in order to increase the efficiency of the entity and help establish a track record for the entity as a commercial enterprise. In other cases, the governments used this as an opportunity to clean up the entity’s outstanding obligations prior to sale and thus facilitate the sale process. In Argentina, incorporation did not involve an operational restructuring of the entity; rather, it was a legal proceeding to allow the new owner to acquire the assets of the former government enterprise. Public sector employment was reduced significantly as part of the privatization process, but the government also provided generous severance packages, and a World Bank study and government officials have reported that many of the separations were voluntary. The Argentine government reported that, from 1990 through 1994, the number of employees working for public enterprises was reduced from about 348,000 to about 67,000, an 81 percent drop. Of this reduction, 40.8 percent was reportedly due to voluntary or compulsory separation, 41.5 percent to transfers to other levels of government or private firms, and 17.7 percent to normal attrition. Even though public sector employment was significantly reduced, World Bank reports indicate that the Argentine government met with limited resistance from labor during this period of restructuring. The World Bank stated that factors such as low public sector wages, the large number of employees holding more than one job, and generous severance benefits, may explain this limited opposition. The Argentine government privatized public enterprises primarily through divestiture and the awarding of concessions. A concession, or franchise, provides a private sector company with the exclusive right to provide services in a geographic area. 
A key issue in the Argentine privatization process was whether to sell an entity or to award a concession. A privatization expert told us that there were no explicit criteria for awarding a concession as opposed to selling an entity but that there were implicit criteria. If an asset was considered strategically important to the nation, the government would not sell it. This has often meant that natural monopolies, or entities that have a strongly monopolistic infrastructure, have not been sold. The government awarded concession rights in the following areas: freight and passenger rail, ports, toll roads, water supply, and sanitation services. In preparation for offering concessions for the railroads, the government separated rail into three components: freight, intercity passenger rail, and urban passenger rail, which included the Buenos Aires Metro. Intercity passenger services were then either transferred to provincial governments or closed. The government awarded 10-year concessions (20 years for the Buenos Aires Metro) for the urban passenger lines and 30-year concessions for freight services. The terms of the passenger concession agreement defined the tariffs to be charged, service levels and quality to be provided, and the capital improvements to be carried out. The winning bids were chosen based on the minimum cost to the government for the combined operating support and capital program costs. By contrast, freight concessions were awarded to the highest bidder, including an allowance for proposed capital investment and the number of existing employees to be hired by the concessionaire. Most sales involved open, competitive bidding for the controlling interest in the entity. The government generally retained a noncontrolling portion of the shares, typically about 39 percent, to be sold later in a public offering. It did this to ensure that it would share the benefits if the price of the entity's stock rose once the entity was established in the private sector. This procedure has similarities to the use of the "clawback" in New Zealand and the United Kingdom. (Clawbacks are stipulations that, under certain conditions, require the buyers to return a share of profits—or losses—to the government.) The government also retained a portion of the shares for purchase by the employees who were transferring from the public enterprise to the new private entity. The employee share was generally close to 10 percent, although some privatizations reserved as little as 2.5 percent for employees. Worker-shareholders also had the right to elect a representative to the company's board of directors. The number of shares that each employee could purchase was determined by factors such as the employee's years of employment and salary level. Upon retirement, death, or employment termination, an employee's shares were sold back to the company. There are few restrictions on foreign investors in Argentina. According to the Organization for Economic Cooperation and Development (OECD), foreign investors have full access to the local capital market. The World Bank and the OECD also have reported that there is a concentration of asset ownership in Argentina and that most of the public enterprises were sold to financial consortia, which were composed of several Argentine companies allied with international groups. Although the Argentine government generally tried to foster competition through the privatization process, it has experienced some problems promoting competition.
One example of a problematic privatization involved the sale of Aerolineas Argentinas, the state-owned airline. When the airline was offered for sale in 1990, the only qualified bidder was a consortium that included the only other airline in the country. According to the World Bank, instead of disallowing the bid, the government allowed the sale to occur. Service was poor and losses continued, and in 1993, the government bought back approximately 30 percent of the airline's shares. As a result of this sale, the government now makes a greater effort to ensure that there is more than one bidder and that a regulatory framework is in place prior to a sale. The government ultimately sold the shares of Aerolineas Argentinas back to the private sector. The government has had difficulty establishing a regulatory regime, as illustrated by the privatization of the former state telecommunications company, the first company to be privatized in Argentina. In some instances, the government preserved the monopolistic structure of the entity being sold in order to attract private capital. A 1995 World Bank report stated that the government in Argentina split the telecommunications market into two regional monopolies to increase the competitiveness of the industry, but we were told that the government also used the monopoly rights to increase the proceeds from the sale. Although a regulatory agency had been established to monitor the telecommunications industry, the government did not, according to the World Bank, develop clear regulatory processes prior to the sale. The government subsequently brought the regulatory agency under closer scrutiny and formed a plan for improving its regulatory framework. There have been improvements in the agency's performance, but a 1993 World Bank report stated that regulatory capacities in Argentina may take many years to develop. Government officials told us, however, that now that the government has experience with both regulated monopolies and competition, it strongly prefers the latter. The speed and variable manner in which Argentina privatized may help to explain why the country's regulatory capacities are not more developed. A privatization expert told us that Argentina's decentralized privatization process allowed the government to privatize quickly and to formulate solutions to problems as they arose. While this speed and lack of a rigid structure may have had a positive effect on the government's ability to sell enterprises and award concessions, we were told that these factors may have had a negative effect on the government's ability to create an adequate regulatory system within a relevant time frame. We were told by a government official in Argentina that the government is required to use the proceeds from privatization to finance the social security system or to buy down existing debt. According to the OECD, by the end of 1992, debt-equity swaps had enabled the government to retire over $11 billion in external public debt, which represented approximately 5 percent of GDP in 1992. According to the World Bank, the government also received about $8.5 billion in cash during this period.
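The ratios cited in this report can be checked with simple arithmetic. The sketch below works only from the figures quoted above, so the GDP value it derives is implied by those figures rather than independently sourced.

    # Back-of-the-envelope check of the ratios cited above. All inputs
    # come from the text; the GDP figure is derived, not sourced.
    debt_retired = 11.0   # $ billions retired through debt-equity swaps
    cash_received = 8.5   # $ billions received in cash

    # $11 billion equaled about 5 percent of 1992 GDP, implying:
    implied_gdp_1992 = debt_retired / 0.05
    print(implied_gdp_1992)  # 220.0 ($ billions)

    # Combined proceeds as a share of that implied GDP:
    total_proceeds = debt_retired + cash_received
    print(round(100 * total_proceeds / implied_gdp_1992, 1))  # 8.9

The resulting 8.9 percent is broadly consistent with the roughly 9 percent of average annual GDP reported earlier for cumulative privatization proceeds from 1990 through 1994.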
Although it is difficult to determine the amount of net proceeds that Argentina realized from its privatization program, the World Bank and OECD have stated that increased tax revenues from the new corporations, as well as the savings from the discontinuation of subsidies to money-losing enterprises, were more important to the economy than the privatization proceeds. In our previous report, we noted that in the United States—as in other nations—divestiture raises the issues of how best to evaluate a proposal to sell, who should manage the valuation and sale processes, how to estimate future proceeds, how the sale should be structured, and how the proceeds should be treated in the budget. Although the experiences of the governments we examined suggested that often no single answer is widely applicable to all governments in all situations, we found that the information these governments provided may help the United States smooth the transfer of viable operations from the public to the private sector. With respect to the privatization process, we noted that a centralized approach was common and offered a number of advantages. We suggested that the Congress assign responsibility for all divestitures to a central agency in the United States as a means of developing a consistent management process. With respect to treatment of the proceeds in the budget, we found widespread use of budget rules designed to prevent the use of one-time proceeds to finance ongoing spending. We also said that budget rules should not dominate the divestiture decision; the decision to privatize should be made on other grounds. Although the Argentine government had—as did the other governments we studied—certain unique approaches to privatization, it also displayed a number of the common elements we identified in our earlier report. For example, the goals for privatization, which included reducing debt and restructuring the economy, were very important in determining the speed and scope of the privatization program and, like the other governments we studied, the Argentine government generally used the proceeds from privatization to reduce debt and thus interest costs. Unlike the other governments in our earlier report, Argentina did not centralize the privatization process. Instead, the government created a separate privatization committee for each privatization and allowed the process to remain somewhat flexible. This allowed the government to privatize quickly but may have hindered its ability to establish a regulatory framework at the same pace at which it privatized the state-owned industries. We are sending copies of this report to the President of the Senate, the Speaker of the House of Representatives, and the Chairmen and Ranking Members of the House and Senate Budget Committees. We are also sending copies to the Director of the Congressional Budget Office, the Secretary of the Treasury, and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9142 if you or your staff have any questions. Barbara Bovbjerg, Assistant Director, and Hannah Laufe, Senior Evaluator, were major contributors to this report. Susan J. Irving, Associate Director, Budget Issues
| Pursuant to a congressional request, GAO examined privatization in Argentina, focusing on how Argentina: (1) values and prepares assets for sale; and (2) uses sales proceeds. GAO noted that: (1) Argentina's privatization process is less centrally controlled than other countries' and uses special privatization committees to foster its sales process; (2) the Argentine government generally retains the liabilities and obligations of the entities being privatized in order to enhance their sale price and ensure their sale; (3) the Argentine government believes that the private sector does a better job of investing in and improving enterprises; (4) although the government dissolved some industries to foster more competition, many are still monopolies and require some regulatory framework; (5) when the Argentine government values the assets of an entity, it determines the level of improvement and investment that the purchaser will need to make; and (6) the proceeds from Argentina's privatization are used to finance its social security system and to retire existing debt. |
The RLA was passed in 1926, in part to establish a legal framework for avoiding disruptions in rail service and interstate commerce. The RLA was later amended to extend its provisions to the airline industry, and also to establish NMB as a federal agency to administer the law. According to NMB, the original bill was drafted jointly by rail management and labor representatives and was enacted by Congress without amendment. The RLA establishes several key principles. For example: It requires parties (rail and air carriers and their employees’ representatives) to “exert every reasonable effort” to settle disputes to avoid interruption to commerce or to the operation of any railroad or airline. It gives participants the right to designate their representatives under the Act without interference from the other party and assures employees the right to determine a collective bargaining representative without interference from their employers. It establishes procedures for resolving disputes over pay, rules, or working conditions during collective bargaining, as well as disputes resulting from the interpretation or application of existing collective bargaining agreements. NMB is headed by a three-member board, with each member appointed by the President and confirmed by the Senate for a term of 3 years. No more than two members of the board can be from the same political party. In August 2013, a third board member was confirmed by the Senate, filling a board position that had been vacant since June 2012. The board members typically designate a chairman annually. The board members provide overall leadership and strategic direction for NMB, and retain responsibility for key functions such as releasing the parties from the mediation of major disputes if no agreement can be reached. The board has delegated day-to-day administration and oversight to NMB’s Chief of Staff and General Counsel (see fig. 1). As of August 2013, NMB had 49 employees, including the 3 board members. NMB also contracts with approximately 430 arbitrators. In fiscal year 2013, NMB had an operating budget of $12.7 million. NMB has sole jurisdiction to certify employee unions in the rail and air industries. Under the RLA, employees have a right to select a union free from influence, interference, or coercion from their employer. Eligible employees in a craft or class at a given carrier—those who perform the same duties and functions, such as locomotive engineers or pilots— select a union representative on a systemwide basis. For example, the pilots at an airline must be represented by the same union regardless of where they are located geographically. Unions are selected through secret-ballot elections conducted by NMB. If there is a question concerning representation of a craft or class, NMB is charged with resolving the representation dispute through its Office of Legal Affairs. For an election to occur, at least 50 percent of the eligible employees in the craft or class must submit signed authorization cards indicating their interest in being represented by a union. An NMB investigator compares signatures on those cards to signature samples for all eligible voters in the craft or class provided by the carrier to assess their authenticity and determine if there is a sufficient percentage to hold an election. The investigator also addresses any challenges to the eligibility of individual voters. Votes are cast by phone or via the Internet, in an election overseen by NMB. 
If a union receives a majority of the votes, NMB certifies the union as the employee representative. Additionally, NMB protects against "interference, influence, or coercion by either party" during this process, and it can order a new election or shorten the time before another union may apply to represent the employees if it determines any of these have occurred. In 2010, NMB changed its rules for certifying a union. Previously, a union had to receive votes from a majority of all employees in a craft or class who were eligible to vote in order to be certified, or approved, as the employee representative. This meant that employees who did not vote were, in effect, counted as voting against union representation. Under the rule change, an employee who chooses not to vote is no longer counted because union representation is now determined by a majority of votes cast. Numerous stakeholders, including union and carrier groups and members of Congress, submitted comments for and against the change. In addition, one NMB board member at the time wrote a dissenting opinion in the proposed and final rules published in the Federal Register.
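A simple hypothetical tally illustrates the effect of the rule change; all of the numbers below are invented for illustration.

    # Hypothetical representation election illustrating the 2010 rule
    # change; the tallies are invented for illustration only.
    eligible_voters = 1000
    votes_cast = 700
    votes_for_union = 420

    # Pre-2010 rule: a majority of all eligible voters was required,
    # so employees who did not vote counted, in effect, as "no" votes.
    certified_old_rule = votes_for_union > eligible_voters / 2   # False

    # Current rule: a majority of the votes actually cast is required.
    certified_new_rule = votes_for_union > votes_cast / 2        # True

    print(certified_old_rule, certified_new_rule)

In this example, the union falls short under the old rule (420 of 1,000 eligible voters) but is certified under the current rule (420 of 700 votes cast).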
Once a union has been certified to represent a group of employees, the carrier is required to negotiate with—and only with—that union in a process known as collective bargaining. When the parties cannot reach agreement on the terms of a new or revised collective bargaining agreement—such as working conditions or rates of pay—it is classified as a "major dispute." Either party can apply for NMB's mediation services to resolve their differences, or NMB may impose mediation if it finds that resolving the dispute is in the public's interest. In general, mediation is a process through which disputing parties, with assistance from a neutral third party (known as a mediator), seek ways to settle their dispute. In fiscal year 2012, NMB provided mediation services in 144 cases, 46 of which were closed and 98 of which were pending at the end of that year. Mediation continues until the parties reach agreement or NMB determines that further mediation will not be effective and offers the parties the option of interest arbitration, in which case a neutral arbitrator would determine what provisions to include in the new or modified collective bargaining agreement. Either party may, however, refuse arbitration. If either party refuses, a 30-day cooling-off period is triggered before the parties can exercise what is known as self-help. Self-help includes actions such as the union going on strike or the carrier denying employment or refusing to admit union employees onto the property in a lockout. To prevent a work stoppage, the President may create a Presidential Emergency Board to help settle the dispute. In fiscal year 2012, one Presidential Emergency Board was created in a dispute involving the five largest U.S. railroads and numerous short-line and regional railroads. (See fig. 2 for a description of the process for resolving major disputes under the RLA.) In addition to mediation and arbitration, NMB provides voluntary alternative dispute resolution (ADR) services, such as facilitation and training, to help unions and carriers learn to resolve disputes using less confrontational methods. The RLA also offers another type of arbitration—grievance arbitration—to help resolve "minor disputes." As opposed to major disputes, which involve the establishment or modification of a collective bargaining agreement, minor disputes are disagreements over how to interpret and apply existing agreements. For example, employees may file grievances if they believe they were wrongfully fired or disciplined. If the carrier and employee cannot resolve their dispute, the RLA permits a party to refer the dispute to arbitration before an adjustment board created by the rail or air industry. The adjustment board consists of a carrier representative, a union representative, and a neutral arbitrator. In most instances, the neutral is selected by the two representatives. If they are unable to agree on an individual, they may request that NMB appoint a neutral. Grievances that the parties have been unable to resolve themselves are submitted to this board for resolution. In this capacity, the neutral is called upon to break a tie. Unlike major disputes, minor disputes cannot trigger self-help actions such as strikes or lockouts. NMB does not directly provide arbitration services through its own staff, but rather maintains a list of registered arbitrators from which the parties can select someone to review and decide their case. In the airline industry, the parties pay the costs of arbitration. In the railroad industry, however, consistent with the requirements of the RLA, NMB pays the fee (currently $300 per day) and travel expenses of the arbitrator. In fiscal year 2012, NMB's arbitrators closed 3,869 rail grievance arbitration cases, and 2,084 were pending at year's end. In that year, NMB's arbitration budget was $2 million, not including NMB staff salaries. NMB differs in several key ways from the other three labor relations agencies in the United States: the National Labor Relations Board (NLRB), Federal Mediation & Conciliation Service (FMCS), and Federal Labor Relations Authority (FLRA) (see table 1). OMB and OPM have key oversight responsibilities for all federal agencies, including NMB. Among other things, OMB is responsible for providing oversight of agencies' management, including information technology and procurement. OMB is also responsible for preparing and implementing the President's annual budget and for providing guidance to agencies on how to comply with the GPRA Modernization Act of 2010 (GPRAMA). OPM is the central personnel management agency of the federal government, charged with administering and enforcing federal civil service laws, regulations, and rules. OPM also maintains a personnel management oversight program to ensure that agencies comply with merit system principles and standards set by the office. As part of this program, OPM conducts audits of agencies' human capital programs. In addition, OPM administers a survey of federal employees to measure employees' perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. In 1993, Congress passed GPRA, which established strategic planning, performance planning, and performance reporting as a framework for agencies to communicate progress in achieving their missions. GPRAMA, enacted in 2011, established some important changes to existing requirements by placing a heightened emphasis on priority-setting, cross-organizational collaboration to achieve shared goals, and the use and analysis of goals and measurements to improve outcomes.
GPRAMA enhanced agency-level planning and reporting requirements and requires agencies to have additional leadership involvement and accountability. NMB lacks a robust strategic planning process with formal mechanisms for gathering congressional and stakeholder input, and as a result, the agency has not met some federal requirements for performance planning. For example, NMB updated its previous strategic plan, which covered fiscal years 2005 to 2010, in November 2012, 7 years after it was issued. NMB also missed a February 2012 deadline set by GPRAMA for agencies to update their existing strategic plans to be consistent with new requirements. NMB officials said they engage in strategic planning but have not made the development of a strategic planning document a high priority. The agency had delayed the start of the strategic planning process until the completion of changes in the agency's information technology systems. The sequencing of NMB's strategic plan development—after the roll-out of other initiatives—runs counter to our previous findings on leading results-oriented organizations. Strategic planning should provide the basis for everything an organization does. For example, all decisions, including those involving resources such as information technology systems and human capital investments, should flow from an agency's strategic plan. Developing a strategic plan can help clarify organizational priorities and unify an agency's staff in the pursuit of shared goals. Agencies are also required to issue a new strategic plan in February 2014, and every 4 years thereafter, concurrent with the President's budget (issued no later than the first Monday in February). (See sidebar for key strategic planning requirements.) In August 2013, NMB officials said they intend to complete a new strategic plan by December 2013, before the February 2014 deadline. NMB lacks a systematic way of involving congressional and other stakeholders in its strategic planning process. As noted earlier, agencies should consult with Congress and other stakeholders for input when developing and adjusting their strategic plans. NMB officials told us they do not have a routine process for gathering input from congressional stakeholders during the strategic planning process, a key focus of GPRAMA. Consultations with Congress are intended, in part, to ensure that agency performance information is useful for congressional oversight and decision making. As we have previously reported, having pertinent and reliable performance information available is necessary for Congress to adequately assess agencies' progress in making performance and management improvements and ensure accountability for results. In addition, many stakeholders in the rail and air industries, both in labor and management, told us NMB does not seek out and incorporate stakeholder input in its strategic planning efforts, and NMB officials confirmed that they do not have a formal mechanism for involving stakeholders in this process. NMB officials told us that the agency instead receives input from stakeholders in informal ways, such as getting feedback from parties during the course of mediation, or from members of two joint labor-management groups: the Dunlop II Committee, which has reviewed NMB's mediation function, and the Section 3 Subcommittee, which has reviewed NMB's rail arbitration function. However, when we interviewed officials from both groups, they reported that NMB has not always involved them in planning.
Stakeholder involvement is important to help agencies ensure their efforts and resources are being targeted to the highest priorities. Without a robust strategic planning process as a guide, NMB is also not meeting federal requirements for annual performance planning and reporting. Federal agencies are required to develop annual performance plans. These plans use performance measurement to reinforce the connection between the long-term strategic goals outlined in an agency’s strategic plan and the day-to-day activities of its managers and staff. Annual performance plans are to include performance goals which cover each agency’s program activities as listed in the budget, a description of the necessary resources and strategies to achieve these goals, a balanced set of performance measures for each goal, and a discussion of how the measures will be verified. (See sidebar for key performance planning requirements.) An agency’s performance goals establish desired performance levels, and performance measures are used to assess progress toward achieving those goals. Yet the goals and sub-goals listed in NMB’s fiscal year 2014 budget submission do not consistently meet the basic GPRAMA requirements of being objective, measurable, and quantifiable. For example, NMB states that it has a goal to “better track the history of cases” within its mediation and ADR programs. However, this goal does not contain a target number of mediation cases to be tracked (quantifiable), what would be considered “better” (objective), a performance indicator to gauge progress (measurable), or a time period in which to accomplish this goal (see fig. 3). Without measurable targets or timeframes, these goals do not establish intended performance or allow NMB and the public to assess progress. Since NMB has not developed objective, measurable, or quantifiable goals, it is not well positioned to develop performance measures to use in reporting progress or results in any program area. NMB’s fiscal year 2011-2016 strategic plan stated that the agency plans to formulate performance measures for its strategic goals and each of the related performance goals in its annual budget submission. However, the performance goals in NMB’s fiscal year 2014 budget submission did not contain indicators or measures that could be used to gauge agency performance. NMB officials said they are currently working to develop more quantifiable, time-based performance measures for NMB program areas in its next strategic plan. Our previous work on results-oriented organizations shows that many agencies need years to develop a sound set of performance measures. NMB officials told us they internally track measurable targets, such as the number of days NMB mediators meet with parties before reaching resolution on a specific collective bargaining agreement, but the agency does not report these metrics publicly. Without objective, measurable, and quantifiable performance goals made available to the public, Congress and stakeholders lack information on the extent to which NMB is making annual progress toward its strategic goals and how NMB is planning to use its resources. NMB is following most key practices for financial accountability and control, an integral part of an organization’s management that should reach throughout all departments and programs as well as financial management and reporting functions. There are two key practices, however, that NMB is following only partially or minimally (see table 2). 
NMB routinely prepares comprehensive, agencywide financial statements and contracts for independent audits of those financial statements. The agency has reported that it received 15 years of unqualified financial audit opinions. As part of these audits, the auditors also provide NMB with a report on internal controls over financial reporting. In those reports, the auditors identified a material weakness in two of the years audited (2010 and 2011). The material weakness in both years was related to the untimely recording of obligations related to NMB’s arbitration services. This issue was downgraded to a significant deficiency in fiscal year 2012 after NMB took steps to address the issue. According to a senior NMB official, the agency expects to address all recommendations related to the 2012 significant deficiency with full implementation of its Arbitrator Workspace, a web-based information system that replaces multiple electronic forms used by NMB to track arbitrator activity and financial obligations to contracted arbitrators. In addition to annual financial statement audits, NMB contracts with the same independent auditors to review its internal controls over one key program area (mediation, arbitration, representation, and ADR) or management area (procurement and personnel/payroll) annually. The law commonly known as the Federal Managers’ Financial Integrity Act of 1982 and its implementing guidance from OMB require federal agencies to develop and implement appropriate and cost-effective internal controls for results-oriented management, assess the adequacy of those internal controls, identify needed areas of improvement, take corresponding corrective action, and provide an annual statement of assurance regarding internal controls and financial systems. However, NMB does not have a mechanism for ensuring prompt resolution of findings, a key internal control. When asked how the agency monitors the status of recommendations or findings in its internal control reviews, NMB officials said it is assumed, based on management’s written response in the reports, that any weaknesses will be addressed and resolved by the date of the next review of that topic. However, that review may not occur for as many as 8 years. For example, the representation function was last reviewed in fiscal year 2005, and auditors began their next review of that topic in June 2013. In addition, NMB has not consistently provided management responses, which are written descriptions of the actions the agency plans to take to address deficiencies. For example, in the 2009-2010 report on internal controls over arbitration, NMB did not submit a management response to the five findings and recommendations in the report. In addition, NMB did not submit responses to findings and recommendations in the auditors’ management letters that accompanied NMB’s financial statement reports in fiscal years 2011 and 2012. It is also not clear that top NMB management officials are monitoring the resolution of findings and recommendations made by independent auditors. Within NMB, department heads are responsible for implementing the auditors’ recommendations. The Chief of Staff and the board do not assume operational responsibility for addressing the findings of independent auditors. In addition, it is not clear that NMB’s board members are given routine, formal reports on the status of findings or how they are resolved. 
While some NMB officials told us board members are briefed or given reports on the status of resolving findings, officials could not provide documentation, such as minutes of meetings in which audit findings or internal control reviews and their resolution were discussed. Officials told us that it is the agency's policy not to keep minutes of board meetings. Furthermore, NMB's auditors do not follow up on the status of recommendations in the internal control reviews until the date of the next review unless the recommendation is related to NMB's annual financial statement audit. As a result, some recommendations made by auditors to improve internal controls or NMB operations may not be addressed. In the 2008-2009 report on NMB's personnel and payroll, auditors found the same deficiency they identified in their fiscal year 2003 review on the same topic. In addition, auditors made the same recommendation to NMB in their management letters for fiscal years 2008, 2010, 2011, and 2012. NMB relies on information technology to carry out its mission. The agency uses computer systems and networks to host applications, files, email services, and web access, and to manage its records and financial information. During fiscal year 2013, NMB switched its network infrastructure, including its email, files, web access, and other applications, to a commercial vendor. Further, in May 2013, it switched its financial management system to the Bureau of the Public Debt's Administrative Resource Center. NMB also plans to bring its records management system into its cloud computing environment in fiscal year 2014. Federal laws and policies require federal agencies to implement key practices to effectively manage and secure their information systems and information, and protect the privacy of personal information they collect and use. For example, the Federal Information Security Management Act of 2002 (FISMA) requires federal agencies to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those operated on behalf of the agency by a contractor or another agency. Such a program includes eight key practices. To improve information security and reduce overall information technology operating costs for agencies, OMB also issued a memorandum instructing the heads of departments and agencies to implement common security settings for computers running Windows operating systems. In addition, the Privacy Act of 1974 and E-Government Act of 2002 describe, among other things, agency responsibilities with regard to protecting personally identifiable information. NMB has not fully implemented key information security and privacy practices (see table 3). According to agency officials, key practices were not followed because NMB has been in the process of transitioning its information technology network and systems. NMB plans to update and finalize all of its information security policies and procedures to reflect its new information technology environment. It also plans to conduct a test and evaluation of its information security controls to ensure that they are appropriately designed and operating effectively. However, the agency has not established timeframes or milestones for completing these actions.
Until NMB fully develops, documents, and implements an agencywide information security program, increased risk exists that the confidentiality, integrity, and availability of its information will be compromised. Finally, although NMB officials stated that the agency has taken steps to provide privacy protections for personal information accessed or held by NMB, they did not provide supporting documents to demonstrate how the agency plans to ensure the privacy of personal information that may be contained in its new computer environments. Until it establishes privacy policies, NMB will have limited assurance that the personal information it collects and uses is adequately protected. NMB has taken steps to improve its human capital program, such as increasing oversight of mediators and providing additional training opportunities to its staff. For example, in 1997, in response to recommendations made by the rail and air labor-management committees, NMB relocated all its mediators from the field to its headquarters in Washington, D.C. In 2000, NMB established individual development plans to improve staff training, and, in 2010, began providing free or low cost training to its staff through partnerships with two universities and other efforts. More specifically, NMB officials teach classes at those universities, or the agency provides office space for the universities to hold training sessions for labor relations practitioners in exchange for several NMB staff attending the training sessions at no charge. Despite these improvements, NMB’s actions are not guided by a strategic workforce plan, which is essential for every federal agency in that it addresses two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals; and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. Workforce planning is a key internal control, as agency management should ensure that skill needs are continually assessed and that the organization is able to obtain and maintain a workforce with the skills necessary to achieve organizational goals. Although agencies may take a variety of approaches to workforce planning, GAO has identified some principles that agencies should follow, including determining critical skills and competencies, and developing strategies for closing any gaps. NMB has not developed a strategic workforce plan that incorporates these key practices (see table 4). Although the agency indicated in 2009 that it would develop one as part of its last human capital plan, it has yet to do so. NMB’s lack of strategic workforce planning was also identified by OPM in its last evaluation of the agency’s human capital program in 2007. NMB has not determined the skills and competencies it needs to achieve current and future program results, nor has it systematically identified gaps or strategies to fill them. Instead, senior NMB officials said they identify needed skills and competencies on a case-by-case basis, such as when hiring to fill a specific vacancy. Officials added that NMB does not have difficulty hiring staff. However, some stakeholders identified, and NMB officials acknowledged, possible gaps in skills and competencies among NMB mediators, an occupation NMB has identified as critical to its mission. For example, several NMB officials and most of the labor and management stakeholders we interviewed said NMB staff, particularly mediators, lack rail industry experience. 
NMB officials confirmed that 10 of its 13 mediators come from the airline industry, 2 come from the railroad industry, and 1 does not have a background in either industry. NMB officials said it is difficult to recruit mediators from the rail industry because there are few rail employees willing to leave their railroad jobs, particularly because industry salaries and benefits are greater than what NMB can offer. NMB officials described a few strategies they have used to recruit more staff with rail experience through networking at industry conferences, but they do not have a formal recruiting strategy. In addition, NMB officials said they cross-train staff in both the rail and air industries and, in 2013, took mediators on a tour of a rail property to learn more about the rail industry. Similarly, an NMB official and some carrier officials we interviewed said most NMB mediators come from labor union, rather than management, backgrounds. NMB officials confirmed that 8 of its 13 mediators formerly held union labor relations positions while 4 held labor relations positions with carriers. (One mediator did not have a labor relations background with either a union or carrier.) NMB officials told us the agency values professional experience in rail or air labor relations over mediation experience because they have found that mediators need to understand the industries in order to be effective. However, two airline management officials said that having more mediators with union backgrounds than with management backgrounds could create the perception that the agency is not completely neutral in collective bargaining. Further, NMB has not employed strategies, such as succession planning or leadership development, to help close gaps in critical skills and competencies. Succession planning and leadership development were also identified as problems by OPM in its 2007 evaluation of NMB. NMB officials identified succession planning as the agency’s greatest human capital challenge: All senior managers are eligible to retire and all of the mediators are in their second careers and eligible to retire. However, NMB has not revised its succession plan since it was first drafted in 2006, even though it has since made organizational changes, such as reinstituting the Chief of Staff position in 2010. The Director of the Office of Administration said NMB’s small size and limited budget make it difficult to hire additional staff to succeed retirement-eligible senior staff. The director also told us the agency is taking some steps, such as establishing agreements with other agencies to provide administrative services and free up NMB staff time for developmental opportunities. Training also can be an important strategy to address gaps in skills or competencies, and NMB has taken steps to improve its training program. In July 2013, NMB revised its training policy, which establishes a process through which the agency identifies training priorities and needs. Specifically, the policy states that training for new staff or to correct performance deficiencies takes priority over training to improve satisfactory performance or to prepare employees for career advancement. Each department prepares an annual training plan that identifies training needs, in line with agency training priorities. Then each employee and his or her supervisor develop an individual development plan that identifies specific training opportunities to meet that employee’s needs. 
These individual development plans are approved by the department director. Employees are required to fill out evaluations after completing courses. Officials said training opportunities usually consist of industry conferences, on-the-job training, or training developed through partnerships with individual carriers, unions, or universities. However, NMB has not established minimum training requirements for all staff beyond training required of all government employees, such as information security training, or training required to maintain professional certifications, such as continuing legal education courses. OPM made several recommendations to NMB regarding training in its 2007 evaluation. According to OPM's fiscal year 2012 Federal Employee Viewpoint Survey, 85 percent of NMB staff said the agency's workforce has the job-relevant knowledge and skills necessary to accomplish organizational goals, but only 50 percent were satisfied with the training they received for their present job. Thirty-six percent of NMB employees were dissatisfied, compared with 24 percent at all small agencies combined. Although NMB officials said that mediators arrive at the agency with significant industry experience, stakeholders identified a need for improved training. Most of the stakeholders we interviewed said that the quality of NMB mediators varies. Most said mediators need to continually update their training to stay abreast of industry trends and effective mediation techniques. Similarly, the April 2010 report by a joint rail and air labor-management committee, the Dunlop II Committee, found that the level of initial and continuing training for mediators was inadequate, and that mediators would benefit from more standardized, comprehensive, and regular training, particularly in mediation skills. The committee noted that most mediators were formerly advocates, not neutral parties, and needed training in mediation skills. NMB has taken some steps to address concerns about mediator training. For example, two stakeholders we interviewed said mediators could use training to interpret and evaluate cost estimates provided by the parties as part of collective bargaining, and in July 2013, NMB officials told us that they brought in two experts to provide training on evaluating cost estimates. In addition, NMB officials said that the agency is developing a core curriculum for new mediators consisting of two courses on mediation and negotiation skills. The first 5-day course, on mediation, is now available, and officials said the negotiation skills course will be available starting in spring 2014. Finally, because NMB has not established a strategic workforce plan, the agency has not monitored and evaluated the results of its workforce planning efforts, including whether they contribute to accomplishing the agency's strategic goals. While NMB set human capital goals and objectives in its 2009 human capital plan, it has not tracked or evaluated its progress in meeting them and has not taken many of the actions detailed in the plan. For example, under a goal to "guide human capital decisions by using a data-driven, results-oriented planning and accountability system," NMB had a strategic objective to "develop an NMB accountability program based on OPM's accountability system." However, as of April 2013—4 years later—a senior official reported that NMB has not developed such an accountability system, citing other pressing priorities.
In its 2007 evaluation, OPM also noted that NMB did not have an accountability system that met OPM requirements.
NMB is partially following key practices we have identified as critical for agencies to manage their procurement functions, but we found some shortcomings (see table 5). NMB is following a key practice of appropriately placing the procurement function within the organization: The procurement function is at the same organizational level as other key mission offices, such as the Offices of Arbitration and Mediation, the primary internal customers for whom goods and services are acquired. Further, as Director of the Office of Administration, the senior procurement official also oversees the financial and information technology functions, thereby providing direct insight into other key internal functions involved in the procurement process. The placement of the office facilitates proper management support and visibility within the organization to help the agency meet its overall mission and needs—key principles that we previously identified. In addition, NMB involves its internal stakeholders in the procurement process. For example, NMB's procurement procedures call for a multidisciplinary approach in that relevant NMB departments and subject matter experts are to be involved in making purchase requests, identifying requirements, and reviewing vendors' proposals. While NMB has established policies and procedures to oversee the procurement process, we and others have identified some weaknesses in following procedures and maintaining appropriate documentation. NMB's independent auditors reviewed the internal controls over NMB's procurement activities in fiscal year 2011 and identified some weaknesses. For example, auditors recommended that NMB improve its process for organizing and maintaining records because they found that NMB was unable to provide certain documents for about 25 percent of its purchases under $3,000. NMB officials said they have made changes to the agency's processes to maintain greater control over documents that support its procurement decisions.
We found that NMB did not follow its own policies and procedures in purchasing laptops and iPads. NMB spent about $110,000 to purchase 55 laptop computers for its staff in September 2012. Before making the purchase, NMB solicited quotations from three vendors and received quotations from two of the vendors for the laptops. However, NMB chose a quotation representing technical specifications different from those originally provided to vendors. When we asked for the business case for purchasing these laptops, an NMB official drafted a memo, dated March 2013, stating that the agency chose the more expensive laptops because of features such as durability, touch-screen capability, and a design that switches easily between laptop and tablet. This official further stated that the laptops NMB did not select were largely viewed as inferior to the laptops NMB selected. However, when requesting quotations, NMB did not include these criteria—touch-screen capability and a convertible laptop and tablet design—in its technical specifications. Rather, the specifications it provided to vendors were those of a more traditional laptop. The vendor whose quotation NMB selected submitted two quotations at NMB's request—one meeting NMB's original specifications and one for the convertible laptop and tablet NMB ultimately selected.
There was no documentation that NMB notified the other vendors of its revised specifications and provided these vendors with an opportunity to submit additional quotations.
NMB also purchased iPads in October 2012 for half its staff over the written objection of the individual who serves as Chief Financial Officer, senior procurement official, and Chief Information Officer. However, according to federal information technology management policy and internal NMB documents, the Chief Information Officer should have direct authority over developing and implementing information technology policy, including information technology investments. In a memo, this official said that purchasing the iPads, in addition to new laptops that also function as tablet computers and new smartphones, was not in keeping with a November 2011 Executive Order to limit the number of devices provided to employees. Instead of NMB purchasing the iPads, this official recommended that employees make use of NMB's policy allowing employees to use their own information technology devices. In total, NMB spent more than $130,000 to purchase 25 iPads and 55 laptops, and to upgrade 22 smartphones for agency staff. The devices have many common features, calling into question the necessity for having all three (see fig. 4). We calculated that, as of July 2013, NMB had issued all three devices to 19 staff members, including the 2 board members—about 40 percent of NMB's workforce. NMB also did not follow its own procurement procedures in purchasing the iPads. Based on the documents NMB provided us, the agency did not identify requirements before purchasing the iPads, as its procedures require. No statement of requirements was prepared in accordance with NMB procedures, and a senior NMB official told us the purchase was made without outlining specific requirements or assessing whether the iPad, or a tablet device by another manufacturer, would meet those requirements. After we requested documentation of the agency's business case, the Chief of Staff, who approved the purchase request for the iPads submitted by a board member in September 2012, drafted a memo dated March 2013 stating that the agency's move to a cloud computing environment and the functionality of the iPads, in addition to their light weight and relatively low cost, make them ideal for use at the bargaining table and in training. The memo also stated that, prior to the purchase, many mediators were using their own iPads.
In addition, NMB did not seek competition in making the iPad purchase. NMB's procurement procedures call for full and open competition if the agency does not use the General Services Administration Federal Supply Schedules to select vendors in making purchases. However, NMB did not take steps to consult the General Services Administration schedule or use full and open competition. NMB officials explained that they initially wanted to use the government's primary telecommunications provider to obtain these devices but they learned that they could not purchase the devices without also purchasing the data plan offered by that provider. Officials then contacted the iPad's manufacturer, who referred them to a federal reseller but did not provide a list of additional resellers. NMB also did not maintain documentation for this purchase that explained the absence of competition, as called for in the Federal Acquisition Regulation for procurements under $150,000.
At the time NMB purchased the laptops, iPads, and smartphones, it lacked a mechanism for ensuring that its procurement procedures were consistently adhered to, such as a checklist for taking required steps in the procurement process or maintaining proper documentation. Internal control standards specify that agencies should implement mechanisms that enforce management's directives, which would include, for example, adhering to procedures and requirements for procuring goods and services. Such actions are integral to ensuring effective stewardship of government resources and obtaining the best possible value for the government. In July 2013, NMB officials told us they had recently created such a checklist as part of the agency's transition to a new procurement system. This checklist includes required steps, such as conducting market research and checking the General Services Administration schedule for available equipment and vendors, and requires staff to include documentation of such steps in the contract file. The checklist also requires staff to list three vendors and, if the vendor with the lowest bid is not selected, provide a rationale and additional documentation on the decisions made.
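To illustrate how such a checklist can be enforced mechanically, the following is a minimal sketch in Python. The record fields and rules are simplified stand-ins for the steps described above (documented market research, a General Services Administration schedule check, three vendors, and a rationale when the lowest bid is not selected); they are illustrative, not NMB's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class PurchaseRecord:
    """Hypothetical contract-file record for a small purchase."""
    market_research_documented: bool = False
    gsa_schedule_checked: bool = False
    vendors_solicited: list = field(default_factory=list)
    selected_vendor: str = ""
    lowest_bid_vendor: str = ""
    selection_rationale: str = ""  # required if the lowest bid is not chosen

def checklist_findings(record):
    """Return the checklist steps missing from the contract file."""
    findings = []
    if not record.market_research_documented:
        findings.append("market research not documented")
    if not record.gsa_schedule_checked:
        findings.append("GSA schedule not consulted")
    if len(record.vendors_solicited) < 3:
        findings.append("fewer than three vendors listed")
    if (record.selected_vendor != record.lowest_bid_vendor
            and not record.selection_rationale):
        findings.append("lowest bid not selected and no rationale documented")
    return findings

# A purchase resembling the iPad example above: one vendor, no documentation.
record = PurchaseRecord(vendors_solicited=["Federal Reseller A"],
                        selected_vendor="Federal Reseller A")
print(checklist_findings(record))
```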
Although our review focused on internal controls in specific management areas at NMB, we identified several broader management issues as well. OMB and OPM provide limited budgetary and human capital oversight of NMB. OMB reviews NMB's budget request and strategic plans, among other documents, but it has not focused on NMB management issues. According to OMB officials, OMB does not provide the same level of oversight for organizations with small budgets and staff as it does for larger organizations. The same is true for OPM. Until 2012, OPM annually reviewed and provided feedback on NMB's human capital management report. However, beginning in 2013, OPM officials told us that OPM no longer requires small agencies to submit their human capital management reports for review. OPM last audited NMB's human capital program in 2007. OPM followed up with NMB and, as of 2008, considered that evaluation closed. It has no plans for future evaluations. However, OPM surveys NMB's employees as part of its Federal Employee Viewpoint Survey of federal agencies, which captures NMB employees' views of the agency's strengths and challenges. Although NMB contracts for annual financial statement audits and rotating internal control reviews of program and management areas, the agency does not have continual, internal mechanisms in place to monitor and review its operations and programs, such as an audit committee or internal audit function, as recommended by internal control standards. In addition, top NMB management officials do not take operational responsibility for addressing the findings of the independent auditors. As a result, problems identified in those auditors' reviews of program and management areas may not be revisited for 5 or more years, when that topic again comes up for review. NMB does not have a statutory Inspector General (IG) to fill this oversight role. In comparison, two other federal agencies with similar missions, the National Labor Relations Board (NLRB) and the Federal Labor Relations Authority (FLRA), have statutory IGs. NMB's independent auditors said they can perform only the reviews NMB hires them to do. In contrast, an IG can target any area of an agency's operations for an audit, inspection, or investigation. The House version of the FAA Modernization and Reform Act of 2012, H.R. 658, contained a provision that would have authorized the IG of the Department of Transportation to review NMB operations, issue findings and recommendations to address any identified problems, and keep the chairman of the NMB Board and Congress informed on agency efforts to address them. The bill also would have authorized $125,000 in Department of Transportation appropriations for each of fiscal years 2011 through 2014 to cover these services. However, these provisions were removed from the bill during conference and therefore were not enacted.
NMB has long struggled with a large volume of railroad grievance arbitration cases related to minor disputes. At the beginning of fiscal year 2000, there were 11,237 pending arbitration cases. More recently, NMB has taken steps to reduce the number of pending cases. For example, it now performs annual checks to identify cases the parties no longer want heard and remove them from the list of pending work. In addition, NMB has been exploring using alternatives to arbitration to resolve minor disputes, such as grievance mediation. In fiscal years 2008, 2009, and 2010, Congress provided NMB with supplemental appropriations ($657,000, $560,000, and $29,000, respectively) to reduce the arbitration backlog. By the beginning of fiscal year 2013, NMB had reduced the number of pending cases to 2,084. However, rail carriers and unions filed more than 3,500 new cases in fiscal year 2012. While stakeholders told us NMB has done a good job reducing the number of these pending rail cases, several also said they are unable to get an arbitrator assigned in a timely manner. Grievance arbitration cases in the rail industry can involve a wide range of grievances, such as wrongful dismissals, unfair labor practices, and rights to additional pay. For example, we reviewed several arbitration decisions issued in 2012, including a “time claim” case that involved a group of employees requesting a day's pay because their employer had violated a term of their collective bargaining agreement. Another case involved an employee who believed she was wrongfully suspended because of an alleged violation of the employer's code of conduct policy. Although NMB has some information from the parties on the types of claims filed, officials said they do not track or analyze these data, nor do they prioritize cases by type or level of urgency. NMB assigns arbitrators to cases in the order the requests are received. Since NMB apportions its annual funding for arbitrators into 12 monthly allocations, once NMB officials have obligated all funds for a month, they will not fulfill requests to assign arbitrators to additional cases. As a result, according to rail union and carrier officials we interviewed, some disputes are not heard in a timely manner. For example, they said cases involving a dispute over whether a rail employee was improperly fired are sometimes delayed because an arbitrator is not assigned immediately. Several former board members and a senior NMB official told us that the large number of rail grievance arbitration cases submitted to NMB occurs because the parties do not bear any of these costs and so lack a financial incentive to file only claims with merit. Unique among labor relations agencies in the federal government, NMB uses federal funding to pay the fees and travel expenses for an arbitrator to resolve minor disputes between a specific employer and a union, in accordance with the requirements of the RLA.
In the airline industry, as with all other private sector industries and the federal government, the parties pay for the arbitrator and all other arbitration expenses. In 2004, to encourage more efficient use of its resources and citing its authority under the RLA, NMB issued a notice of proposed rulemaking to, among other things, establish application fees for certain grievance arbitration services, including, for example, a $50 fee for certification of an arbitrator to a board. The preamble to the proposed rule noted that these fees, which represented only a small portion of the actual costs of providing the respective services, would help reduce the large numbers of pending cases by encouraging parties to file and proceed with only those cases that have merit and to consolidate as many grievances as possible. However, NMB did not cite any data on the numbers and types of rail arbitration cases that compose the backlog as justification for its proposal. In January 2005, at an NMB hearing on the proposed rule, an organization representing rail carriers expressed support for fees, stating that the current system imposes few restraints on the pursuit of any grievance, regardless of its merit. The organization said that requiring parties in the rail industry, like those in all other industries, to internalize the costs of arbitration would result in a more effective and efficient system. However, numerous union stakeholders voiced opposition to fees, asserting that NMB is required by the RLA to cover these costs and that NMB does not have authority under the Act to charge fees. Union representatives also said that when the parties jointly crafted the RLA, the unions gave up their right to strike over minor grievances in exchange for government-financed arbitration. Among other concerns, opponents said fees would discourage unions and individual employees from pursuing valid arbitration of minor disputes because costs are more difficult for them to bear than they are for carriers. In addition, more than 125 members of Congress signed a letter urging NMB to reconsider the proposed rule. NMB did not issue a final rule regarding fees.
Although NMB did not finalize its proposed rule to establish fees, the agency has taken other actions to manage its grievance arbitration workload. In a memo issued June 10, 2013, during the course of our review, NMB's Chief of Staff discussed month-to-month delays for rail arbitration cases and said cases that are not assigned an arbitrator in a given month for lack of funds no longer have to be re-filed the following month. Instead, those cases will now go to the top of a wait list for the next month. The memorandum stated that this change, assuming the parties do not flood the system with requests, should mean that no case will go more than 2 months without being assigned to an arbitrator.
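The assignment mechanism the memo describes amounts to a first-in, first-out queue with a monthly spending cap. The sketch below illustrates it; the dollar figures and case identifiers are hypothetical, and the sketch assumes a single fixed cost per case, which the report does not state.

```python
from collections import deque

ARBITRATOR_COST = 5_000      # hypothetical cost to fund one case
MONTHLY_ALLOCATION = 25_000  # hypothetical one-twelfth of annual funds

def run_month(new_requests, wait_list):
    """Assign arbitrators in the order requests were received until the
    month's funds are obligated; unassigned cases carry over to the top
    of the next month's list, as the June 2013 memo describes."""
    wait_list.extend(new_requests)  # carryovers are already at the front
    remaining = MONTHLY_ALLOCATION
    assigned = []
    while wait_list and remaining >= ARBITRATOR_COST:
        assigned.append(wait_list.popleft())
        remaining -= ARBITRATOR_COST
    return assigned

wait_list = deque()
for month, requests in [("Month 1", [f"case-{i}" for i in range(1, 9)]),
                        ("Month 2", [f"case-{i}" for i in range(9, 13)])]:
    assigned = run_month(requests, wait_list)
    print(month, "assigned:", assigned, "| carried over:", list(wait_list))
```

Running the sketch shows the three cases left unfunded in the first month being assigned ahead of the second month's new requests, the behavior the memo intends.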
Both NMB officials and stakeholders said the agency has addressed challenges presented by railroad and airline industry changes, such as mergers, in conducting its representation work. Because union representation of rail and air employees is based on their inclusion in a craft or class systemwide and not by worksite, some recent elections overseen by NMB have involved tens of thousands of employees in multiple locations. For example, a 2010 merger of United and Continental airlines prompted an election in 2011 involving 24,000 eligible flight attendants. In comparison, elections at NLRB typically involve a single worksite and substantially smaller numbers of voters. In fiscal year 2012, more than 88,000 rail and air employees were involved in NMB representation elections. NMB officials said mergers and bankruptcies in recent years, primarily in the airline industry, have taxed agency resources. While NMB can set longer timeframes for large elections, officials said they must meet those deadlines with the same number of NMB staff. Senior officials in NMB's Office of Legal Affairs said they borrow other NMB employees, such as mediators, to assist with large elections. At least one union official expressed concern that when mediators are drafted to assist with representation, their work in mediating collective bargaining agreements can be delayed. While the size of elections may have increased, the number of representation cases handled annually by NMB's Office of Legal Affairs has remained constant at about 45 cases for each of the last 10 years.
NMB has also made changes in its representation work as a result of evolving technology, but some key steps remain labor intensive. NMB officials said the 2007 move to online voting was challenging to implement, but stakeholders were supportive. NMB instituted online voting after several years of planning. NMB also implemented e-filing for election applications and other required documents in September 2009. However, authorization cards submitted by unions to demonstrate sufficient interest to hold an election are still authenticated by NMB staff by hand. Staff must compare the signature on each authorization card to a signature sample provided by the carrier. One NMB attorney told us she has had instances in which she checked more than 4,000 cards, and NMB officials said all NMB staff check cards when necessary. NMB has also faced challenges in its investigations of alleged interference, influence, or coercion as communications about elections increasingly occur via the Internet. The use of social media, hyperlinks to online voting websites, and other evolving communication tools have expanded the agency's investigative responsibilities. An NMB attorney said one investigation occurred after airline employees posted photos of themselves holding confidential voting materials on Facebook. Another case required an investigation of potential coercion when employees hosted voting parties that were shared on YouTube. Sharing voting materials and voting in groups violates federal law, NMB officials said. In February 2008, NMB issued a policy prohibiting hyperlinks to an online voting site from any website other than its own. For example, in 2011, while investigating an allegation of interference after an airline merger election, NMB found that posting a hyperlink on a union website to the voting website might constitute interference because, while it did not compromise the voting process, it had the potential to reveal a voter's identity through his or her Internet identification. As a result, the union's win in the election stood but NMB shortened the timeframe for holding another election from 2 years to 18 months.
In 2010, NMB changed its rules for certifying a union in an election. Several stakeholders told us, and a former NMB board member wrote at the time, that the process used to make this change caused disagreement and harmed the perception of NMB as a nonpartisan arbiter. Prior to the change, which was effective July 1, 2010, employees who did not vote in an election were counted as having cast a vote against certifying the union under NMB regulations.
After the change, only votes cast are counted in determining whether a union has achieved the majority needed to be certified as the employee representative. Several stakeholders said NMB acted hastily and without sufficient stakeholder input in making a change that overturned 75 years of precedent. NMB received almost 25,000 comments in response to its notice of proposed rulemaking. NMB held a public hearing on December 7, 2009, on the proposed rule. While a union official said that NMB used a deliberative process for making the rule change, one management stakeholder said NMB should have alerted stakeholders that such a change was being considered far in advance of a hearing. In addition, an airline carrier official said that because the change was suggested in a letter to NMB from one large union, and this union's involvement was not initially disclosed by NMB officials, some stakeholders were suspicious about NMB's motives for changing the rule. In the past, according to another official from a regional air carrier, NMB involved stakeholders before making significant changes. For example, the official said NMB consulted with stakeholders over several years before moving to online voting. A railroad official said the process was very damaging to NMB's reputation for neutrality. However, another railroad official said trust in NMB remains intact. Despite concerns, NMB officials reported that the percentage of elections resulting in certification of a union has remained relatively constant in the years before and after NMB's 2010 rulemaking. NMB data for the 11 fiscal years (2000 to 2010) before the rule change show that, on average, about 61 percent of all elections resulted in certification of a union. NMB data for the 3 fiscal years after the change (2011 to 2013) show that, on average, about 62 percent of all elections resulted in certification of a union. At the time of our report, figures for fiscal year 2013 were incomplete. In addition, these percentages are not weighted to account for the size of the elections, nor are they controlled for other factors that can affect the outcome of an election, such as job market conditions.
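The weighting caveat can be made concrete with a short sketch contrasting a simple average of annual percentages with a rate that pools all elections. The annual tallies below are invented for illustration; only the roughly 61 and 62 percent averages above come from NMB data.

```python
# Hypothetical per-year tallies: (elections held, certifications).
years = [(40, 24), (50, 32), (30, 18)]

# Unweighted: average the annual percentages, as in the NMB figures above.
unweighted = sum(cert / held for held, cert in years) / len(years)

# Pooled: weight each year by the number of elections it held. A fuller
# weighting by election size (number of eligible voters) would require
# election-level data that annual totals do not provide.
pooled = sum(cert for _, cert in years) / sum(held for held, _ in years)

print(f"unweighted average: {unweighted:.1%}")  # 61.3%
print(f"pooled rate:        {pooled:.1%}")      # 61.7%
```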
Some stakeholders also wanted NMB, as part of the 2010 rulemaking, to clarify the process for decertifying, or removing, a union representative. The RLA does not specify a decertification process, and NMB offers minimal guidance on its website on steps to remove an employee representative. In its preamble to the 2010 rule, NMB noted that, while not as direct as some commenters might like, the existing election procedures allow employees to “rid themselves of a representative,” and that the 2010 change further gives these employees the opportunity to affirmatively cast a ballot for no representation. However, an airline carrier official and a former board member said the process in place remains ineffective and highly confusing. For example, a ballot currently may contain two options that are each a vote for no representation: “no representative,” and an applicant who is on the ballot as a “straw man” who intends, if elected, to step down so as to remove representation for the craft or class. This applicant seeking removal of representation has to collect sufficient authorization cards to prompt an election in order for the craft or class to make this change. A former NMB board member said there is the potential for votes opposed to union representation to be split between votes for “no representative” and votes for a straw man. The result is that these vote counts will not be consolidated in favor of decertification, which can then happen only if either the “no representative” option or the straw man receives a majority of the votes cast. When asked whether NMB has developed written guidance to walk parties through the process should they wish to drop union representation, attorneys in the Office of Legal Affairs said adequate guidance is available on NMB's website and in the NMB Representation Manual.
According to stakeholders we interviewed, NMB improved its outreach when issuing a more recent rule in 2012. The final rule, published in December 2012, made some changes to NMB's representation process to conform with requirements in the FAA Modernization and Reform Act of 2012. For example, the Act amended the RLA to require that an application filed as part of a merger and requesting that an organization or individual be certified as representative be supported by not less than 50 percent of the employees in the craft or class. Previously, the NMB Representation Manual required a 35-percent showing when a group of employees not already represented by a union wished to have an election. In its notice of proposed rulemaking, NMB explicitly sought comments on whether the 50-percent requirement applied to mergers. One stakeholder we interviewed said NMB did a better job of reaching out to stakeholders in the most recent rulemaking, and an airline carrier official characterized this as a positive “returning to form” for NMB.
The National Mediation Board is a small agency with a vital role in facilitating labor relations and helping avoid work stoppages in two key transportation sectors: the railroads and airlines. To better fulfill this role, NMB has in recent years instituted organizational changes, technology upgrades, and several new management practices. However, its initiatives are not guided by a robust strategic planning process, which—if done well—provides the basis for everything an organization does. Without such a process that includes mechanisms for obtaining stakeholder and congressional input, NMB lacks assurance that its limited resources are effectively targeted toward the highest priorities. Further, without performance goals needed to gauge agency progress, NMB, Congress, and stakeholders lack the information needed to improve management practices and better link resources to results at the agency. Implementing cost-effective and appropriate internal controls can help an agency achieve results and minimize operational problems. While NMB has controls in key management areas, such as financial management, there are challenges. For example, without a formal mechanism to track and promptly resolve deficiencies identified in reviews of its program and management areas, NMB misses opportunities to improve performance and mitigate risks. NMB's failure to follow federal requirements for recent upgrades of its information technology systems and equipment means NMB cannot ensure the best use of its limited resources or the protection of sensitive information. A lack of strategic workforce planning means NMB does not have assurance that its staff will continue to possess the skills necessary to carry out the agency's mission and receive the training they require, particularly to meet challenges presented by the impending retirements of senior staff. NMB faces additional management issues.
There is minimal internal programmatic oversight of its activities, for example, by an Inspector General, to identify management challenges and to hold it accountable to Congress and the public. NMB also struggles to efficiently manage grievance cases in the rail industry, partly because the parties do not pay for the arbitrator and thus lack an incentive to file and pursue only cases with merit. NMB has considered ways to increase efficiency, such as charging application fees for its arbitration services, and has undertaken other efforts, such as conducting reviews of pending cases. However, it lacks data on the types of grievances that are filed to inform its deliberations on how to more efficiently manage the process. If NMB does not address this demand on its limited resources, it may again face a growing and unmanageable backlog of arbitration cases.
To provide for independent audit and investigative oversight of NMB, Congress should consider authorizing an appropriate federal agency's Office of Inspector General to provide such oversight.
In order to improve NMB's planning and make the most effective use of its limited resources, we recommend that the Chairman of the National Mediation Board take the following seven actions:
1. Develop a formal strategic planning process to fully implement key required elements of strategic planning, including a formal process to obtain congressional and stakeholder input.
2. Develop, and include in its performance plan, performance goals and measures that contain required elements to demonstrate results.
3. Develop and implement a formal mechanism to ensure the prompt resolution of findings and recommendations by independent auditors, including clearly assigning responsibility for this follow-up to agency management.
4. Develop and fully implement key components of an information security program in accordance with FISMA.
5. Establish a privacy program that includes conducting privacy impact assessments and issuing system of records notices for systems that contain personally identifiable information.
6. Develop a strategic workforce plan that (1) involves input from top management, employees, and other stakeholders; (2) identifies critical skills and competencies needed by NMB; (3) identifies strategies, such as training, to address any gaps; and (4) provides for cost-effective evaluations of these strategic workforce planning efforts. This plan should also address succession for the significant proportion of NMB staff and senior managers who are eligible to retire in the next few years.
7. In order to better inform its decisions about managing the rail grievance arbitration process, including addressing the backlog of cases, collect and analyze data on the types of grievances filed and their disposition. NMB should use these data to improve the efficiency of the arbitration process and consider, as part of this effort, whether to establish fees for arbitration services. If NMB determines that the establishment of fees would improve the efficiency of the arbitration process, it should impose such fees or seek legislative authority to do so, as necessary.
We provided a draft of this product to the National Mediation Board (NMB) for comment. We also shared a draft with the Office of Management and Budget (OMB) and Office of Personnel Management (OPM). OMB and OPM had no comments.
In its written comments, reproduced in appendix II, NMB noted that it will review and address all of the recommendations and discussed ways that the agency plans to address them. NMB also provided technical comments that were incorporated, as appropriate. Although our matter for congressional consideration was not directed to NMB, the agency suggested that having another federal agency’s Office of Inspector General provide oversight of NMB would be redundant with GAO’s biennial audits and evaluations, which were mandated by the FAA Modernization and Reform Act of 2012. As we have previously reported, GAO and agency Inspectors General (IGs) have complementary, rather than duplicative, roles. The IGs have been on the front line of combating fraud, waste, and abuse within their respective agencies, and their work has generally concentrated on audits and investigations of specific program-related issues of immediate concern. We continue to believe that, in addition to the periodic oversight provided by GAO and the annual audits of NMB’s financial statements by independent public accountants, an IG office assigned with the responsibility of providing ongoing audits and investigations of NMB and its operations would result in more effective oversight. Regarding our first recommendation to establish a formal strategic planning process, including a formal process to obtain congressional and stakeholder input, NMB noted that it would replace its current strategic plan by February 2014 using guidelines we outlined in this report. These guidelines include soliciting comments on a draft strategic plan from Congress and stakeholders. In its comments, NMB asserted it currently has formal avenues to obtain stakeholder input and listed several stakeholder groups and industry conferences. While we acknowledge in our report that NMB consults with such groups, representatives of key groups told us they are not currently involved in NMB’s strategic planning process. Formally involving stakeholders and Congress in the strategic planning process is critical for ensuring that NMB’s efforts and resources are targeted to the highest priorities. Moreover, such consultations should occur during the development of a strategic plan, not after it is developed. We have long noted the importance of the executive branch considering Congress a partner in shaping goals at the outset. If an agency waits to consult with relevant congressional stakeholders until a strategic plan has been substantially drafted and fully vetted, it foregoes important opportunities to learn about and address early on specific concerns that could be critical to successful implementation of the plan. NMB noted, with respect to our second recommendation to develop performance goals and measures, that it will include such goals and measures in its new February 2014 strategic plan. It is also important for NMB to include these goals and measures in its annual performance plans, as required by law, which can be consolidated with the agency’s congressional budget submissions or annual performance reports, per OMB guidance. These documents are updated on an annual basis—while strategic plans are updated every 4 years—and allow for greater transparency in reporting NMB’s progress toward meeting its goals. 
NMB also indicated it will soon develop and publish a plan to address our third recommendation to establish a mechanism to ensure prompt resolution of findings and recommendations by independent auditors and clearly assign responsibility for follow-up to agency management. Regarding recommendation four to develop and fully implement components of an information security program, NMB stated that it was meeting its information security requirements through the Federal Risk and Authorization Management Program (FedRAMP) and that the program is compliant with the Federal Information Security Management Act of 2002 and based on National Institute of Standards and Technology guidelines. NMB also discussed its plans to create security standards and stated that it plans to be fully FedRAMP compliant by June 2014. We acknowledge in our report some of the actions NMB has taken to comply with federal information security program requirements, but we also noted weaknesses in the implementation of these requirements. While the FedRAMP assessment process is compliant with federal information security program requirements for those FedRAMP services NMB is using, that does not diminish NMB's responsibility for developing and implementing its own information security program, including its own requirements. For example, NMB should ensure that it completes its draft policies and procedures, provides information security training to its users and contractors, and updates its disaster recovery and continuity of operations plans and procedures. We continue to believe that our recommendation to develop and fully implement components of an information security program is valid.
In response to recommendation five regarding the establishment of a privacy program, NMB stated that it holds very little personally identifiable information and that it has contracted, or is in the process of contracting, with other agencies to handle virtually all of NMB's personally identifiable information. However, as we noted in our report, NMB has not developed policies or procedures that discuss privacy protections for the personally identifiable information it currently holds, and does not comply with federal requirements to conduct privacy impact assessments and issue system of records notices if deemed necessary. Thus we believe our recommendation is still valid.
In commenting, NMB did not identify any specific actions it plans to take to address our sixth recommendation to develop a strategic workforce plan. NMB stated that the board and Chief of Staff regularly review staffing and personnel issues and plan for succession, but that the agency's relatively small workforce limits its ability to engage in workforce planning. NMB also discussed its training process, which we describe in detail in the report, and said no request for training has been refused in the past 3 fiscal years. We continue to believe a strategic workforce plan is essential for every federal agency, regardless of its size. Such a plan helps ensure an organization's human capital program is aligned with its mission and programmatic goals, and that it has long-term strategies for acquiring, developing, and retaining staff to achieve those goals.
Finally, in commenting on recommendation seven to analyze data on the types and disposition of rail grievance arbitration cases to better inform its management of those processes, NMB officials stated that they plan to collect additional data on the types of cases they receive from all arbitration boards and make the information available on the agency's website. We reiterate the importance of analyzing this information in order to better manage this process. NMB also cited its efforts to reduce the backlog, noting that the number of arbitration cases pending at the end of fiscal year 2012 was the lowest in 5 years. However, as we note in the report, there were 2,084 claims pending at the beginning of fiscal year 2013, which is still significant, and more than 3,500 new claims were filed in fiscal year 2012. We encourage NMB to maintain its focus on managing the arbitration process and continue to consider options, such as a fee structure, for achieving a more efficient use of resources.
We are sending copies of this report to the Chairman of NMB and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
To conduct our work, we assessed how the National Mediation Board (NMB) carried out selected key management functions—strategic planning and performance measurement, financial management, information technology, human capital, and procurement—that we have found are critical to creating and sustaining high-performing organizations. We also assessed how NMB is addressing challenges in one of its three program areas—oversight of union representation elections. Most of these areas were specifically identified in the FAA Modernization and Reform Act of 2012 as required subjects of our review. We also selected strategic planning and performance measurement for our review because we have found that an agency's strategic planning effort is the most important element in results-oriented management. To perform our work, we reviewed relevant federal laws and regulations and key NMB documents, such as the agency's most recent strategic plan and annual report; policies and procedures for NMB's three program areas (mediation, arbitration, and representation); the board member briefing book; and delegation orders for duties of board and staff for fiscal years 1990 through 2013. We also assessed NMB's management plans, policies, and practices in financial management, information management and security, human capital, and procurement using GAO's Standards for Internal Control in the Federal Government and other criteria developed by GAO in prior work for these management areas, as described later in this appendix. We identified key practices from these criteria and assessed whether NMB is following these practices (NMB is taking appropriate actions and has a formal plan, policy, or other document), partially following them (NMB is taking some actions but does not have a formal plan or policy and/or some additional steps must be taken to consider this practice implemented), or minimally following them (NMB is taking little or no action to address this particular practice).
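This three-level scale can be read as a simple decision rule. The sketch below is an illustrative encoding of the criteria just described; the parameter names are ours, not part of the assessment instrument itself.

```python
def rate_practice(taking_appropriate_actions: bool,
                  has_formal_plan_or_policy: bool,
                  taking_some_actions: bool) -> str:
    """Map the assessment criteria described above to the three levels."""
    if taking_appropriate_actions and has_formal_plan_or_policy:
        return "following"
    if taking_some_actions:
        return "partially following"
    return "minimally following"

# An agency taking some actions but lacking a formal plan or policy.
print(rate_practice(taking_appropriate_actions=False,
                    has_formal_plan_or_policy=False,
                    taking_some_actions=True))  # partially following
```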
We also interviewed officials at the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) to determine how these agencies provide oversight and guidance to NMB, as well as to obtain officials' perspectives on NMB's management strengths and challenges. In cases where other entities had conducted recent work in any of the management functions we reviewed—such as financial management—we did not duplicate that work. For example, we obtained the nine most recent financial statement reports NMB provided to us, but we did not independently assess the findings in the audits. In addition, we reviewed but did not duplicate the work performed by NMB's independent auditors in the six most recently issued internal control reviews of NMB program areas:
Report on Review of Internal Controls Over Representation Functions (fiscal year 2005)
Report on Review of Internal Controls Over Alternative Dispute Resolution (ADR) Services (fiscal year 2006)
Report on Review of Internal Controls Over Mediation Services (October 2006-September 2007)
Report on Review of Internal Controls Over Personnel/Payroll (April 2008-March 2009)
Report on Review of Internal Controls Over Arbitration Services Functions (April 2009-March 2010)
Report on Review of Internal Controls Over Procurement (October 2010-September 2011)
To gain additional perspectives on NMB's program and management functions, we interviewed officials at the two primary federal labor relations agencies that cover labor relations in the private sector. We interviewed senior officials from the Federal Mediation & Conciliation Service to obtain information on its strategic planning and performance measurement efforts, human capital and training practices, and its role in mediation and arbitration. We also interviewed senior officials from the National Labor Relations Board to obtain information on strategic planning and performance measurement, human capital practices, and its role in overseeing union representation elections. We interviewed the chairman of a third labor relations agency—the Federal Labor Relations Authority, which covers labor relations in the federal government—but the primary purpose of the interview was to obtain this individual's perspective as a former NMB board member. In addition, we conducted in-depth interviews with current and former NMB officials, including all senior managers (Chief of Staff, General Counsel, and the directors of the Office of Administration, Arbitration Services, and Mediation and ADR Services); current board members; and all former board members who served from 2000 to 2012. We interviewed several rail and air industry experts, identified by issue area experts within GAO and from our literature review. We also interviewed representatives from key rail and air labor and management groups, including Airlines for America, Regional Airline Association, National Railway Labor Conference, AFL-CIO Transportation Trades Department and affiliated rail and air unions, and the International Brotherhood of Teamsters. In addition, we interviewed representatives from the National Association of Railroad Referees, an association representing railroad arbitrators. We also interviewed members of NMB informal advisory groups, including the Dunlop II Committee and Section 3 Subcommittee, which made recommendations to NMB on mediation and arbitration issues, respectively.
Finally, to gain a better understanding of the types of rail grievance arbitration cases that NMB funds, we reviewed a judgmental sample of several arbitration cases available on NMB's website that were decided by arbitration boards in calendar year 2012. We conducted this performance audit from September 2012 through December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We reviewed NMB's most recent 5-year strategic plan (fiscal years 2011-2016), its most recent annual Performance and Accountability Report (fiscal year 2012), and fiscal year 2014 congressional budget justification documents. We also compared NMB's strategic planning and performance planning and reporting practices with key requirements in the Government Performance and Results Act of 1993 (GPRA), the GPRA Modernization Act of 2010 (GPRAMA), and OMB performance guidance for federal agencies. In addition to these federal requirements, we assessed NMB's plans and practices against leading practices identified in prior GAO work. We also obtained NMB's financial statement audits for fiscal years 2004 to 2012, auditors' management letters to NMB for fiscal years 2008, 2010, 2011, and 2012, and the six most recently issued internal control reviews of NMB program and management areas, as discussed earlier. We also interviewed NMB's independent auditors about their processes, findings, and recommendations for NMB. We did not independently assess the findings in these financial statement audits and internal control reviews.
We interviewed NMB officials responsible for the overall planning and management of NMB's information management, security, and privacy. We collected and assessed information on NMB's information systems, including its plans, policies, and practices, using the Federal Information Security Management Act, guidelines developed by OMB, guidelines developed by the National Institute of Standards and Technology, the Privacy Act, and the E-Government Act. We also randomly selected and tested NMB user workstations for compliance with U.S. Government Configuration Baseline settings, randomly selected and reviewed NMB-issued smartphones and tablets for mobile device security settings, and reviewed NMB cloud computing security settings within its new cloud environment.
We reviewed NMB's most recent human capital plans, policies, and key documents, including NMB's 2011 human capital management report to OPM, its 2009 human capital plan, and its 2006 succession plan. We also obtained information on the training NMB staff received in fiscal years 2011 and 2012. We reviewed OPM's fiscal year 2007 audit of NMB's human capital program, its most recent assessment (2011) of NMB's human capital management report, and the results of its 2012 Federal Employee Viewpoint Survey of NMB employees. We also reviewed data regarding NMB's human capital strengths (defined by OPM as the 10 survey items with the highest percent of positive responses) and challenges (the 10 survey items with the highest percent of negative responses).
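As defined, the strengths and challenges amount to ranking survey items by their positive and negative response rates. A minimal sketch follows; apart from the 85, 50, and 36 percent figures cited earlier in this report, the numbers and third item are invented.

```python
# Survey items as (item, percent positive, percent negative).
items = [
    ("Workforce has job-relevant knowledge and skills", 85, 6),
    ("Satisfied with training received for present job", 50, 36),
    ("Example item added for illustration", 70, 15),
]

# Strengths: top 10 by percent positive; challenges: top 10 by percent negative.
strengths = sorted(items, key=lambda r: r[1], reverse=True)[:10]
challenges = sorted(items, key=lambda r: r[2], reverse=True)[:10]

print("strengths:", [name for name, _, _ in strengths])
print("challenges:", [name for name, _, _ in challenges])
```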
On the basis of our interview with knowledgeable OPM officials and review of prior GAO work concerning survey design, administration, and processing, we determined that the data were sufficiently reliable for the purpose of our review. We assessed NMB’s strategic workforce planning efforts against government internal control standards and key practices identified in prior GAO work. We focused on workforce planning in this review because it is a critical, foundational element of an agency’s human capital program in that it helps ensure that agencies are making sound investments in the human capital issues that most affect their ability to achieve mission results. We reviewed NMB’s procurement contracting cycle procedures as of March 2012 and NMB’s independent auditors’ most recent report on internal controls over the procurement function. We reviewed key documents, such as purchase requests and purchase orders, for the agency’s recent acquisition of information technology equipment, including tablet computers, laptops, and smartphones. We assessed NMB’s policies and practices against federal internal control standards and standards outlined in the Federal Acquisition Regulation and GAO’s Framework for Assessing the Acquisition Function at Federal Agencies. To describe NMB’s process for overseeing employee representation elections and any challenges related to this process, we reviewed relevant federal laws and regulations and records related to NMB rulemakings on this subject, including proposed and final rules issued in 2010 and 2012. We also reviewed key documents, such as the National Mediation Board Representation Manual (effective March 25, 2013), notices and memorandums on revisions to phone and Internet voting instructions, electronic filing procedures for elections and challenges, frequently asked questions (revised 2013), and sample ballots. We obtained NMB officials’ and stakeholders’ perspectives on the challenges the agency faces in this process, as well as steps it has taken to address any challenges. For example, we reviewed select opinions issued by attorneys in the Office of Legal Affairs in fiscal years 2000 to 2013 to resolve challenges to voter eligibility and to allegations of election interference, influence, and coercion. We also reviewed and assessed the reliability of NMB data on the number and outcomes of union representation elections by fiscal year before and after implementation of the 2010 rule change. We compared NMB’s figures to our own calculations of the annual number of and outcomes for representation elections for fiscal years 2000 to 2013 (before and after the 2010 rule change), and the percentages of total elections that resulted in certification of a union representative, using data for each fiscal year posted on NMB’s website. In addition, we interviewed knowledgeable NMB officials responsible for compiling and reviewing these data. We determined the data were sufficiently reliable for the purpose of our review. In addition to the contact named above, Gretta Goodwin (Assistant Director), Rachael Chamberlin, Susan Aschoff, Alison Grantham, Shirley Abel, Alexander Galuten, Anjalique Lawrence, James Rebbe, Shaunyce Wallace, and Candice Wright made significant contributions to this report. In addition, key support was provided by Edward Alexander, Jr.; Elizabeth Curda; Benjamin Licht; Steven Lozano; and Walter Vance. 
A small federal agency, NMB facilitates labor relations in two key transportation sectors—railroads and airlines—through mediation and arbitration of labor disputes and overseeing union elections. Established under the Railway Labor Act, NMB's primary responsibility is to prevent work stoppages in these critical industries. The FAA Modernization and Reform Act of 2012 required GAO to evaluate NMB programs and activities. GAO examined NMB's (1) strategic planning and performance measurement practices; (2) controls for key management areas; and (3) challenges, if any, in overseeing elections. GAO assessed NMB's management practices using internal control standards and other GAO criteria; interviewed NMB officials, current and former board members, and key stakeholders from rail and air labor and management groups; and reviewed relevant federal laws, regulations, and NMB policies.
The National Mediation Board (NMB) recently updated its strategic plan but is not meeting some federal strategic planning and performance measurement requirements. NMB missed deadlines for updating its strategic plan and lacks performance measures to assess its progress in meeting its goals, even though an agency's strategic plan should form the basis for everything an agency does. NMB also lacks some controls in key management areas that could put its resources and its success at risk:
Financial accountability: NMB contracts for annual financial statement audits and internal control reviews. However, it lacks a formal process for addressing identified deficiencies, a key internal control.
Information technology: NMB recently transitioned to new information technology systems but is missing key management and security controls, including an information security program that fully implements federal requirements.
Human capital: NMB has taken steps to improve its human capital program but improvements are still needed. Although all NMB senior managers are eligible for retirement, NMB has not engaged in formal workforce planning to identify gaps in staff skills and strategies, such as training, to address them.
Procurement: NMB has established some key procurement policies and controls but weaknesses remain. For example, in a recent purchase of tablet computers for some staff, NMB did not follow its own procedures to assess the need for the devices or solicit competition.
Other management issues: NMB does not have an Inspector General (IG), and oversight by other federal agencies is limited. NMB also has a significant number of pending rail arbitration cases, and it lacks complete data on the types of cases filed to help it address the backlog and the costs.
NMB has adapted to challenges presented by large union elections resulting from airline mergers and has implemented improvements such as online voting. In 2010, NMB changed its rules for determining a majority in union elections. While this process caused disagreement among some stakeholders, NMB data suggest that the percentage of elections in which a union was certified has, thus far, remained relatively constant in the years before and after the rule change. Congress should consider authorizing an appropriate federal agency's IG to provide oversight of NMB. NMB should implement a formal strategic planning process and develop performance goals and measures to meet federal requirements, develop a process to address audit findings, implement key components of an information security and privacy program, and engage in strategic workforce planning.
NMB should also collect and analyze data on the types of rail grievances filed to help improve efficiency in its arbitration process. In commenting on a draft of this report, NMB said it would address our recommendations and described actions it plans to take. |
Our reviews of the SBIR program between 1985 and 1999 found numerous examples of program successes such as the following:
Funding high-quality research. Throughout the life of the program, awards have been based on technical merit and are generally of good quality.
Encouraging widespread competition. The SBIR program successfully attracts many qualified companies, has had a high level of competition, consistently has had a high number of first-time participants, and attracts hundreds of new companies annually.
Providing effective outreach. SBIR agencies consistently reach out to foster participation by women-owned or socially and economically disadvantaged small businesses by participating in regional small business conferences and workshops targeting these types of small businesses.
Increasing successful commercialization. At various points in the life of the program we have reported that SBIR has succeeded in increasing private sector commercialization of innovations.
Helping to serve mission needs. SBIR has helped serve agencies' missions and R&D needs, although we found that agencies differ in the emphasis they place on funding research to support their mission versus more generalized research.
Our reviews of the SBIR program during that time have also identified a number of areas of weakness that, over time, have been either fully or partially addressed by the Congress in reauthorizing the program or by the agencies themselves. For example:
Duplicate funding. In 1995, we identified duplicate funding for similar, or even identical, research projects by more than one agency. A few companies received funding for the same proposals two, three, and even five times before agencies became aware of the duplication. Contributing factors included the fraudulent evasion of disclosure requirements by companies applying for awards, the lack of a consistent definition for key terms such as “similar research,” and the lack of interagency sharing of data on awards. To address these concerns, we recommended that SBA take three actions: (1) determine if the certification form needed to be improved and make any necessary revisions, (2) develop definitions and guidelines for what constitutes “duplicative” research, and (3) provide interagency access to current information regarding SBIR awards. In response to our recommendations, SBA strengthened the language agencies use in their application packages to clearly warn applicants about the illegality of entering into multiple agreements for essentially the same effort. In addition, SBA planned to develop Internet capabilities to provide SBIR data access for all of the agencies.
Inconsistent interpretations of extramural research budgets. In 1998, we found that while agency officials adhered to SBIR's program and statutory funding requirements, they used differing interpretations of how to calculate their “extramural research budgets.” As a result, some agencies were inappropriately including or excluding some types of expenses. We recommended that SBA provide additional guidance on how participating agencies were to calculate their extramural research budgets. The Congress addressed this program weakness in 2000, when it required that the agencies report annually to SBA on the methods used to calculate their extramural research budgets.
Geographical concentration of awards.
In 1999, in response to congressional concerns about the geographical concentration of SBIR awards, we reported that companies in a small number of states, especially California and Massachusetts, had submitted the most proposals and won the majority of awards. The distribution of awards generally followed the pattern of distribution of non-SBIR expenditures for R&D, venture capital investments, and academic research funds. We reported that some agencies had undertaken efforts to broaden the geographic distribution of awards. In the 2000 reauthorization of the program, the Congress directed the SBA Administrator to establish the Federal and State Technology (FAST) Partnership Program to help strengthen the technological competitiveness of small businesses, especially in those states that receive fewer SBIR grants. The FAST Program was not reauthorized when it expired in 2005. In 2006 when we looked at the geographical concentration of awards made by DOD and NIH, we found that while a firm in every state received at least one SBIR award from both agencies, SBIR awards continued to be concentrated in a handful of states and about one third of awards had been made to firms in California and Massachusetts. Clarification on commercialization and other SBIR goals. Finally, in 2000, the Congress directed the SBA Administrator to require companies applying for a phase II award to include a commercialization plan with their SBIR proposals. This addressed our continuing concern that clarification was needed on the relative emphasis that agencies should give to a company’s commercialization record and SBIR’s other goals when evaluating proposals. In addition, in 2001, SBA initiated efforts to develop standard criteria for measuring commercial and other outcomes of the SBIR program and incorporate these criteria into its Tech-Net database. In fiscal year 2002, SBA further enhanced the reporting system to include commercialization results that would help establish an initial baseline rate of commercialization. In addition, small business firms participating in the SBIR program are required to provide information annually on sales and investments associated with their SBIR projects. Many of the solutions cited above to improve and strengthen the SBIR program relied to some extent on the collection of data or the establishment of a government-use database, so that SBA and participating agencies could share information and enhance their efforts to monitor and evaluate the program. However, in 2006, we reported that SBA was 5 years behind schedule in complying with the congressional mandate to develop a government database that could facilitate agencies’ monitoring and evaluation of the program. We also reported that the information SBA was collecting for the database was incomplete and inconsistent, thereby limiting its usefulness for program evaluations. Specifically, we identified the following concerns with SBA’s data-gathering efforts: SBA had not met its obligation to implement a restricted government-use database that would allow SBIR program evaluation as directed by the 2000 SBIR reauthorization act. As outlined in the legislation, SBA, in consultation with federal agencies participating in the SBIR program, was to develop a secure database by June 2001 and maintain it for program evaluation purposes by the federal government and certain other entities. 
SBA planned to meet this requirement by expanding the existing Tech-Net database to include a restricted government-use section that would be accessible only to government agencies and other authorized users. In constructing the government-use section of the database, SBA planned to supplement data already gathered for the public-use section of the Tech-Net database with information from SBIR recipients and from participating agencies on commercialization outcomes for phase II SBIR awards. However, according to SBA officials, the agency was unable to meet the statutory requirement, primarily because of increased security and other information technology project requirements, agency management changes, and budgetary constraints. When we reported on this lack of compliance with the database mandate, SBA told us that it anticipated having the government-use section of the Tech-Net database operational early in fiscal year 2007. However, according to an SBA official, the database became operational in October 2008, and agencies have begun to provide data on their SBIR programs using the Internet. While federal agencies participating in the SBIR program submitted a wide range of descriptive award information to SBA annually, these agencies did not consistently provide all of the required data elements. As outlined in SBA’s policy directive, each year, SBIR participating agencies are required to collect and maintain information from recipients and provide it to SBA so that it can be included in the Tech-Net database. Specifically, the policy directive established over 40 data elements for participating agencies to report for each SBIR award they make; a number of these elements are required. These data include award-specific information, such as the date and amount of the award, an abstract of the project funded by the award, and a unique tracking number for each award. Participating agencies are also required to provide data about the award recipient, such as gender and socio-economic status, and information about the type of firms that received the awards, such as the number of employees and geographic location. Much of the data participating agencies collect are provided by the SBIR applicants when they apply for an award. Agencies provide additional information, such as the grant/contract number and the dollar amount of the award, after the award is made. For the most part, all of the agencies we reviewed in 2006 provided the majority of the data elements outlined in the policy directive. However, some of the agencies were not providing the full range of required data elements. As a result, SBA did not have complete information on the characteristics of all SBIR awards made by the agencies. SBA officials told us that agencies did not routinely provide all of the data elements outlined in the policy directive because either they did not capture the information in their agency databases or they were not requesting the information from the SBIR applicants. Officials at the participating agencies cited additional reasons for the incomplete data they provided to SBA. For example, some officials noted that SBA’s Tech-Net annual reporting requirements often change, and others said that if the company or contact information changes and the SBIR recipient fails to provide updated information to the agency, the agency cannot provide this information to SBA. 
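The completeness problems described above are the kind that a simple intake check can surface before a submission is accepted. The sketch below is a minimal illustration, assuming a handful of hypothetical field names and format rules; it is not the actual Tech-Net schema, which defines over 40 data elements.

```python
import re

# Hypothetical subset of required data elements and format rules; the
# real policy directive defines many more elements than shown here.
REQUIRED_FIELDS = {
    "tracking_number": re.compile(r"^[A-Z0-9-]+$"),
    "award_date":      re.compile(r"^\d{4}-\d{2}-\d{2}$"),  # ISO date
    "award_amount":    re.compile(r"^\d+(\.\d{2})?$"),      # dollars
    "firm_state":      re.compile(r"^[A-Z]{2}$"),           # postal code
}

def validate_award_record(record):
    """Return a list of problems found in one agency-submitted record."""
    problems = []
    for field, pattern in REQUIRED_FIELDS.items():
        value = record.get(field)
        if value is None or str(value).strip() == "":
            problems.append(f"missing required field: {field}")
        elif not pattern.match(str(value)):
            problems.append(f"badly formatted {field}: {value!r}")
    return problems

# Example: one badly formatted element and one missing element.
submission = {"tracking_number": "AF-2008-0153",
              "award_date": "10/01/2008",     # wrong date format
              "award_amount": "750000.00"}    # firm_state is absent
print(validate_award_record(submission))
```

A system built this way rejects incomplete or inconsistently formatted records at submission time rather than trying to repair them afterward, which is essentially the behavior the operational database described below is said to enforce.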
Participating agencies were providing some data that are inconsistent with SBA’s formatting guidance, and while some of these inconsistencies were corrected by SBA’s quality assurance processes, others were not. In 2006, we determined that almost a quarter of the data provided by five of the eight agencies we reviewed were incorrectly formatted for one or more fields in the Tech-Net database. As a result, we concluded that these inconsistent or inaccurate data elements compromised the value of the database for program evaluation purposes. SBA’s quality assurance efforts focus on obtaining complete and accurate data for those fields essential to tracking specific awards, such as the tracking number and award amount, rather than on those fields that contain demographic information about the award recipient. We found that SBA electronically checked the data submitted by the participating agencies to locate and reformat inconsistencies, but it did not take steps to ensure that all agency-provided data were accurate and complete. We also determined that inconsistencies or inaccuracies could arise in certain data fields because SBA interpreted the absence of certain data elements as a negative entry without confirming the accuracy of such an interpretation with the agency. As we reported in 2006, such inaccuracies and inconsistencies were a concern because information in the Tech-Net database would be used to populate the government-use section of the database that SBA was developing (as discussed above) to support SBIR program evaluations. However, at the time of our review, SBA had no plans to correct any of the errors or inconsistencies in the database that related to the historical data already collected. As a result, we concluded that the errors in the existing database would migrate to the government-use section of the database and would compromise the usefulness of the government-use database for program evaluation and monitoring purposes. To address the concerns that we identified with regard to the quality of the data that SBA was collecting for the Tech-Net database, we recommended in our 2006 report that SBA work with the participating agencies to strengthen the completeness, accuracy, and consistency of its data collection efforts. According to an SBA official, the database is currently operational and agencies have entered data for fiscal years 2007 and 2008 over the Internet. Moreover, according to this official, the system is set up in such a way that it does not accept incorrectly formatted data. In 2006, we also found that SBA and some participating agencies focused on a few select criteria for determining applicants’ eligibility for SBIR awards. Specifically, we reviewed DOD’s, NIH’s, and SBA’s processes to determine eligibility of applicants for the SBIR program and found that they focused largely on three SBIR criteria in their eligibility reviews—ownership, size in terms of the number of employees, and for-profit status of SBIR applicants. Agency officials also told us, however, that they consider information on the full range of criteria, such as whether the principal investigator is employed primarily by the applying firm and the extent to which work on the project will be performed by others. Moreover, we found that both NIH and DOD largely relied on applicants to self-certify that they met all of the SBIR eligibility criteria as part of their SBIR applications. 
For example, at NIH, applicants certified that they met the eligibility criteria by completing a verification statement when NIH notified them that their application had been selected for funding but before NIH made the award. The verification statement directs applicants to respond to a series of questions relating to for-profit status, ownership, number of employees, where the work would be performed, and the primary employment of the principal investigator, among others. Similarly, DOD’s cover sheet for each SBIR application directs applicants to certify that they met the program’s eligibility criteria. NIH and DOD would not fund applications if the questions on their agency’s verification statement or cover sheet were not answered. Both NIH and DOD also warned applicants of the civil and criminal penalties for making false, fictitious, or fraudulent statements. In some cases, the agencies made additional efforts to ensure the accuracy of the information applicants provided when they observed certain discrepancies in the applications. In 2006, we reported that when officials at the agencies had unresolved concerns about the accuracy of an applicant’s eligibility information, they referred the matter to SBA to make an eligibility determination. We found that when SBA received a letter from the agency detailing its concerns, SBA officials contacted the applicants and asked them to re-certify their eligibility status and might request additional documentation on the criteria of concern. Upon making a determination of eligibility, SBA then notified the official at the inquiring agency, and the applicant, of its decision. Although SBA made the information about firms it found ineligible publicly available on its Web site so that all participating agencies and the public could access the information, we found that it did not consistently include information on the Web site identifying whether or not the determination was for the SBIR program. An SBA official told us the agency planned to include such information on its Web site more systematically before the end of fiscal year 2006. Once the agencies received information about applicants’ eligibility, they also had different approaches for retaining and sharing this information. For example, while both NIH and DOD noted the determination of ineligibility in the applicant’s file, NIH also centrally tracked ineligible firms and made this information available to all of its institutes and centers that make SBIR awards. In contrast, DOD did not have a centralized process to share the information across its awarding components, although DOD officials told us it was common practice for awarding components to share such information electronically. In conclusion, Mr. Chairman, while the SBIR program is generally recognized as a successful program that has encouraged innovation and helped federal agencies achieve their R&D goals, it has continued to suffer from some long-standing evaluation and monitoring issues that are made more difficult because of a lack of accurate, reliable, and comprehensive information on SBIR applicants and awards. The Congress recognized the need for a comprehensive database in 2000 when it mandated that SBA develop a government-use database. Although SBA did not meet its statutorily mandated deadline of June 2001, the database has been operational since October 2008, and contains limited new information but may also contain inaccurate historical data. Mr. Chairman, this concludes my prepared statement. 
I would be happy to respond to any questions that you or other members of the Committee may have. For further information about this statement, please contact me at (202) 512-3841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Vondalee Hunt, Anu Mittal, and Cheryl Williams also made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Small Business Innovation Development Act of 1982 established the Small Business Innovation Research program (SBIR) to stimulate technological innovation, use small businesses to meet federal research and development (R&D) needs, foster and encourage participation by minority and disadvantaged persons in technological innovation, and increase private sector commercialization of innovations derived from federal R&D. Since the program's inception, GAO has conducted numerous reviews of the SBIR program. This statement summarizes GAO's past findings on the SBIR program's (1) successes and challenges, (2) data collection issues that affect program monitoring and evaluation, and (3) how agencies make eligibility determinations for the program. GAO is not making any new recommendations in this statement. Between July 1985 and June 1999, GAO found that the SBIR program was achieving its goals to enhance the role of small businesses in federal R&D, stimulate commercialization of research results, and support the participation of small businesses owned by women and/or disadvantaged persons. More specifically, GAO found that throughout the life of the program, awards have been based on technical merit and are generally of good quality. In addition, the SBIR program successfully attracts many qualified companies, has had a high level of competition, consistently has had a high number of first-time participants, and attracts hundreds of new companies annually. Further, SBIR has helped serve agencies' missions and R&D needs; although GAO found that agencies differ in the emphasis they place on funding research to support their mission versus more generalized research. During these reviews GAO also identified areas of weakness and made recommendations that could strengthen the program further. Many of these recommendations have been either fully or partially addressed by the Congress in various reauthorizations of the program or by the agencies themselves. For example, in 2005, GAO found that the issue of how to assess the performance of the SBIR program remains somewhat unresolved after almost two decades, and identified data and information gaps that make assessment of the SBIR program a challenge. Many of the solutions to improve the SBIR program could be addressed, in part, by collecting better data and establishing a government-use database, so that SBA and participating agencies can share information and enhance their efforts to monitor and evaluate the program. However, in 2006, GAO reported that SBA was 5 years behind schedule in complying with a congressional mandate to develop a government-use database that could facilitate agencies' monitoring and evaluation efforts. 
Moreover, the information that SBA was collecting for the database was incomplete and inconsistent, thereby limiting its usefulness. In 2006, SBA told GAO that it expected to have the government-use database operational early in fiscal year 2007. However, the database did not become operational until October 2008 and currently contains 2 years of new data, according to an SBA official. The database also does not permit information to be entered in an inconsistent format. In 2006, GAO also found that SBA, NIH, and DOD focus on a few select criteria to determine the eligibility of applicants for SBIR awards. GAO reported that both NIH and DOD largely relied on applicants to self-certify that they met all of the SBIR eligibility criteria as part of their SBIR applications, although both made additional efforts to ensure the accuracy of the information when they observed discrepancies in the applications. When the agencies were unable to verify the eligibility of an applicant, they referred the application to SBA for an eligibility determination. GAO found that when SBA finds an applicant to be ineligible for the SBIR program, it places this information on its Web site but does not consistently identify that the ineligibility determination was made for the SBIR program. |
We assessed the GCSS-Army schedule that supported DOD’s December 2012 full deployment decision using the GAO Schedule Guide to determine whether it was comprehensive, well-constructed, credible, and controlled. To assess the schedule, we obtained and reviewed documentation, including the integrated master plan, work breakdown structure, and statement of work. To assess the program’s cost estimate, we used the GAO Cost Guide to evaluate the GCSS-Army Program Management Office’s estimating methodologies, assumptions, and results to determine whether the cost estimate was comprehensive, well-documented, accurate, and credible. We obtained and reviewed documentation, including the program office estimate, software cost model, independent cost estimate, and risk and uncertainty analysis. We also met with key program officials, such as the program manager, lead schedulers, and cost estimators, to present the preliminary results of our assessment of the program’s schedule and cost estimates against best practices and to obtain explanations and clarifications. We conducted this performance audit from October 2011 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GCSS-Army was initiated in December 2003 and is intended to provide all active Army, National Guard, and Army Reserve tactical units with the capability to track supplies, spare parts, and organizational equipment. The system is also intended to track unit maintenance, total cost of ownership, and other financial transactions related to logistics for all Army units—about 160,000 users. GCSS-Army is intended to integrate approximately 40,000 local supply and logistics databases into a single, enterprise-wide system. In December 2012, the Under Secretary of Defense for Acquisition, Technology and Logistics granted full deployment decision approval for GCSS-Army to be deployed to all remaining locations beyond the limited fielding locations of the life-cycle acquisition process. DOD officials reported that the GCSS-Army full deployment will be completed by the fourth quarter of fiscal year 2017. GCSS-Army program functionality is intended to be implemented across the Army in two waves—the first is to include two releases and is to provide supply (warehouse) and financial reporting capabilities, and the second is to include one release, which is to provide property book and maintenance capabilities. DOD has approved the funding for the Army to proceed with the deployment of the GCSS-Army functionality to all intended locations. This funding is approximately $3.7 billion. The Army reported that it had spent about $1.6 billion as of June 30, 2014. In October 2010, we reported that the Army did not fully follow best practices in developing a reliable schedule and cost estimate for implementing GCSS-Army. In particular, the Army had not developed a fully integrated master schedule that reflected all government and contractor activities and had not performed a sensitivity analysis for the cost estimate. 
We recommended that the Army develop an integrated master schedule that fully incorporated best practices, such as capturing all activities, sequencing all activities, integrating activities horizontally and vertically, establishing the critical path for all activities, and conducting a schedule risk analysis. In addition, we recommended that the Army update the cost estimate by using actual costs and preparing a sensitivity analysis. DOD concurred with our recommendations, and this report provides the status of the department’s efforts to address our prior recommendations. In March 2009, we published the Cost Guide to address a gap in federal guidance about processes, procedures, and practices needed to ensure reliable cost estimates. The Cost Guide provides a consistent methodology based on best practices that can be used across the federal government to develop, manage, and evaluate capital program cost estimates. The methodology is a compilation of characteristics and associated best practices that federal cost estimating organizations and industry use to develop and maintain reliable cost estimates throughout the life of an acquisition program. In May 2012, we issued an exposure draft of the Schedule Guide as a companion to the Cost Guide. A consistent methodology for developing, managing, and evaluating capital program cost estimates includes the concept of scheduling the necessary work to a timeline, as discussed in the Cost Guide. Simply put, schedule variances are usually followed by cost variances. Because some program costs, such as labor, supervision, rented equipment, and facilities, cost more if the program takes longer, a reliable schedule can contribute to an understanding of the cost impact if the program does not finish on time. In addition, management tends to respond to schedule delays by adding more resources or authorizing overtime. Further, a schedule risk analysis allows program management to account for the cost effects of schedule slippage when developing the life-cycle cost estimate. A cost estimate cannot be considered fully credible if it does not account for the cost effects of schedule slippage. We found that the program schedule and cost estimates for the GCSS-Army did not fully meet best practices. Specifically, the GCSS-Army schedule supporting the December 2012 full deployment decision partially met the comprehensiveness and construction characteristics and substantially met the credibility and control characteristics for developing a high-quality and reliable schedule. In addition, the cost estimate fully met the comprehensiveness characteristic, substantially met the documentation and accuracy characteristics, and partially met the credibility characteristic for developing a high-quality and reliable cost estimate. It is important that the schedule and cost estimates are continually updated throughout the program’s life cycle so that management has the best information available to make decisions. By incorporating best practices for developing reliable schedule and cost estimates, DOD would increase the probability of GCSS-Army successfully achieving full deployment by the fourth quarter of fiscal year 2017 to provide needed functionality for financial improvement and audit readiness. 
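To make the schedule-cost relationship concrete, the sketch below estimates the added cost of a slip from notional monthly burn rates for time-dependent cost items. The figures are invented for illustration and are not drawn from the GCSS-Army estimate.

```python
# Notional monthly costs for items that continue to accrue while the
# program runs (labor, supervision, rented equipment, facilities).
time_dependent_monthly_costs = {
    "labor": 2_500_000,
    "supervision": 400_000,
    "rented_equipment": 150_000,
    "facilities": 250_000,
}

def cost_of_slip(months_of_delay):
    """Added cost if the program finishes `months_of_delay` months late."""
    monthly_burn = sum(time_dependent_monthly_costs.values())
    return monthly_burn * months_of_delay

# At these notional rates, a 6-month slip adds about $19.8 million.
print(f"${cost_of_slip(6):,.0f}")
```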
Our analysis found that the GCSS-Army program substantially met two and partially met the other two characteristics of a reliable schedule estimate and therefore did not provide the information needed to support the December 2012 full deployment decision (see table 1). Appendix I contains our detailed analysis of the GCSS-Army schedule estimate. The success of any program depends on having a reliable schedule of the program’s work activities that will occur, how long they will take, and how the activities are related to one another. As such, the schedule not only provides a road map for systematic execution of a program, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. Comprehensive. A schedule should reflect all activities as defined in the program’s work breakdown structure, including activities to be performed by the government and the contractor; the resources (e.g., labor, materials, and overhead) needed to complete each activity; and how long each activity will take. We found that the GCSS-Army schedule partially met the comprehensive characteristic. The schedule used to support the full deployment decision reflected all activities to be performed by both the government and contractor for the program. However, resources were not loaded into the schedule software and were not assigned to specific activities in the schedule. GCSS-Army program management officials told us that the contractor used a separate system outside the schedule to manage the resources needed for the program. Information on resource needs and availability in each work period assists the program office in forecasting the likelihood that activities will be completed as scheduled. If the current schedule does not allow insight into the current or projected allocation of resources, the risk of the program’s schedule slipping is significantly increased. Our analysis also determined that activity durations were not manageable and reasonably estimated in the schedule. We found that 30 percent of the remaining activities in the schedule exceeded the standard best practice for activity duration, which should be shorter than approximately 44 working days, or 2 working months. For example, audit support activities had durations over 100 working days. Durations should be as short as possible to facilitate the objective measurement of accomplished effort. If activities are too long, the schedule may not have enough detail for effective progress measurement and reporting. Well-constructed. A schedule should be planned so that critical project dates can be met. To meet this objective, all activities should be logically sequenced—that is, listed in the order in which they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities), as well as activities that cannot begin until other activities are completed (i.e., successor activities), should be identified and their relationships established. The schedule should identify the project’s critical path. Establishing a valid critical path is necessary for examining the effects of any activity slipping along this path. The calculation of a critical path determines which activities drive the project’s earliest completion date. The schedule should also identify total float so that the schedule’s flexibility can be accurately determined. We found that the GCSS-Army schedule was partially well-constructed. 
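Before turning to those findings, the sketch below shows how a critical path and total float fall out of a standard forward-pass and backward-pass calculation over an activity network. The activities and durations are invented; the point is that zero-float activities are exactly the ones that drive the earliest completion date.

```python
# Each activity: duration in working days and its predecessors.
activities = {
    "design": {"dur": 20, "preds": []},
    "build":  {"dur": 40, "preds": ["design"]},
    "test":   {"dur": 15, "preds": ["build"]},
    "train":  {"dur": 10, "preds": ["design"]},
    "deploy": {"dur": 5,  "preds": ["test", "train"]},
}

# Forward pass: earliest start (ES) and earliest finish (EF).
# Assumes dict insertion order lists predecessors first (true here).
es, ef = {}, {}
for name, a in activities.items():
    es[name] = max((ef[p] for p in a["preds"]), default=0)
    ef[name] = es[name] + a["dur"]

# Backward pass: latest finish (LF) and latest start (LS).
project_end = max(ef.values())
lf, ls = {}, {}
for name in reversed(list(activities)):
    succs = [s for s, a in activities.items() if name in a["preds"]]
    lf[name] = min((ls[s] for s in succs), default=project_end)
    ls[name] = lf[name] - activities[name]["dur"]

# Total float = LS - ES; zero-float activities form the critical path.
for name in activities:
    float_days = ls[name] - es[name]
    tag = "  <- critical" if float_days == 0 else ""
    print(f"{name:7} float={float_days:3}{tag}")
```

Running this yields a critical path of design, build, test, and deploy (80 working days), with 45 days of float on the training activity; this is also why nondiscrete level-of-effort activities and hard date constraints, which bypass this logic, invalidate a critical path.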
The logic used to sequence the activities within the schedule was generally error-free, clearly indicating to program management the order of activities that must be accomplished. However, the schedule’s critical path was not valid because it included level of effort activities and date constraints. Level of effort activities, such as program management, should not define the critical path because they are nondiscrete support activities that do not produce a definite end product; therefore, level of effort activities cannot determine the length of the project. In addition, date constraints prevent the critical path from being a continuous sequence of events from the current to finish dates of the project. Rather than relying on such constraints, the schedule should use logic and durations in order to reflect realistic start and completion dates for activities. Successfully identifying the critical path relies on several factors, such as capturing all activities; properly sequencing activities; and assigning resources, which, as noted earlier, had not been completely done. Without a valid critical path, management cannot focus on activities that will have detrimental effects on the key project milestones and deliverables if they slip. Further, our analysis found that 28 percent of remaining schedule activities had more than 100 working days of total float, meaning that those activities could slip almost 5 working months and not affect the estimated finish date of the program. Based on the remaining duration of the program, 100 working days of float would not appear to be reasonable. The GCSS-Army Program Management Office stated that total float was not reliable at the time of the full deployment decision because the schedule was being updated to reflect a modification to the system. Without accurate values of total float for a program activity, management cannot determine the flexibility of tasks and therefore cannot properly reallocate resources from tasks that can safely slip to tasks that cannot slip without adversely affecting the estimated program completion date. Credible. A schedule should be horizontally and vertically integrated. A horizontally integrated schedule links products and outcomes with other associated sequenced activities, which helps verify that activities are arranged in the right order to achieve aggregated products or outcomes. A vertically integrated schedule ensures that the start and completion dates for activities are aligned with such dates on subsidiary schedules supporting tasks and subtasks. A schedule risk analysis should also be performed using statistical techniques to predict the level of confidence in meeting a program’s completion date. We found that the GCSS-Army schedule was substantially credible. The schedule was substantially horizontally integrated, which means that outcomes were aligned with sequenced activities. The schedule was also substantially vertically integrated; we were able to trace varying levels of activities and supporting subactivities. Such mapping or alignment among subsidiary schedules enables different groups—such as government teams and contractors—to work to the same master schedule, and provides assurance that the representation of the schedule to different audiences is consistent and accurate. However, our analysis found that a schedule risk analysis had not been fully conducted. 
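A schedule risk analysis of the kind referred to above is commonly implemented as a Monte Carlo simulation: each duration is treated as a distribution rather than a point value, and repeated sampling yields a confidence level for any completion date. The sketch below assumes a notional serial chain of activities with three-point (triangular) duration estimates; it illustrates the general technique, not the program's actual method.

```python
import random

# Three-point duration estimates (optimistic, most likely, pessimistic),
# in working days, for a notional serial chain of activities.
three_point = [(18, 20, 30), (35, 40, 70), (12, 15, 25), (4, 5, 9)]

def simulate_finish():
    """Sample one project duration from triangular distributions."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in three_point)

random.seed(1)
trials = sorted(simulate_finish() for _ in range(10_000))

deterministic = sum(mode for _, mode, _ in three_point)  # 80 days
p80 = trials[int(0.80 * len(trials))]                    # 80th percentile
share_on_time = sum(t <= deterministic for t in trials) / len(trials)

print(f"deterministic plan: {deterministic} days")
print(f"chance of finishing by plan: {share_on_time:.0%}")
print(f"80% confident finish: {p80:.0f} days")
```

Because the pessimistic tails are longer than the optimistic ones, the simulated chance of finishing by the deterministic date is well under 50 percent, which is exactly the kind of insight a point-estimate schedule cannot provide.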
GCSS-Army program management officials provided documentation for a schedule risk analysis, but we noted that risk analyses were not performed for all supporting activities because program management officials stated that the program fielding schedule was not finalized at the time of the full deployment decision. If a schedule risk analysis is not conducted, program management cannot determine (1) the likelihood that the project completion date will be met, (2) how much schedule risk contingency is needed to provide an acceptable level of certainty for completion by a specific date, (3) risks most likely to delay the project, (4) how much contingency reserve each risk requires, and (5) the activities that are most likely to delay the project. Controlled. A schedule should be continually updated using logic, durations, and actual progress to realistically forecast dates for program activities. A schedule narrative should accompany the updated schedule to provide decision makers and auditors a log of changes and their effect, if any, on the schedule time frame. The schedule should be analyzed continually for variances to determine when forecasted completion dates differ from planned dates. This analysis is especially important for those variations that affect activities identified as being in a program’s critical path and that can affect a scheduled completion date. A baseline schedule should be used to manage the program scope, the time period for accomplishing it, and the required resources. We found that the GCSS-Army schedule was substantially controlled. GCSS-Army program management officials stated that they met weekly to discuss proposed schedule changes and update the schedule’s progress, and management also prepared a schedule narrative document that contained a list of custom fields and assumptions. In addition, we found no anomalies throughout the schedule (e.g., activities with planned start dates scheduled to occur in the past and activities with actual finish dates scheduled to occur in the future). However, we found that there was not a documented baseline schedule to measure program performance against, which would allow management to monitor any schedule variances that affect the completion of work. Without a formally established baseline schedule to measure performance against, management cannot identify or mitigate the effect of unfavorable performance. In our October 2010 report, we recommended that the Army develop an integrated master schedule that fully incorporated best practices, such as capturing all activities, sequencing all activities, integrating activities horizontally and vertically, establishing the critical path for all activities, and conducting a schedule risk analysis. The Army’s December 2012 GCSS-Army schedule used to support the full deployment decision addressed several of the best practices that were an issue in our prior report, including capturing all activities, sequencing all activities, and integrating activities horizontally and vertically. However, as discussed, we continued to identify several best practices that were not yet fully addressed and also identified several new areas where the 2012 schedule did not incorporate best practices, such as activity durations and baseline schedule. 
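Findings like the duration and float percentages cited above (30 percent of remaining activities over the roughly 44-working-day guideline; 28 percent with more than 100 days of total float) amount to simple threshold screens over a schedule export. A minimal sketch, using made-up rows:

```python
# Hypothetical rows from a schedule export: remaining activities with
# their remaining duration and total float, both in working days.
remaining = [
    {"id": "A1", "duration": 30,  "total_float": 12},
    {"id": "A2", "duration": 110, "total_float": 0},   # e.g., audit support
    {"id": "A3", "duration": 44,  "total_float": 130},
    {"id": "A4", "duration": 60,  "total_float": 105},
]

DURATION_LIMIT = 44   # about 2 working months, per the guideline above
FLOAT_LIMIT = 100     # float large enough to question its realism

long_tasks = [a for a in remaining if a["duration"] > DURATION_LIMIT]
high_float = [a for a in remaining if a["total_float"] > FLOAT_LIMIT]

print(f"{len(long_tasks)/len(remaining):.0%} of activities exceed "
      f"{DURATION_LIMIT} working days")
print(f"{len(high_float)/len(remaining):.0%} have more than "
      f"{FLOAT_LIMIT} days of total float")
```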
Although GCSS-Army is in full deployment, without fully addressing best practices for scheduling, program managers will not have the best information available to make decisions related to issues such as the sequencing of activities and the flexibility of the schedule according to available resources. We found that the GCSS-Army program fully met one, substantially met two, and partially met one of the characteristics of a reliable cost estimate and therefore did not provide the information needed to support the full deployment decision, as shown in table 2. Appendix II contains our detailed analysis of the GCSS-Army cost estimate. A reliable cost estimate is critical to the success of any program and is updated continually throughout its life cycle. Such an estimate provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction when warranted, and accountability for results. Comprehensive. A cost estimate should include costs of the program over its full life cycle, provide a level of detail appropriate to ensure that cost elements are neither omitted nor double-counted, and document all cost-influencing ground rules and assumptions. The cost estimate should also completely define the program and be technically reasonable. We found that the cost estimate for GCSS-Army was fully comprehensive. The cost estimate included both government and contractor costs of the program over its life cycle—from the inception of the program through design, development, deployment, and operation and maintenance. The cost estimate also included an appropriate level of detail, which provided assurance that cost elements were neither omitted nor double-counted, and included documentation of all cost-influencing ground rules and assumptions. The cost estimate documentation included the purpose of the cost estimate, a technical description of the program, and technical risks (e.g., the resolution for any identified deficiencies). Well-documented. A cost estimate should be supported by detailed documentation that describes how it was derived and how the expected funding will be spent in order to achieve a given objective. The documentation should capture such things as the source data used, the calculations performed, the results of the calculations, the estimating methodology used to derive each work breakdown structure element’s cost, and evidence that the estimate was approved by management. The documentation should discuss the technical baseline description, and the data in the technical baseline should be consistent with the cost estimate. We found that the cost estimate for GCSS-Army was substantially well-documented. The cost estimate captured such things as the calculations performed to derive each element’s cost and the results of the calculations. The documentation also included a technical baseline description that provided data consistent with the cost estimate. Further, the GCSS-Army Program Management Office presented evidence of receiving approval of the estimate through briefings to management. Although program management officials did not provide us with written documentation of the source data, the Office of the Deputy Assistant Secretary of the Army for Cost and Economics (DASA-CE) did provide us with a full deployment decision briefing, which showed each major cost element and listed the methodology and sources of the data. 
However, the briefing documents included a limited amount of the actual source data, and we could not determine their reliability. Without sufficient background information about the source data and reliability of the data, the GCSS-Army cost estimator cannot know with any confidence whether the data collected can be used directly or need to be modified before use in the cost estimate. Accurate. A cost estimate should provide for results that are unbiased, are not overly conservative or optimistic, and contain no major mistakes. A cost estimate should be based on an assessment of most likely costs (adjusted properly for inflation), updated to reflect significant changes and grounded in a historical record of cost estimating and actual experiences on other comparable programs. In addition, variances between planned and actual costs should be documented, explained, and reviewed, and estimating techniques for each cost element should be used appropriately. We found that the cost estimate for GCSS-Army was substantially accurate. The GCSS-Army cost model detailed the inflation indexes and properly applied the indexes to each relevant cost element and included time phasing of the costs. The GCSS-Army cost model did not include any major mistakes, and all its cost elements summed up properly and were consistent with the cost estimate. In addition, the estimating techniques (i.e., engineering build-up) used to create the estimate were applied appropriately. However, the cost model documentation did not explain whether the cost estimate was updated to reflect changes in technical or program assumptions. The program management officials provided documentation that reflected the technical changes for the major deployment decisions, but the documentation did not include details on how the costs were updated. Unless such documentation is available to verify that the cost estimate is properly updated on a regular basis, management will not have reasonable assurance that the cost estimate provides accurate information to make informed decisions about the program. Credible. A cost estimate should discuss any limitations of the analysis because of uncertainty or biases surrounding data or assumptions. The cost estimate should include a sensitivity analysis that identifies a range of possible costs based on varying major assumptions and data. A risk and uncertainty analysis should be conducted to determine the level of risk associated with the cost estimate and identify the effects of changing key cost driver assumptions and factors. In addition, the estimate’s results should be cross-checked and reconciled to an independent cost estimate to determine whether other estimating methods produce similar results. We found that the cost estimate was partially credible. The Army Cost Review Board developed an independent cost estimate that was reconciled to the program management officials’ cost estimate. The program management officials’ cost estimate mentioned results of a risk analysis; however, the risk and uncertainty analysis was not documented. Further, since the cost estimate that was provided discussed risk only at a summary level, it is unclear how management considered risk related to the program. Without a fully documented risk and uncertainty analysis, the estimate will lose credibility and management’s decision-making ability will be impaired because it will not know the level of confidence associated with achieving the cost estimate. 
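A common form of the sensitivity analysis this section discusses re-prices the estimate while varying one major assumption at a time, so cost drivers can be ranked by the swing they produce. The sketch below does this with invented cost elements and ranges, not figures from the GCSS-Army cost model.

```python
# Notional baseline costs (in millions) for major cost elements.
baseline = {"system_deployment": 900, "training": 450, "sustainment": 700}

# Plausible low/high multipliers for each element's key assumption,
# e.g., fielding pace for deployment or class size for training.
ranges = {"system_deployment": (0.90, 1.30),
          "training":          (0.85, 1.25),
          "sustainment":       (0.95, 1.10)}

total = sum(baseline.values())
print(f"baseline total: ${total}M")

# Vary one driver at a time, holding the others at baseline.
swings = []
for element, (lo, hi) in ranges.items():
    low_total = total - baseline[element] + baseline[element] * lo
    high_total = total - baseline[element] + baseline[element] * hi
    swings.append((high_total - low_total, element, low_total, high_total))

# Rank drivers by the width of the swing (a tornado-chart ordering).
for swing, element, low_total, high_total in sorted(swings, reverse=True):
    print(f"{element:18} ${low_total:.0f}M - ${high_total:.0f}M "
          f"(swing ${swing:.0f}M)")
```

Documenting output like this, one line per driver, is what lets a reviewer see which single assumption most affects the estimate, which is precisely the record that was missing here.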
In addition, program management officials provided a cost estimate that identified major cost drivers, including system deployment and training. The cost estimate documentation contained a reference that a sensitivity analysis was completed on these cost drivers, but results of this analysis were not documented. As a result, the GCSS-Army cost estimator will not have a clear understanding of how each major cost driver is affected by a change in a single assumption and thus which cost driver most affects the cost estimate. Further, GCSS-Army program officials provided us with one example of evidence that indicated that some cross-checking was performed using cost models; however, the results of this cross-checking were not documented. The purpose of cross-checking is to determine whether alternative methods would produce similar results, which would increase the credibility of the estimate. In our October 2010 report, we recommended that the Army update the GCSS-Army cost estimate by using actual costs and preparing a sensitivity analysis. For the 2012 cost estimate, we found that the Army had made progress, but we continued to identify deficiencies in documentation related to the sensitivity analysis, risk and uncertainty analysis, and cross-checking of major cost elements for reasonableness. While the Army made some improvements to the schedule and cost estimates that supported the full deployment decision, the Army did not fully meet best practices in developing schedule and cost estimates for the GCSS-Army program. The Army made progress in incorporating schedule best practices, such as capturing and sequencing all activities and integrating activities horizontally and vertically, but we identified other deficiencies in schedule and cost best practices. For example, GCSS-Army did not meet best practices related to schedule durations, a valid critical path, and a cost sensitivity analysis. It is critical to correct the deficiencies identified with the schedule and cost estimates to help ensure that the projected spending for this program is being used in the most efficient and effective manner. By incorporating best practices for developing reliable schedule and cost estimates, DOD would increase the probability of GCSS-Army successfully achieving full deployment by the fourth quarter of fiscal year 2017 to provide needed functionality for financial improvement and audit readiness. To help improve the implementation of GCSS-Army, we recommend that the Secretary of the Army take the following two actions: Ensure that the Under Secretary of the Army, in his capacity as the Chief Management Officer, directs the GCSS-Army Program Management Office to develop an updated schedule that fully incorporates best practices, including assigning resources to all activities, establishing durations of all activities, confirming that the critical path is valid, and ensuring reasonable total float. Ensure that the Under Secretary of the Army, in his capacity as the Chief Management Officer, directs the GCSS-Army Program Management Office to update the cost estimate to fully incorporate best practices by documenting the results of a risk and uncertainty analysis, the cross-checking of major cost elements to see if results are similar, and a sensitivity analysis. We provided a draft of this report to DOD for review and comment. 
In its written comments, reprinted in appendix III, DOD concurred with our recommendation to update the schedule to fully incorporate best practices and described planned and ongoing actions that the department is taking to address the recommendation. In particular, DOD indicated that the Army has taken steps to help ensure that (1) all activities are assigned resources in the schedule software, (2) all schedule activities with long durations have been detailed, (3) level of effort activities and date constraints have been removed from the schedule so that they do not define the critical path, and (4) the majority of the schedule activities associated with high total float have been removed. If effectively implemented, these actions should address the intent of our recommendation. DOD also concurred with our recommendation to update the cost estimate to fully incorporate best practices by documenting the results of a risk and uncertainty analysis, the cross-checking of major cost elements to see if results are similar, and a sensitivity analysis. DOD described completed actions that the department has taken to address the recommendation. DOD stated that GCSS-Army achieved Milestone C in August 2011 and a full deployment decision in December 2012, and that it prepared a cost estimate per DOD acquisition rules and guidelines. DOD also stated that the Army (1) followed all Army-directed best practices and approvals from the Office of the Deputy Assistant Secretary of the Army for Cost and Economics and (2) prepared a sensitivity analysis, a risk analysis, and cross-checked major cost elements for similar results, but that those documented analyses and results were not included in the formal cost estimates as directed by the Army. DOD commented that these documented analyses and results are part of the formal working papers and were provided to GAO in February 2013. However, these actions do not fully address the intent of our recommendation. As stated in our report, we focused on the extent to which GCSS-Army’s schedule and cost estimates were prepared consistent with GAO’s Schedule and Cost Guides. We reviewed the cost estimate documentation provided by the Army in February 2013 and additional information provided in February 2014 and determined that the documentation did not fully meet best practices for a risk and uncertainty analysis, a sensitivity analysis, and cross-checking of major cost elements for similar results. As stated in our report, GCSS-Army program management officials provided a cost estimate that mentioned the results of a risk and uncertainty analysis, and contained a reference that a sensitivity analysis was completed. Also, GCSS-Army program management officials provided us with one example of evidence that indicated some cross-checking was performed using cost models. However, the results of the risk and uncertainty and sensitivity analyses, as well as the cross-checking, were not documented in a manner consistent with best practices. As stated in our report, incorporating best practices for a reliable cost estimate would help ensure that DOD has a reliable cost estimate that provides the basis for effective resource allocation, proactive course correction when warranted, and accountability for results. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; the Assistant Secretary of Defense (Acquisition); the Acting Deputy Chief Management Officer; the Under Secretary of Defense (Comptroller); the Under Secretary of the Army, in his capacity as the Chief Management Officer of the Army; and the Program Manager for GCSS-Army. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Asif A. Khan at (202) 512-9869 or [email protected] or Nabajyoti Barkakati at (202) 512-4499 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. This appendix provides the results of our analysis of the extent to which the Global Combat Support System-Army schedule supporting the December 2012 full deployment decision met the characteristics of a high-quality, reliable schedule. Table 3 provides the detailed results of our analysis. GAO’s methodology includes five levels of compliance with its best practices. “Not met” means the program provided no evidence that satisfies any part of the criterion. “Minimally met” means the program provided evidence that satisfies a small portion of the criterion. “Partially met” means the program provided evidence that satisfies about half of the criterion. “Substantially met” means the program provided evidence that satisfies a large portion of the criterion. “Fully met” means the program provided evidence that completely satisfies the criterion. This appendix provides the results of our analysis of the extent to which the Global Combat Support System-Army cost estimate supporting the December 2012 full deployment decision met the characteristics of a high-quality cost estimate. Table 4 provides the detailed results of our analysis. GAO’s methodology includes five levels of compliance with its best practices. “Not met” means the program provided no evidence that satisfies any part of the criterion. “Minimally met” means the program provided evidence that satisfies a small portion of the criterion. “Partially met” means the program provided evidence that satisfies about half of the criterion. “Substantially met” means the program provided evidence that satisfies a large portion of the criterion. “Fully met” means the program provided evidence that completely satisfies the criterion. In addition to the contacts named above, Arkelga Braxton (Assistant Director), Karen Richey (Assistant Director), Beatrice Alff, Tisha Derricotte, Jennifer Echard, Emile Ettedgui, Patrick Frey, and Jason Lee made key contributions to this report. | DOD officials have stated that the implementation of enterprise resource planning systems, such as GCSS-Army, is critical to the department's goal of correcting financial management deficiencies and ensuring that its financial statements are validated as audit ready by September 30, 2017, as called for by the National Defense Authorization Act for Fiscal Year 2010. GAO was asked to review the schedule and cost estimates for selected DOD systems. This report addresses the extent to which the schedule and cost estimates for GCSS-Army were prepared consistent with GAO's Schedule and Cost Guides. The schedule and cost estimates are designed to cover GCSS-Army implementation through 2017. 
GAO assessed the schedule and cost estimates that supported DOD's December 2012 full deployment decision, which granted approval for GCSS-Army to be deployed for operational use to all remaining locations. GAO also met with GCSS-Army program officials, including lead schedulers and cost estimators. The Army made some improvements to its schedule and cost estimates that supported the December 2012 full deployment decision for the Global Combat Support System-Army (GCSS-Army); however, the schedule and cost estimates did not fully meet best practices. GAO found that the schedule substantially met the credibility and control characteristics for developing a high-quality and reliable schedule. For example, the schedule was horizontally integrated, which means that it links products and outcomes with other associated sequenced activities. In addition, the GCSS-Army program management officials followed general guidelines for updating the schedule on a regular basis. GAO found that the schedule partially met the comprehensiveness and construction characteristics for a reliable schedule. Specifically, resources were not assigned to specific activities, and the schedule lacked a valid critical path, preventing management from focusing on the activities most likely to have detrimental effects on key program milestones if not completed as planned. By incorporating best practices for developing a reliable schedule, the Department of Defense (DOD) would increase the probability of completing the GCSS-Army program by the projected date. GAO found that the GCSS-Army cost estimate fully or substantially met the comprehensiveness, documentation, and accuracy characteristics of a high-quality and reliable cost estimate. For example, the cost estimate included both government and contractor costs for the program over its life cycle, provided documentation that substantially described detailed calculations used to derive each element's cost, and was adjusted for inflation. In addition, GAO found that the cost estimate partially met the credibility characteristic of a reliable cost estimate. Although program management officials provided a cost model that discussed a limited risk analysis, the results of the risk and uncertainty analysis were not documented. Incorporating best practices would help ensure that DOD has a reliable cost estimate that provides the basis for effective resource allocation, proactive course correction when warranted, and accountability for results. GAO is making two recommendations aimed at improving the Army's implementation of schedule and cost best practices for GCSS-Army. DOD concurred, but the completed actions it described related to the cost estimate were not fully responsive to GAO's recommendation. GAO continues to believe that fully incorporating best practices in the cost estimate would help improve its reliability. |
The federal government has taken a number of steps to combat threats posed by drug cartels, including potential crime and violence directed against U.S. citizens and government interests. For example, in 2008, the U.S. government began a program—known as the Mérida Initiative—to provide Mexico and the countries of Central America with financial and technical assistance for counterdrug efforts, among others. In March 2009, as a response to the violence in Mexico, DHS announced a new southwest border initiative to guard against violent crime spillover into the United States by increasing the deployment of personnel and technology along the southwest border. In addition, in June 2009, the Office of National Drug Control Policy issued the National Southwest Border Counternarcotics Strategy with the goal to substantially reduce the flow of illicit drugs, drug proceeds, and associated instruments of violence across the southwest border. To accomplish this goal, the strategy listed disrupting and dismantling drug-trafficking organizations along the southwest border as one of its key objectives. In August 2010, President Barack Obama signed an emergency supplemental appropriation for border security, which included $600 million in supplemental funds for enhanced border protection and law enforcement activities. The President also separately authorized the temporary deployment of up to an additional 1,200 National Guard troops to the border to assist law enforcement agencies in their efforts to target illicit networks’ trafficking in people, drugs, illegal weapons, and money, and the violence associated with these illegal activities. Moreover, in May 2011, DHS Secretary Napolitano stated that CBP, in partnership with independent third-party stakeholders, had begun the process of developing an index to comprehensively and systematically measure security along the southwest border and quality of life in the region. As we reported in May 2012, this index—the Border Condition Index—is being developed, and accordingly, it is too early to determine how it will be used to provide oversight of border security efforts. At the federal level, five agencies in two departments are responsible for securing the border and combating drug cartel–related activities along the southwest border. These agencies enforce federal laws related to, among other things, immigration, drugs, weapons, and organized crime. Additionally, they collect data related to their criminal investigations and operations to support prosecutions. Specifically, they track violations of federal criminal statutes relevant to their responsibilities, including the number of pending and closed cases, arrests, convictions, indictments, seizures, and forfeitures. Table 1 presents information on these law enforcement agencies and their responsibilities. In addition to enforcing laws, a number of agencies have intelligence components and oversee interagency task forces responsible for collecting, analyzing, and disseminating information related to threats from the drug cartels. These components include DHS’s Office of Intelligence and Analysis and intelligence offices within CBP and U.S. Immigration and Customs Enforcement (ICE), as well as DOJ’s DEA, and the FBI. These entities produce various intelligence products, such as threat assessments, related to Mexican drug cartel-related activities in support of law enforcement operations. 
Also, the Office of National Drug Control Policy, in the Executive Office of the President, is responsible for coordinating the national drug control effort, and designates areas within the United States that are significant centers of illegal drug production, manufacturing, importation, or distribution as High Intensity Drug Trafficking Areas. Law enforcement agencies in these designated areas collect and share intelligence and coordinate interagency task forces to target drug-trafficking operations. At the state and local levels, sheriffs’ offices and municipal police departments are responsible for investigating and tracking crime occurring in their jurisdictions, based on the laws of their respective states. If the investigation determines that the criminal violation falls under federal purview, such as an immigration violation, the local law enforcement agency may refer the case to the appropriate federal agency and might not track such cases in its records. The Departments of Public Safety in Arizona, New Mexico, and Texas, and the state Department of Justice in California, are responsible for overseeing the process for collecting, validating, and publishing crime data from local agencies. These agencies voluntarily submit crime data to the FBI, which is responsible for publishing and archiving national crime statistics. The FBI oversees the UCR Program, the federal government’s centralized repository for crime data. The UCR Program provides a nationwide view of crime and is based on the voluntary submission of a variety of statistics by city, county, and state law enforcement agencies. Begun in 1930, the UCR Program established a system to collect summary data, known as SRS data, which now covers 8 types of violent and property crimes, referred to as Part I offenses, that are reported to law enforcement agencies. Violent crimes are composed of murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault. Property crimes are composed of burglary, larceny-theft, motor vehicle theft, and arson. If multiple offenses are reported for an individual crime incident, only the highest-level offense is recorded. Offense data submitted to the FBI by local law enforcement agencies show the aggregate counts for reported crimes and arrests for the 8 Part I offenses and aggregate counts on arrests made for 21 other offenses, such as embezzlement, prostitution, and drug abuse violations. These UCR data can be used to measure fluctuations in the type and volume of crime for specific offenses in a particular jurisdiction for which they have been collected. The FBI reported that 18,233 law enforcement agencies in the United States, representing 97.8 percent of the U.S. population, submitted UCR data in 2011. As of November 2012, law enforcement agencies in 46 states and the District of Columbia were submitting UCR data through a state UCR Program, or a district system in the case of the District of Columbia. In the remaining 4 states, local law enforcement agencies submit UCR data directly to the FBI. State programs are to conform to national UCR Program standards, definitions, and quality control procedures in order for their data to be submitted to the FBI. The FBI is to help state UCR Programs meet these requirements by, among other actions, reviewing and editing data submitted by individual agencies and providing technical assistance on reporting procedures. 
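The hierarchy rule described above, under which only the highest-level Part I offense in a multiple-offense incident is recorded, can be illustrated with a short sketch. The Python below is ours, not the FBI’s implementation: it ranks offenses in the order in which Part I offenses are listed above, and it omits additional conventions of the actual rule, such as the special handling of arson.

    # Minimal sketch of the UCR SRS hierarchy rule: when multiple Part I
    # offenses occur in a single incident, only the highest-level offense
    # is counted. The ranking follows the order in which Part I offenses
    # are listed in this report; this is an illustration, not the FBI's
    # implementation.
    PART_I_RANKING = [
        "murder and nonnegligent manslaughter",
        "forcible rape",
        "robbery",
        "aggravated assault",
        "burglary",
        "larceny-theft",
        "motor vehicle theft",
        "arson",
    ]

    def rank(offense: str) -> int:
        """Lower index means a higher-level (more serious) offense."""
        return PART_I_RANKING.index(offense)

    def recorded_offense(incident_offenses: list[str]) -> str:
        """Apply the hierarchy rule: keep only the highest-level offense."""
        return min(incident_offenses, key=rank)

    # An incident involving both a robbery and an aggravated assault is
    # recorded under the SRS as a single robbery.
    assert recorded_offense(["aggravated assault", "robbery"]) == "robbery"

Under this rule, an incident that includes, for example, a burglary and a motor vehicle theft contributes one burglary, and nothing else, to the SRS counts.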
To meet the needs of the law enforcement community for more detailed crime data, the FBI introduced NIBRS in 1988 with the intent that local law enforcement agencies would transition from the SRS to NIBRS at their own pace. NIBRS collects data on more types of offenses than the traditional SRS and includes details on individual incidents, such as information on offenders, victims, property, and whether multiple offenses are reported in an individual crime incident. NIBRS collects offense and arrest data on 46 specific crimes grouped in 22 offense categories, which include 8 Part I offenses and other offenses, such as extortion and kidnapping. In addition, NIBRS collects arrest data for 10 other crimes, such as trespassing and driving under the influence. The data can be used to examine linkages among offenses, offenders, victims, property, and arrestees. Tables that list offenses collected for the UCR SRS and the NIBRS programs and summarize the main differences between the two crime data systems can be found in appendix III. NIBRS allows local law enforcement agencies to report a wider range of offenses and arrests. However, the FBI reported that, as of 2011, 7,819 law enforcement agencies, representing 28 percent of the U.S. population, contributed NIBRS data to the UCR Program. According to senior FBI officials, because of the voluntary nature of the UCR Program, implementation of NIBRS occurs at a pace commensurate with the resources, abilities, and limitations of the contributing law enforcement agency. Since participation in the program is limited, the FBI converts NIBRS data submitted by law enforcement agencies to the SRS format. UCR SRS data provide the best available information on crime levels and crime trends in southwest border counties. Our interviews with officials from 33 of the 36 local law enforcement agencies in the southwest border counties determined that SRS data are the only crime data that those agencies collect in a systematic way—that is, in an automated form that can be readily retrieved and analyzed. Our analysis determined that the remaining 3 local law enforcement agencies also systematically collect SRS data, but we do not know whether they also systematically collect other crime data because these agencies were not available to participate in our interviews. The sheriff’s office in Yuma County, Arizona, is the only southwest border law enforcement agency that collects NIBRS data. The UCR data cannot be used to draw conclusions about the extent to which crimes are attributable to spillover from Mexico. The SRS does not collect data on all types of crimes committed in the United States that have been associated with Mexican drug-trafficking organizations, such as particular types of kidnappings or home invasions. Further, the SRS does not collect enough information, such as the motivation for committing a crime, to identify a link between violent or property crime rates and crimes associated with spillover from Mexico, such as drug trafficking. Because of its summary nature, the SRS does not provide data about individual crime incidents, including details on offenses, arrests, victim/offender relationships, or whether multiple offenses occurred in an individual crime incident. In addition, UCR data might also underreport the actual amount of crime that has occurred, since not all crimes are reported to law enforcement. 
For example, law enforcement officials with whom we spoke stated that individuals who may have been assaulted or robbed in the course of drug trafficking and other illicit activities are hesitant to report their involvement to the police. Moreover, senior FBI officials stated that NIBRS data, although more comprehensive than SRS data, also might not include sufficient detail to provide information on spillover crime even if they were more widely available. Cognizant of these limitations, we analyzed SRS crime data to calculate violent and property crime rates for both border and nonborder counties in the four southwest border states: Arizona, California, New Mexico, and Texas. Our analyses of SRS data for border and nonborder counties showed that in all four states, both violent and property crime rates per 100,000 population were generally lower in 2011 than in 2004. Figure 1 shows the changes in crime rates from 2004 through 2011 for southwest border and nonborder counties. (Detailed data for fig. 1 can be found in app. IV.) With respect to violent crimes, as shown in figure 1: The violent crime rate was lower in border counties than in nonborder counties for three of the four southwest border states. Comparing all border counties combined with all nonborder counties combined within each state, the violent crime rate in California and Texas border counties was lower than in nonborder counties each year from 2004 through 2011, and lower in New Mexico border counties each year from 2005 through 2011. In contrast, the violent crime rate in Arizona border counties was higher than in nonborder counties from 2004 to 2011. The violent crime rate declined over time in both border and nonborder counties across all southwest border states. Comparing 2011 with 2004, the violent crime rate in border counties in 2011 was lower by 33 percent in Arizona, 26 percent in California, and 30 percent in Texas. In nonborder counties, the decrease was 22 percent, 25 percent, and 24 percent, respectively. The violent crime rate in border counties in New Mexico was lower by 8 percent in 2011 than in 2005, and in nonborder counties the decrease was 19 percent. With two exceptions, the violent crime rate was lower over time in large border counties across the southwest border states. The violent crime rate in 2011 was lower than in 2004 in 10 of 12 large border counties in Arizona, California, and Texas with sufficiently complete data for analysis. The violent crime rate in Dona Ana County, New Mexico, was lower in 2011 than in 2005. Additionally, across all 7 small border counties with sufficiently complete data for analysis, the total number of violent crimes for these counties in 2011 was also lower than in 2004. With respect to property crimes, as shown in figure 1: The property crime rate in border counties was either lower than or similar to the rate in nonborder counties in three of the four southwest border states. Comparing all border counties combined with all nonborder counties combined within each state, the property crime rate in California border counties was lower than the rate in nonborder counties each year from 2009 through 2011. Each year from 2004 through 2008, the crime rate in California border and nonborder counties was similar. The rate in Texas border counties was similar to the rate in nonborder counties each year from 2004 through 2011. 
The rate in New Mexico border counties was lower than in nonborder counties in all years, 2005 through 2011. The property crime rate declined over time in both border and nonborder counties in three of the four southwest border states. Comparing 2011 with 2004, the property crime rate in border counties in 2011 was lower by 35 percent in California and 28 percent in Texas. In nonborder counties, the decrease was 23 percent and 22 percent, respectively. The property crime rate in border counties in New Mexico was lower by 7 percent in 2011 than in 2005, and in nonborder counties the decrease was 18 percent. The property crime rate was lower over time in large border counties across the southwest border states. The property crime rate in 2011 was lower than in 2004 in all 11 large border counties in Arizona, California, and Texas with sufficiently complete data for analysis. The property crime rate in Dona Ana County, New Mexico, was lower in 2011 than in 2005. Additionally, across all 7 small border counties with sufficiently complete data for analysis, the total number of property crimes for these counties in 2011 was also lower than in 2004. Comparing UCR SRS and NIBRS data for the Yuma County sheriff’s office, we found comparable decreases in violent crimes. Specifically, we found that the total number of violent crimes reported through NIBRS was 32 percent lower in 2010 than in 2007, when the office began reporting NIBRS data. The number of violent crimes reported in the SRS format was 33 percent lower in 2010 than in 2007. (Additional detail on our analysis results is presented in app. V.) Local law enforcement officials with whom we spoke provided a range of factors that they thought contributed to declining violent and property crime rates, including increased law enforcement presence, either federal, local, or a combination of both, and new infrastructure, such as a border fence. Federal law enforcement agencies have few efforts under way to track what might be considered to be spillover crime, including violence, for several reasons. First, while several federal components established a definition of spillover crime, there is no common government definition of such crime. For example, in 2009, the DEA reported that U.S. intelligence and law enforcement agencies agreed to define spillover violence as deliberate, planned attacks by drug cartels on U.S. assets, including people and institutions. This definition does not include trafficker-on-trafficker violence. On the other hand, according to officials from DHS’s Office of Intelligence and Analysis, also in 2009, in partnership with other intelligence agencies, DHS developed definitions of spillover violence that include violence in the United States directed by Mexican drug cartels and violence committed by cartel members or their associates against each other. Second, DHS and DOJ components, including those that have a formal definition of spillover crime, either do not collect data for the purposes of tracking spillover crime, or do not maintain such data in an automated format that can be readily retrieved and analyzed. However, officials from Arizona and Rio Grande Valley Border Enforcement Security Task Forces (multiagency teams led by DHS’s ICE to combat cross-border criminal activity) stated that while data are not tracked systematically, teams maintain information on violent activities related to drug and human smuggling they identify during the course of their investigations. 
Teams use this information, which includes home invasions, assaults on individuals during illegal border crossings, and robberies of drug traffickers, to inform their assessments of violent trends along the U.S.-Mexico border. In addition, the Executive Committee for Southwest Border Intelligence and Information Sharing, cochaired by the DHS Office of Intelligence and Analysis and the Texas Department of Public Safety, has been working since April 2012 to propose new terms and definitions for various facets of border-related crime and violence and identify new metrics and indicators to measure such crime. The committee plans to complete this effort in March 2013. CBP reported that while it does not specifically define spillover crime, it does collect and maintain automated, retrievable data on assaults against Border Patrol agents and officers at ports of entry. CBP recognizes that these data do not directly measure the extent of spillover crime but may serve as an indirect indicator of such crime. With respect to Border Patrol agents, CBP maintains data on physical assaults, assaults with a vehicle, assaults with weapons, assaults with rocks, and assaults with instruments other than rocks. CBP data show that the total number of assaults against Border Patrol agents in southwest border sectors in fiscal year 2012 (549) was about 25 percent lower than in fiscal year 2006 (729). Generally, assaults increased from 2006 (729) through 2008 (1,085), decreased slightly from 2008 (1,085) through 2010 (1,049), and decreased sharply from 2010 (1,049) through 2012 (549). (See fig. 2.) In each fiscal year from 2006 through 2011, there were more rockings—defined as rocks thrown at Border Patrol agents, for example by drug or human smugglers, with the intent of threatening or inflicting physical harm—than all other assaults combined in Border Patrol sectors along the southwest border. In 2012, when the number of rockings was at a 7-year low, there were 51 fewer rockings than all other assaults. While the total number of assaults for all sectors combined was lower in 2012 than in 2006, certain southwest border sectors showed an increase in the number of assaults other than rockings over that period. For example, the Tucson sector experienced 91 such assaults in 2012 compared with 76 in 2006, and the Rio Grande Valley sector experienced 77 such assaults compared with 41 in 2006. (Additional analysis of assault trends for fiscal years 2006 through 2012 by Border Patrol sector is presented in appendix VI.) CBP officials cited several factors that could affect the number of assaults against Border Patrol agents, including changes in the level of illegal activity crossing the border, as well as changes in Border Patrol presence along the border. Also, CBP officials reported that from September 2004 through November 2012, 3 out of 22 Border Patrol agent deaths on the southwest border had a nexus to cross-border crime, while the remaining deaths mostly resulted from vehicular accidents or health issues. With respect to officers at ports of entry, CBP maintains data on physical assaults, assaults with a vehicle, and assaults with a weapon. For the 2 fiscal years for which CBP has reliable data, the data show that assaults against officers at southwest border ports of entry declined from 37 assaults in fiscal year 2011 to 26 assaults in fiscal year 2012. 
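The comparisons in this report reduce to two simple calculations: a rate per 100,000 population (used for the county crime rates discussed earlier) and a percent change between two years. The sketch below is illustrative; the function names are ours, the assault counts are the fiscal year 2006 and 2012 figures cited above, and the county population in the rate example is hypothetical.

    # Sketch of the two calculations underlying the report's comparisons:
    # a crime rate per 100,000 residents and a signed percent change
    # between an earlier and a later value.

    def rate_per_100k(offenses: int, population: int) -> float:
        """Offenses per 100,000 residents, as used for county crime rates."""
        return offenses / population * 100_000

    def percent_change(old: float, new: float) -> float:
        """Signed percent change from an earlier value to a later one."""
        return (new - old) / old * 100

    # Assaults against Border Patrol agents in southwest border sectors
    # fell from 729 in fiscal year 2006 to 549 in fiscal year 2012, a
    # decline of about 25 percent, matching the figure cited above.
    print(round(percent_change(729, 549)))  # -25

    # Hypothetical county rate: 500 violent crimes among 120,000
    # residents is a rate of about 417 per 100,000.
    print(round(rate_per_100k(500, 120_000)))  # 417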
In addition, the FBI reported that its Latin American Southwest Border Threat Section—created to focus on issues specifically related to drug cartels—began in fiscal year 2010 to classify incidents of violent crime with links to Mexico, including kidnappings of American citizens and non-terrorism-related hostage taking occurring in or having a substantial nexus to Mexico or Central and South America. According to the FBI, under the new classifications, from October 2009 through September 2012, it investigated and closed five cases involving kidnappings of American citizens and five cases involving non-terrorism-related hostage taking. None of these cases occurred in the United States. FBI officials cautioned that drug cartel-related crimes, such as kidnappings and home invasions, are highly underreported and are not captured in national crime statistics. Only 1 of the 37 state and local law enforcement agencies that we interviewed (the Texas Department of Public Safety) stated that it tracks spillover crime. There are several reasons spillover crime is not more widely measured and tracked across these agencies. First, there is no common definition of spillover crime shared by the border law enforcement communities, and our interviews with border sheriffs and police officials indicated that their opinions on what types of incidents constitute spillover crime vary. For example, the Texas Border Sheriff’s Coalition defined spillover crime as any action on one side of the border that is the result of violence or the threat of violence that causes a reaction on the other side of the border, such as a law enforcement response, or an economic or social impact. The Luna County, New Mexico, sheriff’s office defined spillover crime as occurring when a person is injured by any means by an act along the border that has a direct nexus to Mexican drug-trafficking organizations. The Cochise County, Arizona, sheriff’s office defined spillover crime as any crime associated with cross-border trafficking. Officials from 27 out of 37 state and local law enforcement agencies stated that it would be at least somewhat useful to have a common definition of spillover crime, because it would establish types of activities that constitute spillover crime and allow agencies to track such crime, among other uses. However, officials from 22 of those 27 agencies also stated that accomplishing such a task might be challenging. The reasons cited included differences of opinion among border counties about what incidents represent spillover crime and differences in the missions and priorities of federal, state, and local law enforcement agencies. As discussed previously in this report, the Texas Department of Public Safety and the DHS Office of Intelligence and Analysis are leading an effort by select state and local law enforcement agencies to propose new terms and definitions and identify metrics for various facets of border-related crime and violence by March 2013. Second, no state or local law enforcement agency we interviewed in our review systematically collects data on what might be considered to be spillover crime in a way that can be used to analyze trends. Officials from the Texas Department of Public Safety, the single agency that said it tracks spillover crime, stated that the department collects data on crimes it considers to be related to spillover, such as murders, kidnappings, and shootings related to activities of the Mexican drug cartels. 
The department manages six intelligence centers along the border that, according to officials, rely on a variety of sources, including incident reports from sheriffs’ offices, news reports, and intelligence information from interagency task forces, to assess which incidents can be clearly linked to Mexico and determined to be spillover crime. However, officials stated that spillover incidents reported by the department cannot be used to analyze trends over time because they are not collected systematically and may be incomplete. For example, the incident reports can vary by sheriff’s office in terms of what is reported and how incidents are characterized. Indeed, in our interviews with Texas border sheriffs’ offices, we found that each office may have different ways of capturing information on incidents and may consider different incidents to be related to spillover crime. While the Texas Department of Public Safety is the only state or local law enforcement agency we interviewed that reported collecting data specifically on spillover crime, 6 out of 37 law enforcement agencies we spoke with stated that they collect information on cross-border and drug-related activities, which could be elements of spillover crime. Specifically: Officials from 3 sheriffs’ offices in Arizona and Texas and 1 police department in California stated their agencies collect information on incidents that involve aliens without lawful immigration status to track cross-border activity. However, the officials noted that the data are too general to determine whether a specific crime incident is attributable to spillover from Mexico. Officials from the Laredo, Texas, Police Department stated that since 2003, the department has tracked incidents of drug smuggling, human smuggling, and the types of weapons seized. According to officials, while the data contribute to intelligence necessary to determine whether a crime is cartel-related, the data do not contain sufficient detail to determine whether a specific crime incident is attributable to spillover from Mexico. Officials from the San Diego office of the California Highway Patrol stated that in 2012 their field office began tracking how often they respond to calls from CBP’s Office of Field Operations to investigate incidents at the port of entry. However, the officials noted that the data could not serve as a measure of spillover crime because the incidents may not always result in a crime or an arrest and may not be related to cartel activity or involve Mexican nationals. Officials from 27 out of 37 state and local law enforcement agencies stated that it would be at least somewhat useful to collect spillover crime data. Some of the reasons given were that the data would enhance intelligence, identify trends, and assist the agencies in making decisions about deploying resources. In addition, some officials said that data on spillover crime could help agencies apply for grants. However, the majority also expressed concerns about the burden of collecting additional information. Specifically, officials from 22 out of 37 state and local agencies stated that they have limited technological, financial, and human resources to collect additional data. 
Officials from all of the DHS and DOJ components we interviewed stated that they do not believe that spillover violence has been a significant problem; however, they expressed concerns about the potential for it to occur in the future because drug cartels employ increasingly violent methods against rivals and law enforcement agencies in Mexico. Threat assessments conducted by DHS and DOJ during fiscal years 2006 through 2012 do not indicate that violence from Mexico spilled over the southwest border. For example, the assessments indicate that violent infighting among rival Mexican cartels has remained largely in Mexico, and crimes such as kidnappings and home invasion robberies directed against drug traffickers have remained largely isolated instances in U.S. border communities. However, DHS threat assessments have reported that the threat facing U.S. law enforcement personnel from drug-trafficking organizations has been increasing, as evidenced by more aggressive tactics used by drug-trafficking organizations and smugglers to confront or evade law enforcement. Examples of such tactics include ramming or impeding police vehicles, fleeing at high speeds, and carrying weapons. Officials from the 37 state and local law enforcement agencies and four Chambers of Commerce we interviewed expressed varying concerns regarding the extent to which violent crime from Mexico spills into and potentially affects their border communities. Officials in 31 of the 37 state and local law enforcement agencies stated that they have not observed violent crime from Mexico regularly spilling into their counties; nonetheless, officials from 33 of the 37 agencies said they are at least somewhat concerned about the potential for spillover crime to occur. Officials noted that there is always potential for the high levels of violence in Mexico, such as organized murders and kidnappings for ransom, to spread to their border towns. A senior DEA official in the El Paso, Texas, region testified in March 2009 that the southwest border is the principal arrival zone for most illicit drugs smuggled into the United States and is also the predominant staging area for the drugs’ distribution throughout the country. Further, state and local law enforcement officials expressed concerns about safety threats to law enforcement officers and residents who might encounter drug and human smugglers transiting through border communities, and according to some officials, smugglers are increasingly aggressive in evading capture and likely to be armed. For example, a New Mexico sheriff stated that while there have not been any serious injuries, drug smugglers ram police vehicles to stop a pursuit or speed through residential neighborhoods to avoid capture. In addition, armed cartel members on the Mexican border sometimes engage in gunfights with rival smugglers returning from the United States. According to the sheriff, such activities could result in vehicular accidents or shootings at U.S. law enforcement officers. An Arizona sheriff stated that most of the violence the office sees involves trafficker-on-trafficker violence. For example, a crew of smugglers might steal drug or human cargo from other smugglers to sell it themselves. In addition to the potential for violence during the event, there is also a potential for violence because of retaliation for the stolen goods. 
Officials in a California police department stated that auto thefts have increased, and officials believe that an increasing proportion of these thefts are related to cartel activity, as cars are stolen to transport drug loads to their final destination after crossing the border. Examples of crimes that local officials attributed to spillover from Mexico include the following: A border sheriff in Arizona stated that a rancher was most likely murdered in 2010 by a smuggler. Officials in a Texas police department stated that they investigated a murder in 2010 that they attributed to spillover crime. Investigators in the case determined that the victim was a cartel member and the perpetrator was from a rival cartel in Mexico and had crossed the border to assassinate the rival cartel member. Officials in a California police department stated that a vehicle engaged in a gunfight with the Mexican police in Mexico crossed the border into the United States. A sheriff in a border county in Texas stated that the property crime rates in his county had increased in 2008 because, over a series of months, a group of smugglers from Mexico burglarized houses on their way back to Mexico. They were eventually arrested and prosecuted. According to state and local law enforcement officials, many crimes associated with drug-trafficking threats are unreported, since in many instances, both the perpetrators and the victims may be involved in criminal activity, and the victim may not be in this country legally. Further, the sheriff of a rural county in Texas stated that while statistics indicate that there is little crime in his county, it may be because there are very few law enforcement officials or residents to confront or resist smugglers moving through the county, not because criminal activity is not occurring. Similarly, a sheriff from another rural county in Texas stated that he believes that an enhanced law enforcement presence in the Rio Grande Valley may force illicit activity toward his county because it is less populated than other counties and smugglers are less likely to be confronted there. Moreover, according to some local law enforcement officials, the levels of violent crime in Mexico can have effects on the border communities that are not captured in the crime statistics. The 2011 Arizona Counter Terrorism Information Center threat assessment stated that southwest border violence, such as kidnappings and home invasions carried out by Mexican criminal organizations, and gang-related violence present the most substantial threat to public safety in Arizona. While 33 of 37 law enforcement agencies expressed some concern about spillover crime, officials from 11 of the 37 agencies stated that they do not treat spillover crime differently than they would any other crime. In addition, an Arizona sheriff and a police official from the same county stated that they are not more concerned about spillover crime because their county has not experienced any incidents of kidnappings or extortion, which could be indicators that crime has spilled over from Mexico. In addition to concerns about crime and violence potentially spilling over from Mexico, local law enforcement officials provided a number of examples of how the violence in Mexico affects local communities: U.S. citizens who cross the border daily, such as for school or employment, are vulnerable to extortion or recruitment by cartels. 
For example, police officials in a California border city stated that cartel members in Mexico have come into the United States to recruit gang members, and a sheriff in a county in New Mexico stated that in his county, 400 or more U.S. citizens live in Mexico but attend school in the United States. The students may be recruited or coerced to smuggle drugs into the United States on their way to school. A Texas sheriff stated that a local college was forced to close after bullets from a gunfight originating in Mexico hit the college dorm building. Cartels may target public officials and law enforcement for corruption. Specifically, we were told of cases from local law enforcement in both New Mexico and Arizona in which public officials had been corrupted by a Mexican cartel. Sheriff and police department officials in counties in Texas, Arizona, and New Mexico stated that cartel members may reside with their families in U.S. border communities because they are considered to be safe havens. An officer in one police department stated a concern that there is a potential for violent altercations in the United States between cartel members living in their community who represent rival Mexican cartels. In addition, we spoke with Chamber of Commerce officials in one Arizona and three Texas border counties, and they all stated that they have not seen spillover violence from Mexico, but that violence in Mexico has nonetheless negatively affected businesses in their border communities. Specifically, they said that violence in Mexico has resulted in a perception that border communities are not safe, and this has hindered economic growth and tourism. For example, an official from a Chamber of Commerce in one Texas county stated that local universities and hospitals have difficulty recruiting students and staff. Additionally, Chamber of Commerce officials in all three Texas counties said that violence in Mexico, along with more delays and stricter searches at the border, has impeded Mexican consumers’ ability to cross the border and purchase goods and services from local U.S. businesses. At the federal level, officials from DOJ and DHS and their components stated that they have undertaken a number of efforts, both individually and through interagency partnerships, related to drug smuggling and cartel activity with a focus on the southwest border; however, only one of these efforts specifically targets spillover crime. For example, the FBI created a Latin American Southwest Border Threat Section to focus on issues specifically related to drug cartels. Also, DHS issued Border Violence Protocols in 2006 that set out the steps that CBP and Mexican government personnel are to follow when reporting incidents of border violence, and updated them in 2011 to enhance coordination between the U.S. and Mexican agencies. Moreover, interagency task forces provide a forum for federal, state, and local law enforcement agencies to, among other things, share information and conduct coordinated enforcement activities to combat drug smuggling and cartel activity. Additional details on these and other efforts are contained in appendix VII. DHS developed the Operations Plan for Southwest Border Violence in October 2008 to address the possibility that spillover crime, such as a significant violent and spontaneous event that results in unintended cascading effects spilling over the border, may exceed DHS’s capacity to respond in those locations. 
This contingency plan describes the various roles and responsibilities that DHS components are to undertake to coordinate an agency-wide response to a variety of potential threats of violence that could arise along the southwest border, such as credible threats against U.S. facilities or personnel. Although the plan is to be updated annually, senior officials at DHS’s Office of Operations Coordination and Planning (the office responsible for coordinating and facilitating development of the plan among the DHS components) stated that the plan has not been revised or updated in the 4 years since it was finalized. According to these officials, DHS components have undertaken related planning efforts, such as establishing local-level coordination mechanisms to increase coordination and information sharing along the southwest border. In addition, officials at DHS’s Office of Operations Coordination and Planning stated that they do not plan to update the Operations Plan for Southwest Border Violence at this time because DHS has shifted to a more strategic approach to planning that will provide the framework for all of DHS’s planning efforts. The officials could not provide additional details on what the new strategic approach would entail because it is still in the early stages of development. To complete its framework, DHS is awaiting approval of planning guidance that it submitted to the President in June 2012. DHS developed the planning guidance pursuant to Presidential Policy Directive 8, a directive that called for DHS to develop an integrated set of guidance, programs, and processes to enable the nation to meet its national preparedness goals. DHS’s Office of Operations Coordination and Planning intends to develop DHS’s strategic framework in accordance with the new planning guidance and expects to complete the framework by October 2014. The officials said they will then decide whether to update the Operations Plan for Southwest Border Violence so it follows the new planning guidance or replace the operations plan with other plans developed under the strategic framework. At the state and local levels, officials from all law enforcement agencies that we spoke with stated that their agencies had undertaken some efforts, either individually or in partnership with other agencies, to combat criminal activities often associated with spillover crime, such as drug and human smuggling. Generally, these efforts aim to increase state and local law enforcement agencies’ capacity to combat criminal activities associated with spillover crime, such as forming units that focus on such crime, participating in federal grant programs, coordinating enforcement activities, and facilitating information sharing. Specific examples of state and local law enforcement efforts are contained in appendix VII. We provided a draft of our report to DHS, DOJ, and the Office of National Drug Control Policy for their review and comment. DHS provided written comments, which are reprinted in full in appendix VIII. In its comments, DHS stated that it was pleased with our discussion of the initiatives that law enforcement agencies have undertaken to target border-related crime, including a DHS contingency plan for responding to a significant southwest border violence escalation and interagency task forces that combat drug smuggling and cartel activity. 
In addition, DHS reiterated its commitment to working with many partners across the federal government, public and private sectors, and internationally, to mitigate spillover crime along the southwest border. DOJ and the Office of National Drug Control Policy did not provide official written comments. All three agencies provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Attorney General, the Director of the Office of National Drug Control Policy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix IX. There are 24 U.S. counties that share a border with Mexico. These counties are listed below by state, in alphabetical order. This report addresses the following questions: (1) What information do reported crime rates in southwest border communities provide on spillover crime and what do they show? (2) What efforts, if any, have federal, state, and select local law enforcement agencies made to track spillover crime along the southwest border? (3) What concerns, if any, do these agencies have about spillover crime? (4) What steps, if any, have these agencies taken to address spillover crime? To address the first question, we analyzed Summary Reporting System (SRS) data from the Federal Bureau of Investigation’s (FBI) Uniform Crime Reporting (UCR) Program—the government’s centralized repository for crime data—from January 2004 through December 2011 for the four southwest border states (Arizona, California, New Mexico, and Texas). We selected January 2004 as the initial date because it provided us with data for more than 2 years prior to December 2006, when Mexican President Felipe Calderón took office and began a major military offensive against Mexican drug cartels. We also analyzed UCR’s National Incident-Based Reporting System (NIBRS) data, available from January 2007 through December 2010, for the single southwest border law enforcement agency reporting such data—the sheriff’s office in Yuma County, Arizona. To assess the reliability of the UCR data, we conducted analyses to test for irregularities in the data, reviewed FBI documentation on how the data can and cannot be used and on the FBI’s procedures for ensuring UCR data quality, and interviewed FBI officials knowledgeable about the data. On the basis of this assessment, we excluded some counties from our analysis because they did not report complete crime data to the FBI. We concluded that the data for the remaining counties were sufficiently reliable for the purposes of our review. In addition, we reviewed crime reports and documentation on crime databases published by the FBI, state agencies, and local law enforcement agencies in the four southwest border states. To further determine the types of data that are systematically collected, how these data are recorded and used in southwest border counties, and what information these data provide on spillover crime, we reviewed guidance documents and research reports developed by federal agencies, such as the Department of Justice (DOJ) and Congressional Research Service. 
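The report does not spell out the exact completeness test applied to the UCR data before counties were excluded, but a screen of the kind described might look like the sketch below. The criterion used here (every agency in a county reporting all 12 months of a year) and the record layout are assumptions for illustration, not GAO’s documented method.

    # Hedged sketch of a data-completeness screen of the kind described
    # above, which excluded counties that did not report complete crime
    # data. The 12-months-reported criterion and the record fields are
    # assumptions, not GAO's actual test.
    from collections import defaultdict

    def complete_counties(records: list[dict]) -> set:
        """Return counties in which every reporting agency submitted
        data for all 12 months of the year."""
        months_by_agency = defaultdict(set)
        county_of = {}
        for r in records:
            months_by_agency[r["agency"]].add(r["month"])
            county_of[r["agency"]] = r["county"]
        incomplete = {
            county_of[agency]
            for agency, months in months_by_agency.items()
            if len(months) < 12
        }
        return set(county_of.values()) - incomplete

    # County A's agency reports all 12 months; county B's reports only 6.
    records = (
        [{"county": "A", "agency": "A-1", "month": m} for m in range(1, 13)]
        + [{"county": "B", "agency": "B-1", "month": m} for m in range(1, 7)]
    )
    print(complete_counties(records))  # {'A'}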
Also, we interviewed knowledgeable officials from a total of 37 state and local agencies on the southwest border that are responsible for investigating and tracking crime occurring in their jurisdictions. At the state level, we conducted interviews with officials from the California Highway Patrol and the Arizona, New Mexico, and Texas Departments of Public Safety. At the local level, we interviewed officials representing 21 of 24 sheriffs’ offices in southwest border counties (4 in Arizona, 2 in California, 3 in New Mexico, and 12 in Texas), and 12 large municipal police departments in these border counties (4 in Arizona, 3 in California, 1 in New Mexico, and 4 in Texas). We selected departments from each of the four states, and we selected large departments because, according to our review of the UCR SRS data, large departments in general had more reported crimes than smaller departments. A list of the 24 southwest border counties can be found in appendix I. Moreover, to obtain information on spillover crime and efforts by law enforcement agencies along the U.S.-Mexico border to combat such crime, we conducted site visits to five southwest border counties in Arizona and Texas. These visits were to (1) Tucson, Pima County, Arizona; (2) Nogales, Santa Cruz County, Arizona; (3) Brownsville, Cameron County, Texas; (4) McAllen, Hidalgo County, Texas; and (5) Laredo, Webb County, Texas. We selected these locations because they represent diverse rural and urban environments and have a range of border geographic features, such as rivers, mountains, agricultural deltas, and deserts, that may pose different challenges for crossing the U.S. border from Mexico. These factors might have an effect on the levels and types of crime occurring in southwest border communities. As part of our visits, we met with federal officials, such as U.S. Customs and Border Protection (CBP) agents and officers operating between and at the ports of entry along the southwest border, state law enforcement officials from the Arizona Department of Public Safety, and local law enforcement officials, such as sheriffs in Santa Cruz and Hidalgo Counties and officials in the Tucson and Nogales Police Departments. The information we obtained from these visits is not generalizable to all southwest border counties. However, the information provides valuable insights into the types of crime information that are available to law enforcement agencies and perspectives on crime occurring in southwest border communities. To address the second question, we collected information, such as crime reports and documentation on categories of data collected, from and conducted interviews with state and local law enforcement agencies identified above, as well as federal agencies and interagency task forces that have responsibilities for combating drug cartel–related activities along the southwest border. Federal agencies include Department of Homeland Security (DHS) and DOJ headquarters and field offices, including DHS’s CBP, U.S. Immigration and Customs Enforcement (ICE), Office of Policy, Office of Operations Coordination and Planning, and intelligence offices, such as the Office of Intelligence and Analysis; as well as DOJ’s FBI; Drug Enforcement Administration (DEA); and Bureau of Alcohol, Tobacco, Firearms and Explosives. 
Interagency task forces—that is, partnerships of federal, state, and local law enforcement counterparts—include Arizona’s High Intensity Drug Trafficking Area, El Paso Intelligence Center, and Border Enforcement Security Task Forces in Arizona and Texas. State and local agencies include those identified above, as well as Arizona’s Alliance for Countering Transnational Threats, the Arizona Counter Terrorism Information Center, and members of the Texas Border Sheriff’s Coalition. We asked agencies about their efforts to track spillover crime, any challenges they encountered in doing so, and whether they collected or tracked other data they considered related to spillover crime and violence on the southwest border. Specifically, we analyzed CBP data on the number of assaults on Border Patrol agents in southwest border patrol sectors from fiscal years 2006 through 2012, and the number of assaults on Office of Field Operations personnel at southwest border ports of entry for fiscal years 2011 and 2012, the date ranges for which these data were available. To assess the reliability of the CBP data on assaults and other crimes against agents and personnel, we reviewed relevant documentation, such as procedures for collecting data consistently, and interviewed CBP staff responsible for the data. On the basis of our efforts, we determined the data to be sufficiently reliable for the purposes of our report. To address the third question, we analyzed threat assessments by federal agencies, covering the time period from 2004 through 2012, to determine the extent to which these agencies identified Mexican drug cartel–related threats facing southwest border communities and law enforcement agents in those communities. Specifically, we analyzed 4 DHS Office of Intelligence and Analysis assessments that focused on violence along the entire southwest border covering the time period from 2006 through 2011. In addition, we analyzed a total of 12 Border Patrol threat assessments and Operational Requirements-Based Budgeting Process documents containing threat information for the Laredo, Tucson, and Rio Grande Valley sectors: 1 assessment for each sector in sample fiscal years 2004, 2007, 2009, and 2012, to discern any trends in crime and violence along the southwest border over time. We selected the three Border Patrol sectors to correspond to the locations of our site visits. We selected these particular years because they approximate release dates of the DHS Intelligence and Analysis assessments to help identify potential similarities or differences in trends. To obtain additional context on potential threats facing southwest border communities, we reviewed several other assessments, such as a National Drug Intelligence Center assessment (2011) and an Arizona Counter Terrorism Information Center assessment (2011), and other documentation, such as congressional reports and testimonies. To obtain perspectives on a range of concerns regarding the existence and potential effects of spillover crime, in addition to interviews with the officials from 37 state and local law enforcement agencies and federal officials identified above, we interviewed officials from Chambers of Commerce in four of the five counties we visited—Cameron, Hidalgo, Santa Cruz, and Webb Counties. 
While the results of these interviews are not generalizable to all local businesses or Chambers of Commerce on the southwest border, they provide perspectives about the effects that violence in Mexico might have had on the businesses in their communities. To address the fourth question, we reviewed and analyzed information, such as fact sheets and contingency plans, from and conducted interviews with all of the federal, state, and local agencies and task forces previously discussed. We conducted this performance audit from January 2012 through February 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III provides information about differences between the UCR SRS and NIBRS. As shown in table 2, the SRS collects aggregate offense information for Part I offenses, and arrest information for Part I and Part II offenses. NIBRS collects offense information on each occurrence of crimes listed under Group A offenses and arrest information for Group A and Group B offenses. Table 3 summarizes the main differences between the two crime data systems. The tables in appendix IV carry two recurring notes: the FBI provided GAO with the 2011 SRS data when it publicly released these data in November 2012, and according to the FBI, law enforcement agencies were able to revise these data until the end of calendar year 2012; for certain counties, local law enforcement agencies did not submit complete data to the FBI. We analyzed UCR SRS crime data in the four southwest border states: Arizona, California, New Mexico, and Texas. 
This appendix presents the results of our analyses of SRS crime data broken out by violent and property crimes for southwest border counties, separately and combined within each state, for the period 2004 through 2011. We also present the results of analyses of violent and property crime data for nonborder counties, combined within each state, and compare the nonborder county crime rates per 100,000 population with border county crime rates. We also analyzed available NIBRS data, covering the period 2007 through 2010, for the Yuma County, Arizona, sheriff’s office. The office is the single southwest border law enforcement agency that collects NIBRS data. All border and nonborder counties. We analyzed SRS violent crime data for all 4 border counties in Arizona, both border counties in California, all 3 border counties in New Mexico, and all 15 border counties in Texas. We also analyzed these data for all 11 nonborder counties in Arizona, all 56 nonborder counties in California, 29 of 30 nonborder counties in New Mexico, and all 239 nonborder counties in Texas. The violent crime rate for the New Mexico border counties was lower in 2011 than in 2005, but the rate in New Mexico’s border counties decreased less than in its nonborder counties. For the border counties in each of the other states, we found that the violent crime rate was lower in 2011 than in 2004, and the rate in the border counties decreased more than in the nonborder counties. Specifically, as shown in figure 3: The violent crime rate in Arizona’s border counties was higher than in Arizona’s nonborder counties in each year from 2004 through 2011. However, the crime rate decreased in both, with the rate in border counties being 33 percent lower in 2011 than in 2004, and the rate in nonborder counties being 22 percent lower. The violent crime rate in California’s border counties was lower than in California’s nonborder counties in each year from 2004 through 2011. For border counties, the rate was 26 percent lower in 2011 than in 2004. The violent crime rate in California’s nonborder counties generally decreased and was 25 percent lower in 2011 than in 2004. The violent crime rate in New Mexico’s border counties was lower than in New Mexico’s nonborder counties in each year from 2005 through 2011. The decrease in crime rate in border counties (8 percent) was smaller than the decrease in nonborder counties (19 percent). The violent crime rate in Texas’s border counties was lower than in Texas’s nonborder counties in each year from 2004 to 2011. For border counties, the rate was 30 percent lower in 2011 than in 2004, while the rate for nonborder counties was 24 percent lower. Large border counties. We analyzed SRS violent crime data for all 13 large southwest border counties—that is, counties with populations of 25,000 or more—that submitted sufficiently complete data to the FBI to enable us to calculate the violent crime rate. Of these, in 10 of the 12 large border counties in Arizona, California, and Texas, the rate was lower in 2011 than in 2004. In 2 large border counties in Texas, the violent crime rate increased (see fig. 1). Specifically, (1) in Maverick County, Texas, the violent crime rate increased by 6 percent; and (2) in Val Verde County, Texas, the violent crime rate increased by 41 percent, largely because of an increase in aggravated assaults. 
Although lower in 2011 than in 2004, the violent crime rate in Cochise County, Arizona, increased 20 percent from 2010 to 2011, principally because of an increase in aggravated assaults. The violent crime rate in Dona Ana County, New Mexico, was lower in 2011 than in 2005. However, the rate increased 5 percent between 2010 and 2011, largely because of increases in robberies and aggravated assaults. Comparing UCR SRS and NIBRS data for the Yuma County sheriff’s office—the single southwest border law enforcement agency that reports NIBRS data—we found comparable decreases in violent crimes. Specifically, we found that the total number of violent crimes reported through NIBRS was 32 percent lower in 2010 than in 2007, when the office began reporting NIBRS data. The number of violent crimes reported in the SRS format was 33 percent lower in 2010 than in 2007. Overall, the total number of violent crime offenses reported by the Yuma County sheriff’s office through NIBRS was about 1 percent higher than the number reported through the SRS. Small border counties. The southwest border has 9 small counties—that is, counties with populations of less than 25,000. The average combined population of these 9 counties from 2004 through 2011 was about 46,000. Our analysis of SRS violent crime data for 7 of the 9 counties with sufficiently complete data shows that the total number of reported violent crimes in these small counties decreased by 55 percent, that is, from a total of 93 violent crimes in 2004 to 42 in 2011 (see fig. 4). All border and nonborder counties. We analyzed SRS property crime data for both border counties in California, all 3 border counties in New Mexico, and all 15 border counties in Texas. We also analyzed the data for the nonborder counties in California, New Mexico, and Texas. For the border counties in California and Texas, we found that the reported property crime rate in 2011 was lower than in 2004, and the rate in the border counties decreased more than in the nonborder counties. The rate for New Mexico border counties was lower in 2011 than in 2005, but the rate in New Mexico’s border counties decreased less than in its nonborder counties. Specifically, as shown in figure 5: Each year from 2009 through 2011, the property crime rate in California’s border counties was lower than the rate in California’s nonborder counties; and each year from 2004 to 2008, the rate in border and nonborder counties was similar. For border counties, the rate was 35 percent lower in 2011 than in 2004. The property crime rate in California’s nonborder counties decreased each year and was 23 percent lower in 2011 than in 2004. The property crime rate in New Mexico’s border counties was lower than in New Mexico’s nonborder counties in each year from 2005 to 2011. The decrease in crime rate in border counties (7 percent) was smaller than the decrease in nonborder counties (18 percent). The property crime rate in Texas’s border counties was similar to the rate in nonborder counties in nearly all years. However, the crime rate decreased in both, with the rate in border counties being 28 percent lower in 2011 than in 2004, and the rate in nonborder counties being 22 percent lower. Large border counties. We analyzed SRS property crime data for the 12 large southwest border counties that submitted sufficiently complete data to the FBI to enable us to calculate the reported property crime rate. 
Of these, the SRS data showed that the property crime rate in all 11 large border counties in Arizona, California, and Texas was lower in 2011 than in 2004, although the rate varied over the years in some counties, such as Cochise County, Arizona, and Val Verde County, Texas (see fig. 1). The reported property crime rate in Dona Ana County, New Mexico, was lower in 2011 than in 2005.

Comparing UCR SRS and NIBRS data for the Yuma County sheriff's office, we found that both showed a decrease in property crimes. Specifically, the total number of property crimes reported through NIBRS was 27 percent lower in 2010 than in 2007, when the office began reporting NIBRS data. The number of property crimes reported in the SRS format was 33 percent lower in 2010 than in 2007. Overall, the total number of property crime offenses reported through NIBRS was about 24 percent higher than the number reported in the SRS format.

Small border counties. Our analysis of SRS data for 7 of 9 counties with sufficiently complete data shows that the total number of reported property crimes in these small counties decreased by about 29 percent, from a total of 701 crimes in 2004 to 497 in 2011 (see fig. 6). We excluded Hidalgo County, New Mexico, and Presidio County, Texas, because the SRS property crime data that local law enforcement agencies submitted to the FBI were incomplete. The average combined total population for the 7 counties from 2004 through 2011 was about 36,000.

Analysis of assault trends for fiscal years 2006 through 2012 by Border Patrol sector is presented in figure 7, and source data for the analysis are presented in table 4. U.S. Customs and Border Protection's Border Patrol has divided geographic responsibility for border security operations along the southwest border among nine sectors, each of which has a headquarters with management personnel. Select efforts by federal, state, and local law enforcement agencies to address crime along the southwest border are presented in tables 5 and 6.

Appendix IX: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Rebecca Gambler, Director; Cindy Ayers, Assistant Director; Evi Rezmovic, Assistant Director; David Alexander; Hiwotte Amare; Eric Hauswirth; Margaret McKenna; Erin O'Brien; Yanina G. Samuels; and Julia Vieweg made significant contributions to the work.

Drug-related homicides have dramatically increased in recent years in Mexico along the nearly 2,000-mile border it shares with the United States. U.S. federal, state, and local officials have stated that the prospect of crime, including violence, spilling over from Mexico into the southwestern United States is a concern. GAO was asked to review crime rates and assess information on spillover crime along the border. Specifically, this report addresses: (1) What information do reported crime rates in southwest border communities provide on spillover crime, and what do they show? (2) What efforts, if any, have federal, state, and select local law enforcement agencies made to track spillover crime along the southwest border? (3) What concerns, if any, do these agencies have about spillover crime? (4) What steps, if any, have these agencies taken to address spillover crime?
GAO analyzed crime data for all 24 southwest border counties for 2004 through 2011, as well as federal documentation, such as threat assessments and the Department of Homeland Security's (DHS) plans for addressing violence along the southwest border. GAO interviewed officials from DHS, the Department of Justice (DOJ), and their components. GAO also interviewed officials from 37 state and local law enforcement agencies responsible for investigating and tracking crime in the border counties in the four southwest border states (Arizona, California, New Mexico, and Texas). While the results of the interviews are not generalizable, they provided insights. GAO is not making any recommendations. DHS provided comments, which highlighted border-related crime initiatives recognized by GAO. The Federal Bureau of Investigation's (FBI) Uniform Crime Reporting (UCR) Program, the government's centralized repository for crime data, provides the only available standardized way to track crime levels in border counties over time. However, UCR data lack information on whether reported offenses are attributable to spillover crime and have other limitations, such as underreporting to police. Also, UCR data cannot be used to identify links with crimes often associated with spillover from Mexico, such as cartel-related drug trafficking. With these limitations in mind, GAO's analysis of data for southwest border counties with sufficiently complete data shows that, generally, both violent and property crimes were lower in 2011 than in 2004. For example, the violent crime rate in three states' border counties was at least 26 percent lower in 2011 than in 2004, and in the fourth state the rate was 8 percent lower in 2011 than in 2005. Law enforcement agencies have made few efforts to track spillover crime. No common federal government definition of such crime exists, and DHS and DOJ components, including those with a definition, either do not collect data to track spillover crime or do not maintain such data in a way that can be readily retrieved and analyzed. However, several components collect violent incident data that could serve as indirect indicators of spillover crime. For example, GAO's analysis of U.S. Customs and Border Protection (CBP) data shows that, generally, assaults on agents between southwest border ports of entry were about 25 percent lower in 2012 than in 2006. State and local law enforcement agencies, except for one state agency, do not track what might be considered spillover crime because they lack a common definition and do not systematically collect these crime data in a way that can be used to analyze trends. Officials from 22 of 37 state and local agencies told GAO that they have limited resources to collect additional data. Since April 2012, DHS and the Texas Department of Public Safety have co-led an effort to propose definitions and metrics for border-related crime by March 2013. Law enforcement agencies have varying concerns regarding the extent to which violent crime from Mexico spills into southwest border communities. While DHS and DOJ threat assessments indicate that violent infighting between drug cartels has remained largely in Mexico, DHS assessments also show that aggressive tactics used by traffickers to evade capture demonstrate an increasing threat to U.S. law enforcement.
Also, officials in 31 of the 37 state and local agencies stated that they have not observed violent crime from Mexico regularly spilling into their counties; nonetheless, officials in 33 of the 37 agencies were at least somewhat concerned, for example, for the safety of their personnel or residents. Law enforcement agencies have undertaken initiatives to target border-related crime, including one effort to address violent crime spilling over from Mexico. For example, in October 2008, DHS developed a contingency plan for the possibility that a significant escalation in southwest border violence could exceed the ability of DHS assets to respond. In addition, officials from all state and local law enforcement agencies that GAO spoke with said their agencies had undertaken some efforts, either individually or in partnership with others, to combat criminal activities often associated with spillover crime, such as drug and human smuggling.
From an insurance standpoint, measuring and predicting terrorism risk is challenging. According to standard insurance theory, four major principles contribute to the ability of insurers to estimate and cover future losses: the law of large numbers, measurability, fortuity, and the size of the potential losses. When determining whether to offer coverage for a particular risk and at what price, insurers evaluate whether sufficient information exists about each of these principles. To underwrite insurance—that is, decide whether to offer coverage and what price to charge—insurers consider both the likelihood of an event (frequency) and the amount of damage it would cause (severity). As we have reported, measuring and predicting losses associated with terrorism risks can be particularly challenging for reasons including lack of experience with similar attacks, difficulty in predicting terrorists' intentions, and the potentially catastrophic losses that could result from terrorist attacks. Increasingly, insurers use sophisticated modeling tools to assess terrorism risk, but there have been very few terrorist attacks, so little data exist on which to base estimates of future losses, in terms of frequency, severity, or both. When Congress passed TRIA in 2002, its purposes included making terrorism insurance widely available and affordable for businesses. As required by TRIA, insurers must make terrorism coverage available to commercial policyholders, although commercial policyholders are not required to buy it. As shown in table 1, many lines of commercial property and casualty insurance are eligible for TRIA, but the legislation specifically excludes certain lines. For example, the law excludes personal property and casualty insurance, as well as health and life insurance. TRIA requires an insurer to make terrorism coverage available to its policyholders for insured losses on terms, amounts, and other coverage limitations that do not differ materially from those applicable to losses arising from events other than acts of terrorism. For example, an insurer offering $100 million in commercial property coverage must offer $100 million in coverage for property damage from a certified terrorist attack. Insurers can charge a separate premium to cover terrorism risk, although some include the price in their base rates for all-risk policies. Under the current program, Treasury would reimburse insurers for a share of losses associated with certain certified acts of foreign or domestic terrorism. A single terrorist act must cause at least $5 million in insured losses to be certified; separately, the aggregate industry insured loss from certified acts must be at least $100 million for government coverage to begin (program trigger). If an event were certified as an act of terrorism and insured losses exceeded the program trigger, an individual insurer that experienced losses would pay a deductible of 20 percent of its previous year's direct earned premiums in TRIA-eligible lines (insurer deductible). After the insurer pays its deductible, the federal government would reimburse the insurer for 85 percent of its losses, and the insurer would be responsible for the remaining 15 percent (coshare). Annual coverage for losses is limited: aggregate industry insured losses in excess of $100 billion are not covered by private insurers or the federal government (cap). See figure 1 for an illustration of these program parameters.
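To make these parameters concrete for a single insurer, the following is a minimal sketch of the loss-sharing calculation just described. It assumes the $100 million aggregate program trigger has been met, ignores the $100 billion cap and any recoupment, and uses hypothetical premium and loss figures rather than data from any actual insurer.

```python
# Simplified sketch of TRIA loss sharing for a single insurer, assuming the
# $100 million aggregate program trigger has been met. Premium and loss
# figures are hypothetical; the $100 billion cap and recoupment are ignored.

DEDUCTIBLE_RATE = 0.20  # insurer deductible: 20% of prior-year direct
                        # earned premiums in TRIA-eligible lines
FEDERAL_SHARE = 0.85    # federal share of losses above the deductible

def federal_reimbursement(insured_losses, prior_year_premiums):
    """Federal reimbursement for one insurer's certified losses."""
    deductible = DEDUCTIBLE_RATE * prior_year_premiums
    above_deductible = max(0.0, insured_losses - deductible)
    return FEDERAL_SHARE * above_deductible

# Example: $2 billion in prior-year TRIA-eligible premiums yields a $400
# million deductible; of $1 billion in losses, $600 million sits above the
# deductible, and the government reimburses 85 percent of that amount.
print(f"${federal_reimbursement(1_000_000_000, 2_000_000_000):,.0f}")
# -> $510,000,000; the insurer retains its deductible plus the 15% coshare
```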
The amount of federal loss sharing varies with the amount of industry insured losses, as the following shows:

- In general, for an event with insured losses of less than $100 million, private industry covers the entire loss and the federal government faces no responsibility to cover losses.
- In general, for an event with insured losses from $100 million to $100 billion, private industry and the federal government initially share the losses, but TRIA includes a provision for mandatory recoupment of the federal share of losses when private industry's uncompensated insured losses are less than $27.5 billion. Treasury must impose policyholder premium surcharges on all property and casualty insurance policies until total industry payments reach the mandatory recoupment amount or the government is fully repaid, whichever comes first. The mandatory recoupment amount is the difference between $27.5 billion and the aggregate amount of insurers' uncompensated insured losses (a sketch of this calculation appears below). This industry aggregate retention amount was set at $27.5 billion for the year 2007 in the 2005 reauthorization and extended as applicable to all future years under the program by the 2007 reauthorization.

When the amount of federal assistance exceeds any mandatory recoupment amount, TRIA also allows for discretionary recoupment, if Treasury determines additional amounts should be recouped. Under TRIA, any discretionary recoupment would be based on the ultimate cost to taxpayers, the economic conditions in the marketplace, the affordability of insurance for small and medium-sized businesses, and any other factors Treasury considered appropriate. As initially enacted, one of the purposes of TRIA was to provide a transitional period in which the insurance market could determine how to model and price terrorism risk. Congress reauthorized TRIA twice, in 2005 and 2007. As shown in table 2, the reauthorizations changed several aspects of the terrorism risk insurance program, including the insurer deductible, lines of insurance covered, and types of terrorist acts covered (added domestic terrorism). TRIA covers insured losses resulting from an act of terrorism, which is defined, in part, as a "violent act or an act that is dangerous" to human life, property, or infrastructure. The act is silent about losses from attacks with nuclear, biological, chemical, or radiological (NBCR) weapons or from cyber terrorism. TRIA authorizes Treasury to administer the Terrorism Insurance Program. Specifically, the Terrorism Risk Insurance Program Office within Treasury's Office of Domestic Finance administers the program and manages day-to-day operations, with oversight and assistance from the Federal Insurance Office, according to Treasury officials. In 2004, Treasury issued regulations to implement TRIA's procedures for filing claims for payment of the federal share of compensation for insured losses. Upon certification of an act of terrorism, Treasury will activate a web-based facility for receiving claims from insurers and responding to insurers that seek assistance. According to Treasury, five staff currently work directly on the program, and the program is assisted by others in Treasury. Staff responsibilities include managing contractors in place to process claims in the event of an attack and making any necessary changes to program regulations. According to Treasury, spending for this program has generally declined since 2003 (see fig. 2). TRIA also mandates various studies and data compilation efforts.
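Returning to the mandatory recoupment provision described above, the following is a minimal sketch of that calculation. The loss figures are hypothetical illustrations, and the sketch reflects only the rule as summarized in this report: surcharges continue until the recoupment amount is collected or the government is fully repaid, whichever comes first.

```python
# Sketch of the mandatory recoupment calculation described above. Loss
# figures are hypothetical illustrations, not projections.

AGGREGATE_RETENTION = 27_500_000_000  # $27.5 billion industry retention

def mandatory_recoupment(uncompensated_industry_losses, federal_share_paid):
    """Amount Treasury must recoup through policyholder surcharges."""
    if uncompensated_industry_losses >= AGGREGATE_RETENTION:
        return 0.0  # industry already retained at least $27.5 billion
    shortfall = AGGREGATE_RETENTION - uncompensated_industry_losses
    # Surcharges stop once the shortfall is collected or the government
    # is fully repaid, whichever comes first.
    return min(shortfall, federal_share_paid)

# Example: industry retains $20 billion in uncompensated insured losses
# and the government pays $10 billion; Treasury must recoup the $7.5
# billion difference between the retention amount and industry losses.
print(f"${mandatory_recoupment(20e9, 10e9):,.0f}")  # $7,500,000,000
```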
Among these mandates, TRIA requires GAO, Treasury, and the President's Working Group on Financial Markets (PWG) to complete various studies related to terrorism risk insurance. We have completed and submitted to Congress several mandated studies on TRIA. Treasury completed an assessment of the program and submitted a report to Congress in 2005. PWG must periodically report on terrorism market conditions (in 2006, 2010, and 2013). TRIA also requires Treasury to annually compile information on the terrorism risk insurance premium rates of insurers for the preceding year. In the event that the information is not otherwise available to Treasury, Treasury may require each insurer to submit it to NAIC. We discuss data compilation requirements in more detail later in this report. Insurance in the United States is primarily regulated at the state level. The insurance regulators of the 50 states, the District of Columbia, and the U.S. territories created and govern NAIC, which is the standard-setting and regulatory support organization for the U.S. insurance industry. Through NAIC, state insurance regulators establish standards and best practices, conduct peer review, and coordinate their regulatory oversight. According to NAIC, insurers set the rates for terrorism coverage, and state law requires insurers to file those rates (and to file insurance forms) with state regulators. Generally, state insurance regulators receive information from insurers regarding the products the insurers plan to sell in the state. States vary with regard to the timing and depth of their reviews of insurers' rates and contractual language. Many state laws have filing or review exemptions, or both, that apply to large commercial policyholders. For exempt commercial policyholders, state insurance regulators perform neither rate nor form reviews because it is presumed that these large businesses have a better understanding of insurance contracts and pricing than the average personal-lines consumer and, as such, are able to effectively negotiate price and contract terms with insurers. Comprehensive data on the terrorism risk insurance market are not readily available. In general, individual insurers maintain data on the terrorism coverage they underwrite. While Treasury has obtained some market data from industry sources, those data are limited because they do not include information from the entire industry. Federal internal control standards state that agencies should identify and obtain relevant and needed data to be able to meet program goals. TRIA requires Treasury to annually compile information on the terrorism risk insurance premium rates of insurers and, if the information is not available, permits Treasury to require insurers to submit that information to NAIC, but Treasury has taken neither action. Without comprehensive market data, including the number of insurers in the market and whether differences exist in pricing or take-up rates, Treasury may not have a full understanding of the terrorism risk insurance market and may be unable to assess whether TRIA's program goals of helping to ensure the continued widespread availability and affordability of terrorism risk insurance and addressing market disruptions are being met. Furthermore, Treasury has conducted limited analysis of the federal government's fiscal exposure under different scenarios of potential terrorist attacks. Analyzing such risks is a federal internal control standard and an insurance industry best practice.
Without additional analyses, Treasury does not have enough information to understand the potential magnitude of federal fiscal exposure in the event of a certified terrorist attack and will not be in a position to provide Congress with analysis to inform decisions about reauthorization, including any changes that would limit exposure. Comprehensive market data on terrorism insurance, including premiums and the number of insurers underwriting terrorism risk, are not readily available. In general, individual insurers maintain data on the terrorism coverage they underwrite, including data on the percentage of policies with terrorism coverage and premiums for such coverage. However, these data are proprietary and are not publicly available. Further, NAIC manages an electronic system that many insurance companies use to file premium rates and policy language with state regulators for approval, but NAIC officials stated that they generally cannot extract terrorism coverage information from it, such as the number of insurers providing such coverage or the prices charged. Moreover, using the property and casualty information NAIC collects likely would overestimate any numbers on terrorism insurance because not all policies provide terrorism coverage. NAIC officials told us that obtaining a complete view of the terrorism insurance market would require reviewing insurers' filings with each state. Many insurance industry organizations are important sources of data on the terrorism insurance market, but data from these sources are also limited in some respects. For example, although A.M. Best, an insurance-rating and information organization, and the Insurance Services Office, Inc. (ISO), an advisory organization and data and analytics provider, collect premium information from their clients, these data are not publicly available and may not be representative of the entire industry. Insurance brokers also compile market data, such as pricing, take-up rates, and coverage by industry sector, from their clients. Similarly, some of these data are not publicly available and are not representative of the entire market. Treasury officials told us they periodically consulted with industry participants to obtain information about the terrorism insurance market, such as take-up rates, pricing, and capacity, but noted that the industry relies on the two largest insurance brokers for this information. However, as discussed above, the information that brokers compile is not comprehensive because it does not include detailed data on terrorism coverage from the insurance industry as a whole. In addition, PWG solicited comments from the insurance industry on the availability and affordability of terrorism risk insurance for three studies mandated by TRIA. For example, according to Treasury officials, 29 entities submitted comments for the 2014 PWG report. In addition, Treasury conducted numerous interviews with industry participants. However, the comments PWG solicited and received from industry participants for these studies generally were anecdotal representations from the organizations that chose to submit information, rather than comprehensive data representing the entire industry. Treasury officials acknowledged that their information on the terrorism insurance market could be supplemented with more detailed information.
Furthermore, TRIA requires Treasury to annually compile information on the terrorism insurance premium rates of insurers and, if the information is not available, permits Treasury to require insurers to submit that information to NAIC, but Treasury has taken neither action. According to Treasury officials, Treasury has not compiled the information on an annual basis and has not collected market data on terrorism risk insurance directly from insurers. Treasury officials told us this was because the agency has periodically collected market data on terrorism risk insurance for the three PWG reports from industry sources, which has been sufficient for purposes of responding to TRIA's reporting requirements. If the premium rate information were not otherwise available, TRIA states that Treasury may require each insurer to submit the information to NAIC, which then would make the information available to Treasury. Treasury officials noted that the premium rate data TRIA requires Treasury to compile may not be the only helpful data points for understanding the terrorism insurance market and said that additional baseline data would be crucial for a more detailed analysis. The officials said they may seek additional market data and evaluate whether the sources previously used were adequate. While TRIA states that Treasury has the authorities necessary to carry out the terrorism risk insurance program, including prescribing regulations and procedures to effectively administer and implement it, whether these authorities allow Treasury to collect comprehensive market data directly from insurers is unclear. However, federal internal control standards state that agencies should identify and obtain relevant and needed data to be able to meet program goals. As stated earlier, the purposes of the terrorism insurance program include helping to ensure the continued widespread availability and affordability of property and casualty insurance for terrorism risk and addressing market disruptions. Without comprehensive, nationwide market data, including the number of insurers in the market and whether differences exist in pricing or take-up rates for companies of different sizes, industries, or geographic locations, Treasury might not have a full understanding of the terrorism risk insurance market, including how changing program parameters may affect the market. Treasury may also be unable to assess whether the program is meeting its goals of helping to ensure the continued widespread availability and affordability of terrorism risk insurance and addressing market disruptions. Treasury has conducted limited analysis to help estimate the potential magnitude of the federal government's fiscal exposure under TRIA for different scenarios of potential terrorist attacks. We developed a conceptual framework for fiscal exposures to aid discussion of long-term costs and uncertainties that present risks for the federal budget. Fiscal exposures vary widely by source, extent of the government's legal commitment, and magnitude. Fiscal exposures may be explicit (the government is legally required to fund the commitment) or implicit (exposures arise not from a legal commitment, but from current policy, past practices, or other factors that may create the expectation for future spending). The government's legal commitment to pay losses when a certified terrorist event occurs makes the terrorism risk insurance program an explicit exposure.
The amount of federal spending resulting from the fiscal exposure under the terrorism risk insurance program depends on the extent of insured losses. In 2009, Treasury contracted with ISO to develop and implement a method for estimating total average annual insured terrorism losses in the aggregate for TRIA-eligible lines, review certain material, and advise on the appropriateness of its use for projecting potential payout rates of the federal share of insured losses. The study provides estimates (both gross and net) of the federal share of losses and was used to aid Treasury in developing a federal budget item for the terrorism risk insurance program. ISO representatives stated that it was important to understand that the study provides an estimate of average annual losses in any given year, but that in years with losses, the numbers likely would be significantly higher than the average. ISO representatives also noted that the study had some data limitations and relied on assumptions, such as take-up rates for terrorism coverage, that could affect the results of the analysis. This is the only study Treasury has commissioned that examines the potential overall fiscal exposure of the terrorism risk insurance program to the federal government. In addition to the ISO study, Treasury officials provided us with a hypothetical loss scenario that shows private-sector and federal loss sharing under a specific set of circumstances. According to Treasury officials, this example was not an official work product; they emphasized that they developed the example purely to illustrate the recoupment calculations and that it should not be considered a projection of the fiscal exposure of a terrorist event to the government. The exact amount of government spending or the government's obligation is difficult to predict because, among other factors, it depends on the distribution of losses among insurers. For example, the aggregated 20 percent deductible equaled $37 billion (20 percent of direct earned premiums for TRIA-eligible lines), according to our analysis of SNL Financial's 2012 insurance data. However, losses from a terrorist attack are highly unlikely to affect all insurers or to be distributed evenly among them. If fewer insurers had losses, the combined deductible amount would be lower, and the government's share of losses likely would be triggered at an amount less than the aggregated industry deductible; that is, the government's spending or obligation likely would begin below the industry's aggregated deductible (the sketch below illustrates this point). Federal internal control standards state that agencies should identify and analyze risks associated with achieving program objectives, and use this information as a basis for developing a plan for mitigating the risks. For example, because the amount of the government's fiscal exposure varies according to the specific program's design and characteristics, estimates could be developed to better understand the potential costs of changes to certain program parameters under various scenarios of potential terrorist attacks. This could increase the attention given to fiscal exposures, while also providing decision makers relevant information to consider when determining the best way to achieve various policy goals or design a program.
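As an illustration of the point about loss distribution, the following sketch shows how federal loss sharing can begin well below the industry's aggregated deductible when an event affects only some insurers. The insurer names and premium and loss figures are hypothetical.

```python
# Illustration of why federal loss sharing likely begins below the
# industry's aggregated deductible: only insurers that actually experience
# losses pay deductibles. Insurer names and figures are hypothetical.

DEDUCTIBLE_RATE = 0.20
FEDERAL_SHARE = 0.85

def total_federal_share(losses_by_insurer, premiums_by_insurer):
    """Total federal payout when an event hits only some insurers."""
    total = 0.0
    for insurer, losses in losses_by_insurer.items():
        deductible = DEDUCTIBLE_RATE * premiums_by_insurer[insurer]
        total += FEDERAL_SHARE * max(0.0, losses - deductible)
    return total

premiums = {"Insurer A": 5e9, "Insurer B": 3e9, "Insurer C": 2e9}
losses = {"Insurer A": 4e9, "Insurer B": 2e9}  # Insurer C is unaffected

# The affected insurers' combined deductible is $1.6 billion (20 percent
# of $8 billion in premiums), below the $2 billion aggregated across all
# three insurers, so federal sharing begins sooner than the aggregate
# deductible figure alone would suggest.
print(f"${total_federal_share(losses, premiums):,.0f}")  # $3,740,000,000
```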
According to the insurers and other industry participants we spoke to, it is also an insurer best practice to analyze the location and amount of coverage written in order to understand the financial risks of a potential terrorist attack of a specific size. Insurers work with terrorism risk modeling firms to help understand their potential financial exposure from a future terrorist attack. For example, to help illustrate how an insurer would be financially affected after a terrorist event and how losses would be shared between the private sector and the federal government under TRIA, some industry participants have developed hypothetical scenarios. According to a study published by the Wharton Risk Management and Decision Processes Center, insurers use such scenarios to determine their maximum exposure to a range of possible attacks. Ultimately, the amount of fiscal exposure created by TRIA will be determined by the program parameters and the specific circumstances of a future attack (such as the number of insurers affected and the number of businesses that had purchased terrorism coverage). However, these scenarios can be used to help understand the risk and impact of financial losses under TRIA for specific potential terrorist events, and to analyze losses if TRIA were not renewed (that is, if the private sector were responsible for all losses). In addition to information on the type of attack (for example, damage from 2- to 10-ton truck bombs), these scenarios can rely on estimates of insurers' market share and direct premiums earned, among other data points. For public policy purposes, in 2014 the Reinsurance Association of America developed a model to help participants evaluate various loss scenarios. We also have developed hypothetical examples to help illustrate the potential magnitude of the federal government's fiscal exposure, which we discuss later in this report. Treasury officials said that they have conducted limited analysis of the government's fiscal exposure under TRIA because the amount of the government's fiscal exposure is ultimately determined by program parameters, and the risk modeling and exposure analyses used by insurers are not entirely applicable to understanding how to reduce federal exposure. According to Treasury officials, insurers manage risk by first understanding and then limiting their exposures by insurance line or geographic location. Fiscal exposure under TRIA is limited by the program parameters and the circumstances of a future attack (such as the number of insurers affected and the number of businesses that had purchased terrorism coverage). Treasury officials also said that the amount of fiscal exposure is difficult to determine because it is shaped by variables such as geography, type of event, and number of affected insurers. However, Treasury officials acknowledged that hypothetical analyses that illustrate and estimate the potential total amount of losses may be helpful in understanding fiscal exposure. Without analyzing comprehensive market data on the type and amount of coverage provided by all insurers participating in the market, Treasury does not have enough information to understand potential federal spending under various scenarios of potential terrorist attacks.
In addition, Treasury is not in a position to provide Congress with analysis to inform decisions about reauthorization and the future structure of the program, including any changes that would limit exposure, one of the goals that Treasury recently articulated in its 2015 budget justification. Available data on the market for terrorism risk insurance generally indicate a stable market in recent years. Total terrorism insurance premiums, which make up a small percentage of insurers' overall premiums, increased after the original act, reached a high in 2007, then declined, and have stabilized since 2010. Insurers report that capacity to provide terrorism coverage has remained unchanged over the past decade. In general, prices appear to have decreased as take-up rates (the percentage of businesses buying terrorism coverage) increased from 2003 to 2006; take-up rates have been relatively constant since 2010. The transference of terrorism risk through reinsurance or alternatives to reinsurance, such as insurance-linked securities (catastrophe bonds), has remained limited. Available data show terrorism insurance premiums have stabilized over the past few years (see fig. 3). For instance, total premiums generally increased through 2007, then declined, and stabilized from 2010 through 2012. In 2012, the most recent year for which data are available, estimated terrorism insurance premiums were $1.7 billion, down from a high of $2 billion in 2007. A.M. Best estimates that about $17 billion in terrorism insurance premiums was collected from 2004 through 2012. Furthermore, terrorism insurance premiums collected on workers' compensation and commercial property insurance lines each made up about 40 percent of the estimated total terrorism insurance premiums, with the remaining 20 percent coming from all other commercial lines. These proportions remained relatively stable in recent years. Based on our analysis of A.M. Best and SNL Financial insurance data, trends in terrorism insurance premiums have not differed markedly from trends in other commercial insurance line premiums. For example, premiums for all commercial property and casualty lines showed the same pattern: increases until 2007, declines from 2007 through 2010, and then increases in 2011 and 2012. Commercial property and casualty insurance generally follows an industry cycle, characterized by periods of soft market conditions (abundant willingness to write new policies, or capacity; increasing competition; and rates, or prices, that grow marginally or decrease) followed by periods of hard market conditions (relatively low capacity, decreasing competition, increasing rates, and scarce capital). This cyclical nature of the property and casualty industry likely plays a role in the hardness or softness of the terrorism insurance market. For example, the similarity in premium trends indicates that the terrorism insurance market is closely related to the overall commercial property and casualty market. The 2007-2009 financial crisis affected the overall commercial property and casualty market and most likely affected the terrorism insurance market in similar ways. For instance, the financial crisis generally affected commercial property and casualty insurers through decreased net income, as underwriting and investment results deteriorated. However, making an accurate assessment of the terrorism insurance market is challenging.
According to industry participants, uncertainty surrounding the two previous TRIA reauthorizations, both whether the program would be reauthorized and, if so, with what changes, led to periods of market instability. Insurers told us their terrorism insurance premiums made up a very small amount of their overall premiums. As previously mentioned, we obtained information from 15 insurers as part of a questionnaire. According to the responses, on average, terrorism insurance premiums made up less than 2 percent of commercial property and casualty premiums, or roughly $1.7 billion in calendar year 2012 (the range for the 15 insurers was 0.7 to 3 percent). An insurer told us terrorism insurance premiums have not significantly affected overall capital levels because premiums collected for terrorism risk have been low and insurers use some of the terrorism insurance premiums to account for reinsurance, expenses, and taxes. A.M. Best and SNL Financial data also indicate that, in terms of the share of total premium, coverage for terrorism risk is concentrated among the largest insurers. According to A.M. Best data, 10 insurers made up roughly 70 percent of terrorism insurance premium volume (see table 3). The same 10 insurers accounted for 44 percent of premiums in all insurance lines subject to TRIA and 39 percent of premiums in all commercial property and casualty lines. An industry representative with whom we spoke said only the largest insurers have the ability to underwrite large terrorism risks and hence account for a large portion of the industry's terrorism insurance premiums. The composition of the terrorism insurance market resembles other insurance markets that the Federal Insurance Office has characterized as concentrated. For example, according to the Federal Insurance Office's 2013 annual report, 10 insurers made up 47 percent of the property and casualty market (both commercial and personal) and 72 percent of the life and health insurance market in 2012—both of which the report characterizes as concentrated markets. Although we present an estimated number of insurers that provide coverage for terrorism risk, identifying the precise number of insurers in this market is difficult because of the lack of comprehensive data. As noted previously, insurers are not required to report data about terrorism risks to Treasury or NAIC. As shown in table 3, according to SNL Financial data, more than 800 insurers reported premiums in insurance lines subject to TRIA and therefore, by law, offered terrorism coverage. A.M. Best's survey data provide further context for this market: for 2012, A.M. Best estimated that more than 200 insurers provided coverage for terrorism risk. Insurers providing coverage in the insurance lines subject to TRIA must offer terrorism coverage, but businesses are not required to buy it. Therefore, the number of insurers offering coverage in the insurance lines subject to TRIA (more than 800) and the number of insurers covering terrorism risk and collecting terrorism insurance premiums (estimated at more than 200) will differ. According to an insurance broker, capacity seems to have improved, but insurers report that capacity has remained the same and that they limit capacity as needed to manage their overall exposure. Capacity is the amount that insurers are willing to allocate to underwrite a specific risk.
Terrorism coverage typically is embedded in an all-risk property policy, and therefore available terrorism capacity is tied to overall capacity for all-risk property policies. According to information from an insurance broker, the reported market capacity for terrorism risk seems to have increased. According to an Aon report, in 2013 about $14 billion per risk was available to any one insured for an all-risk property policy, up from $13.5 billion in 2010 and $8 billion in 2005. This represents the amount of coverage an insurer is willing to provide to any one insured. However, the actual capacity for terrorism risk is much lower than $14 billion per risk because these amounts encompass capacity for risks in addition to acts of terrorism. According to Aon, non-terrorism-related exposures, such as natural catastrophes (earthquake and windstorm), can vastly decrease the available capacity for terrorism risk. Moreover, individual insurers' capacity to underwrite terrorism will differ, and insurers told us they would limit capacity as needed based on their aggregate terrorism exposures, geographic concentration of terrorism exposures, and terrorism exposures relative to other natural catastrophe exposures. Most insurer representatives with whom we spoke reported that capacity to provide terrorism coverage remained constant over the past decade, and 6 of the 15 insurers stated that they limited capacity as needed to manage their overall exposures. About half of the insurers told us TRIA enabled them to provide capacity for terrorism risks, but that TRIA also was the reason capacity has remained relatively unchanged, because insurers managed their exposures based on the program parameters. In general, insurers assume some financial risk when covering terrorism risk, but they also employ various underwriting standards to manage the risk and limit potential financial exposures. As we previously reported, insurers' willingness to provide coverage in certain areas may change frequently as new clients or properties are added to or removed from their book of business. In response to our questionnaire, most insurers told us they determined the amount of coverage they were willing to provide in defined geographic areas, depending on their risk tolerance. These amounts are sometimes called coverage limits (capacity limits) and are managed in relation to overall terrorism exposures. Almost all insurers told us factors such as loss estimates from terrorism models, aggregation of exposures in defined areas, proximity of exposures to high-profile targets or buildings, and individual property characteristics affect their terrorism underwriting decisions. Insurers may decide to limit capacity—that is, decide not to underwrite certain coverage—if taking on the additional risk would exceed their internal capacity limits. A few insurers that we interviewed also told us that over the past decade they have benefited from significant improvements to their data systems and models that track terrorism exposures; in turn, the better systems and models have improved their ability to make sound underwriting decisions when renewing or writing new policies. Prices have declined, and insurers say TRIA has allowed them to offer coverage at prices policyholders are willing to pay. Insurers may charge an additional premium for terrorism coverage, as TRIA does not provide specific guidance on pricing.
According to data from Marsh, prices for terrorism coverage, as part of a commercial property policy, generally declined over the past decade (see fig. 4). These data are not necessarily reflective of the entire market, but they represent the best available data on pricing. Prices increased slightly from 2003 to 2004 and have declined steadily since 2006. In 2013, the nationwide median amount that businesses paid per million dollars of coverage for terrorism insurance was $27. Using the 2013 nationwide median rate, a company purchasing $100 million in coverage for property damage would have paid approximately $2,700 in terrorism insurance premiums. In addition, the prices that businesses pay vary with company size, location, and industry. For example, prices typically decrease as the size of the company increases (with size measured in terms of insured value and prices measured per million dollars of coverage), are typically higher in the Northeast, and are higher in certain industry subsectors (such as construction, power and utilities, and media) because of perceived or actual risk exposure to terrorism. But because comprehensive pricing data are not readily available, it is difficult to clearly understand how prices differ by company size, location, or industry. According to data from Marsh, from 2003 to 2013, companies paid approximately 4 to 9 percent of their total property premium for terrorism coverage (see fig. 5). Analyzing the price of terrorism coverage as a part of overall property premiums allows companies to understand how terrorism coverage affects their overall property insurance budget. Businesses have paid no more than approximately 5 percent of their total property premium for terrorism coverage since 2011. Using the 2013 data, a company purchasing $100 million in property coverage would have paid approximately 4 percent of its $67,500 overall property premium for terrorism coverage (or $2,700); this arithmetic is restated in the sketch below. Insurers told us TRIA allows them to offer coverage at prices their policyholders are willing to pay. Insurers said their primary concern in covering terrorism risks was limiting their exposures (that is, capacity), because losses could be huge under certain types of terrorist attacks; pricing was secondary. TRIA addresses insurers' primary concern about the size of potential losses: it provides a structure in which insurers know, before an event, what their losses could be, because the deductible and coshare are defined by law and losses are capped. For most insurance products, insurers typically use the potential frequency and severity of events to calculate premiums that are commensurate with the risks. Because the frequency and severity of terrorism are difficult to predict, the limits established in TRIA, which cap the potential severity of losses to insurers, make underwriting the risk and determining a price for terrorism coverage easier for insurers. Furthermore, most insurers said their companies' experiences with collecting terrorism insurance premiums and providing terrorism coverage over the past decade have had minimal or no impact on their pricing strategies. Insurer responses suggest this is mainly because terrorism is so different from other perils. For example, one insurer noted that insurers have a long history of writing and pricing coverage for natural catastrophes (based on claims), but this is not the case with terrorism. Another insurer noted that terrorism risk provides too few data points to inform pricing and underwriting decisions.
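The Marsh pricing figures cited above reduce to simple arithmetic. The following sketch restates the report's own example, a company purchasing $100 million in property coverage at the 2013 nationwide median rate, and adds nothing beyond those reported figures.

```python
# Restatement of the pricing example above, using the 2013 nationwide
# median rate reported by Marsh ($27 per million dollars of coverage).

median_rate = 27                 # dollars per $1 million of coverage
coverage_millions = 100          # $100 million in property coverage

terrorism_premium = median_rate * coverage_millions
print(f"Terrorism premium: ${terrorism_premium:,}")          # $2,700

total_property_premium = 67_500  # overall property premium in the example
share = terrorism_premium / total_property_premium
print(f"Share of overall property premium: {share:.0%}")     # 4%
```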
One insurer also told us that the market is very competitive and that insurers do not charge terrorism insurance premiums that would cover their potential losses from a terrorist attack; this insurer noted that if insurers did charge such premiums, businesses would not buy the coverage. Take-up rates—the percentage of businesses buying terrorism coverage, which helps measure the demand for terrorism risk insurance—increased from 2003 to 2006 and have remained relatively constant (and above 60 percent) since 2010, according to data from Marsh (see fig. 6). According to Treasury and NAIC, neither collects this type of information. Take-up rate data for businesses buying terrorism coverage as part of commercial property policies are available only from insurance brokers. The take-up rate for businesses buying terrorism coverage as part of workers' compensation policies is 100 percent because state laws require businesses to purchase workers' compensation insurance and do not permit insurers to exclude terrorism from workers' compensation policies. Take-up rates vary with company size, location, and industry. For example, larger companies are more likely to purchase coverage than smaller companies, the Northeast has the highest take-up rates, and certain industry subsectors (for example, media, education, and financial institutions) have higher take-up rates than others. According to our questionnaire results, overall take-up rates for insurers varied significantly (from 26 to 100 percent). One respondent noted that analyzing insurers' overall take-up rates can be misleading and that it is more appropriate to look at take-up rates for terrorism coverage in each line of insurance subject to TRIA. According to our questionnaire results, the lines of insurance with the highest take-up rates for terrorism coverage are commercial multiperil and inland marine, and the lines with the lowest take-up rates are aircraft and boiler and machinery. However, because Treasury and NAIC do not collect take-up rate data from insurers, it is difficult to thoroughly analyze take-up rates by line of insurance. According to industry participants, take-up rates may have reached a plateau—that is, most businesses that want the coverage have already purchased it. From 2003 through 2013, take-up rates doubled, while prices declined by 50 percent. In more recent years, take-up rates remained relatively constant, although prices continued to decline (as shown in figures 4 and 6). Since 2010, both take-up rates and estimated terrorism insurance premiums have been relatively stable (see fig. 7). The changing proportions of new versus renewal policies covering terrorism risk offer further evidence that demand may be leveling off. On the basis of our questionnaire results, the majority of policies are renewals rather than new issuances, and this has stayed the same over the past several years (on average, about 83 percent renewals in 2008 and in 2012). The transference of terrorism risk—namely, through reinsurance and alternatives to reinsurance such as insurance-linked securities—has been limited. Reinsurance capacity for terrorism risk has increased but remains small relative to the federal reimbursements available through TRIA.
For example, according to industry participants, about $6 billion to $10 billion in terrorism reinsurance capacity was available in the United States in 2013, an increase from the $4 billion to $6 billion available several years ago, but still small compared with the federal assumption of 85 percent of losses (up to $100 billion of aggregate industry exposure, minus items such as the insurer deductibles) under TRIA. Without TRIA, current reinsurance capacity would be insufficient to respond to a large-scale terrorist attack, in particular up to the limits the government program provides, according to a reinsurance trade association representative. Additionally, terrorism reinsurance capacity is small in relation to capacity for other perils. For example, the total amount of reinsurance capacity available for natural catastrophe risks in the United States in 2012 ranged from $90 billion to $120 billion. Several factors limit the market for reinsurance of terrorism risk. For instance, unlike primary insurers, reinsurers are not subject to TRIA and therefore are not required to offer primary insurers coverage for terrorism risk. According to industry representatives, reinsurers face the same challenges as primary insurers (terrorism risk is difficult to model and price), which also contributes to a limited market. Finally, insurers told us they typically purchase terrorism reinsurance as part of a multiperil policy that covers terrorism risk in addition to other risks. To help manage their exposures to concentrated losses, reinsurers frequently write terrorism coverage with specific limits for individual properties rather than reinsure a share of an insurance company's overall holdings. According to brokers and reinsurers, terrorism reinsurance prices have generally declined by 50 percent over the past decade and more. Reasons these industry participants cite for the price declines include the passage of time since the September 11 attacks, the lack of subsequent terrorist attacks resulting in significant losses, decreased demand from primary insurers, and increased supply of reinsurance. However, the location of exposures also affects the price of terrorism reinsurance. For example, reinsurance coverage is more expensive for exposures in densely populated urban areas than in less densely populated areas. Although individual insurers' reinsurance patterns vary, insurers have been reinsuring a limited amount of their terrorism risk, retaining roughly 80 percent of it, according to the 2010 PWG report. Insurers decide how much reinsurance to purchase based on their perception of risk, the price of coverage, their ability to manage risk, and other factors. One insurer contributing to this report commented that terrorism risk reinsurance remains insufficient to serve the market's current risk exposure. According to our questionnaire results, 13 insurers purchased reinsurance for terrorism risk and 2 did not. Some responding insurers that purchased reinsurance for terrorism risk noted an increase in their purchasing levels, some noted a decrease, and still others noted fluctuations in their purchasing patterns. One insurer had purchased terrorism reinsurance coverage continuously since 2002 and increased its limits as capacity became available and pricing became more affordable. Two insurers said that their purchases of terrorism reinsurance decreased over time.
The two insurers that did not purchase reinsurance for terrorism noted that while some reinsurers were willing to provide a modest amount of capacity for terrorism risk, the cost was prohibitive. Additionally, insurers noted that potential modifications to TRIA would affect their demand for reinsurance. For example, potential modifications that increased insurers' deductible and coshare amounts would increase primary insurers' demand for reinsurance, but supply might stay the same. As an alternative to reinsurance, insurance-linked securities have remained a limited option for covering terrorism risk. Specifically, catastrophe bonds, insurance-linked securities that typically cover natural catastrophes, have been used over the past 20 years, mainly because of the large amount of resources available in capital markets. Catastrophe bonds are risk-based securities that pay relatively high interest rates and provide insurance companies with a form of reinsurance to pay losses from natural catastrophes. A catastrophe bond offering typically is made through an investment entity that may be sponsored by an insurance or reinsurance company. The investment entity issues bonds or debt securities for purchase by investors, thus spreading risk. Catastrophe bonds, by tapping into the securities markets, offer the opportunity to expand the pool of capital available to cover a particular risk. Some insurers and reinsurers issue catastrophe bonds because the bonds allow for risk transfer and may lower the costs of insuring against the most severe catastrophes (compared with traditional reinsurance). Although catastrophe bonds have become more common, only two covering terrorism risk have been issued to date, and neither is explicitly a terrorism risk bond that covers risks included under TRIA. Each is a multi-event bond associated with the risks of natural disaster, pandemic, or terrorist attack. Moreover, these bonds are mortality bonds and therefore would be an alternative for a life insurance policy (which is not a line of insurance eligible for TRIA), not an alternative to commercial property and casualty insurance. As of April 2014, no property and casualty terrorism bonds had been issued. Industry representatives mentioned various challenges to issuing catastrophe bonds covering terrorism risk. Investors generally avoid risks not widely underwritten in reinsurance markets and therefore lack interest in such catastrophe bonds. Investors also are reluctant to make investments in which losses may be correlated with widespread financial market losses (as was the case with terrorism losses after September 11, 2001) or that offer low returns or payouts. Rating agencies have not been willing to use terrorism loss models that estimate the probability of terrorism events (probabilistic models) for rating purposes, and, at least for terrorism risk, investors tend to avoid risks that cannot be credibly modeled and rated. The difficulty of modeling terrorism represents an additional overall challenge to the development of the private market for terrorism insurance. Models used to estimate terrorism risk have become more sophisticated in estimating the severity of specific events in recent years. However, they remain fundamentally different from models used to assess natural hazard risks, which estimate both severity and probability.
For example, according to the Reinsurance Association of America, terrorism modeling is primarily a means for underwriters to measure how much they have at risk in a given geographic area and the losses they could face from a specific type of event (that is, the severity of an event), not a means to estimate the probability of such events. Terrorism risk is unlike other catastrophic risks, such as earthquake or hurricane, in that terrorists can alter their behavior, which makes it hard to model the probability of potential events with the accuracy required to price the coverage. There are relatively few instances on which to base probability estimates for acts of terror in the United States, which means that such estimates lack actuarial credibility. Additionally, insurers and modeling firms have no access to data used internally by U.S. intelligence and counterterrorism agencies. Moreover, it may be impossible to build a model that provides a valid representation of all individuals and groups that might decide to try to use terrorism as a tactic against the United States. In addition, as opposed to other types of risks that are random to some extent, terrorist acts are intentional, and terrorists continually attempt to defeat loss prevention and mitigation strategies. Insurers and other industry participants cited concerns about the availability and price of terrorism coverage if TRIA expired or were changed substantially, but some changes could reduce the government's fiscal exposure. For example, some insurers we interviewed said they would stop covering terrorism risks if TRIA expired. In addition, most of the insurers we interviewed, including larger and smaller insurers, cited potential consequences associated with increasing the deductible or coshare, such as impacts on pricing, the need to reevaluate risk and capacity, and threats to their solvency in the event of a large industry loss. These concerns are consistent with points industry participants raised before previous reauthorizations of the program. However, several insurers told us they were less concerned about an increase to the aggregate retention amount or the program trigger. Further, we found that increasing the deductible, coshare, or industry aggregate retention amount could reduce the government's fiscal exposure under certain terrorist event scenarios. Responses to our questionnaire revealed that insurers were uncertain about whether TRIA covers risks from a cyber terrorism attack. Without clarification of the coverage of cyber risks, some insurers may not offer cyber coverage, and such coverage may become less available. The long-term impact of the expiration of the terrorism risk insurance program's authority is difficult to determine, but according to insurers, in the short run the availability of terrorism coverage may become more limited. Some insurers told us that they would stop providing terrorism coverage if TRIA expired on December 31, 2014. As indicated by responses to our questionnaire and other surveys of insurers, some insurers already have made regulatory filings or issued notices to policyholders indicating that terrorism coverage would be excluded from policies in force beginning on January 1, 2015, if TRIA expired. For example, one insurer said that if TRIA were not renewed, the company would either exclude terrorism coverage or not underwrite businesses in states that prohibit terrorism exclusions. Insurers could further limit terrorism exposures, particularly in geographic areas considered at high risk for attacks.
Because some states prohibit excluding certain risks, if a large-scale event occurred in the absence of TRIA, some insurers could face a higher risk of insolvency or have more incentive to leave the market. For example, New York state insurance law prohibits terrorism exclusions for property and casualty policies that include standard fire coverage. In some other states, property insurers must cover losses from fire regardless of the cause of the fire, including a terrorist attack, even if the policyholder declined terrorism coverage. Thus, if TRIA expired, insurers operating in these states still would have to cover damage from fire following a terrorist attack. Such situations might leave some insurers bearing risks they could not adequately reinsure and at increased risk of insolvency. Some insurers might decide their exposures were too great without TRIA and exit the market or decline to insure commercial property altogether.

Some industry observers have noted that, in the long term, if no large losses occur, the private insurance market might be able to address the need for terrorism coverage without support from the program. The amount of insurance and reinsurance written is related, in part, to the amount of surplus held by insurers and reinsurers. Over time, private insurers and reinsurers might develop additional terrorism capacity if no terrorism losses occur. If capacity did not increase in the terrorism insurance and reinsurance markets, the insurance-linked securities market might develop, and insurers might increasingly attempt to access capital markets to help spread terrorism risk. One capital markets participant said that catastrophic shocks have in the past led to more interest in insurance-linked securities and accelerated issuance of natural catastrophe bonds, and that the expiration of the program could similarly foster interest in bonds for terrorism risk. However, insurers and reinsurers continue to question whether the market can accurately price terrorism risk. If they continued to believe they were unable to price such risks accurately, they might leave the market for terrorism risk insurance (as happened after September 11, 2001). Furthermore, because losses would no longer be capped, rating agencies might downgrade ratings for insurers and reinsurers, affecting the companies' ability to raise capital.

In the long term, policyholders (businesses) also might increase terrorism mitigation and deterrence efforts. For example, businesses might locate some operations away from high-risk areas, invest in mitigation measures (retrofitting properties to better withstand an attack or to improve evacuation), or both. In the case of workers' compensation, businesses unable to find coverage from insurers would have to obtain coverage from state funds, which might be more expensive than coverage from primary insurers. For example, representatives of one industry association told us that after the September 11, 2001, attacks, participation in state funds in New York, Washington, D.C., and Virginia increased, as some businesses were unable to find workers' compensation coverage from primary insurers. Over time, businesses were able to leave the state funds and find coverage in the primary market, but workers' compensation insurers became more selective about the number of employers they insured in a particular location.
Finally, past experience following disasters suggests that the federal government may provide assistance to businesses after a terrorist event in the absence of a federal terrorism insurance program. For example, following the September 11, 2001, terrorist attacks, we reported in 2003 that Congress committed at least $18.5 billion to individuals, businesses, and government entities in the New York City area for initial response efforts, compensation for disaster-related costs and losses, infrastructure restoration and improvement, and economic revitalization. As we reported in 2009, many federal agencies and program components administer supplemental programs and funding, reprogram funds, or expedite normal procedures after a disaster. For example, forms of disaster assistance available from federal agencies include grants, loans, loan guarantees, temporary housing, counseling, technical assistance to state and local governments, and rebuilding or recovery projects. Following the April 2013 bombings in Boston, the federal government issued an emergency declaration for the state of Massachusetts that made federal assistance for equipment, resources, or protection available as needed. In 2012, we reported that the growing number of major disaster declarations contributed to an increase in federal expenditures for disaster assistance. For fiscal years 2004 through 2011, the federal government obligated more than $80 billion in disaster relief, about half of which followed Hurricane Katrina. In addition, about $50 billion in federal assistance supported rebuilding efforts after Superstorm Sandy.

As previously discussed, each of the program parameters (program trigger, deductible, coshare, and industry aggregate retention amount) has changed since the program was enacted. It is not clear what impact these past changes have had on insurers in the market, but insurers told us that they generally preferred no additional changes. According to responses from our questionnaire, 11 of 15 insurers said that the TRIA program trigger (currently $100 million) could be increased without significantly changing their ability to provide coverage. In particular, 6 of those 11 insurers noted that their companies would be able to offer terrorism coverage if the program trigger were raised to as much as $500 million; of the remaining 5 insurers, 4 said they could offer coverage if the trigger were raised to as much as $1 billion, and 1 said that the trigger could be increased to more than $1 billion. Insurers said that they could continue offering coverage under an increased trigger amount because their current deductibles under the program were higher than the program trigger, so increasing the trigger would not affect their share of losses. For example, using 2012 data, the 10 largest insurers in TRIA-eligible lines all had deductibles much greater than $100 million. As stated previously, while government coverage is triggered once aggregate industry losses exceed $100 million, individual insurers that experienced losses would first pay their deductibles and only then be eligible to receive federal reimbursement for 85 percent of their losses above those deductibles. One insurer explained that it was less concerned with changes to the program trigger than with changes to the deductible and coshare percentages, because the latter were more likely to have direct impacts on insurers' liquidity and result in significant market disruptions.
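The loss-sharing arithmetic described above can be illustrated with a short sketch. The following Python fragment is a simplified model, not the statutory formula in full: the premium and loss figures are hypothetical, and recoupment and the program's annual cap are omitted. It shows the mechanics behind the insurer comments above: once the program is triggered, an insurer first absorbs its deductible and then 15 percent of losses above it, with the federal government reimbursing the remaining 85 percent.

# Simplified sketch of TRIA loss sharing for a single insurer, based on the
# parameters described above: a program trigger of $100 million in aggregate
# industry losses, an insurer deductible of 20 percent of the prior year's
# direct earned premium, and a 15 percent insurer coshare above the deductible.
# Recoupment and the program's annual cap are omitted; figures are hypothetical.

def insurer_loss_share(insured_loss, direct_earned_premium,
                       aggregate_industry_loss,
                       trigger=100e6, deductible_rate=0.20, coshare=0.15):
    """Return (insurer_share, federal_share) for one insurer's certified losses."""
    if aggregate_industry_loss < trigger:
        return insured_loss, 0.0  # program not triggered: insurer bears the loss
    deductible = deductible_rate * direct_earned_premium
    above_deductible = max(0.0, insured_loss - deductible)
    federal_share = (1 - coshare) * above_deductible  # 85 percent above deductible
    return insured_loss - federal_share, federal_share

# A hypothetical insurer with $1 billion in prior-year direct earned premium
# (a $200 million deductible) sustaining $500 million in certified losses:
insurer, federal = insurer_loss_share(500e6, 1e9, aggregate_industry_loss=500e6)
print(f"insurer pays ${insurer / 1e6:.0f} million")   # $245 million
print(f"federal share ${federal / 1e6:.0f} million")  # $255 million

Consistent with the insurer responses above, for an event large enough to trigger the program, raising the trigger toward this insurer's deductible does not change the insurer's share: the $200 million deductible is paid in full either way.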
Further, we found that increases to the program's parameters could reduce federal fiscal exposure in certain situations, as long as the private sector's share of losses is below the industry aggregate retention amount of $27.5 billion. As previously discussed, TRIA includes a provision for mandatory recoupment of the federal share of losses when private industry's uncompensated insured losses are less than the industry aggregate retention amount of $27.5 billion. Insurers that preferred no change to the program trigger cited concerns about capacity limitations, increased terrorism insurance premiums, and an increase in the cost of terrorism reinsurance if the trigger were increased.

Most of the insurers said that increases to the current deductible (20 percent of the previous year's direct earned premium) or private-sector coshare (currently 15 percent) could affect insurer capacity and pricing. For example, insurers commented that an increase in either of these parameters would result in their companies reevaluating their risk and likely reducing their capacity or increasing policyholders' premiums. One insurer said that it had adjusted its terrorism risk-management program to the current program parameters and that any increases almost certainly would result in the company taking risk-mitigation actions, including reducing terrorism exposures, to offset the increased risk to the company's surplus. In addition, insurers stated that increasing the deductible or private-sector coshare would bring many companies under rating agency scrutiny for risk concentrations, which likely would result in industry-wide reductions in terrorism exposure. However, some insurers (3 of 15) told us that their companies could absorb a higher deductible amount, including one insurer that told us its company could absorb an increase in the deductible to as much as 29 percent. Even so, this insurer cautioned that such an increase likely would result in increased premiums for terrorism coverage and decreased take-up rates.

Insurers also expressed concerns about impacts on their solvency if the deductible or coshare percentages were increased. For example, insurers commented that such increases could affect rating agency assessments of companies' financial strength. Representatives of A.M. Best told us that they use a stress test of different scenarios to measure insurers' financial strength and that they notified 34 insurers that failed the stress test that their ratings could be negatively affected without a sufficient action plan. Insurers told us that increasing the deductible or private-sector coshare, and thus the amount of losses insurers would be responsible for paying, could adversely affect insurers' liquidity and solvency in the event of large terrorism losses, given the levels of surplus available from which to pay these losses. Industry participants consider the ratio of an insurer's deductible to its surplus a metric that helps show how much of the company's surplus would be at stake to pay the TRIA deductible in the event of a certified act of terrorism. (Insurers also must have surplus available to cover unexpectedly large losses in all other lines of insurance they underwrite.) We found the TRIA deductible has generally represented an increasing portion of insurers' surplus. Under the current program parameters, in 2012 the industry-wide TRIA deductible made up approximately 17 percent of the estimated surplus of insurers potentially exposed to terrorism risk.
Deductibles have remained at 15 percent or more of estimated surplus since 2005 (see fig. 8). Smaller insurers' surplus would be affected more than larger insurers' surplus in the event of a large terrorism loss. For example, according to our analysis of 2012 SNL Financial insurance data, on average, smaller insurers' TRIA deductible amounts made up 23 percent of surplus, compared with 12 percent for larger insurers (that is, the 10 largest commercial property and casualty insurers in TRIA-eligible lines). However, some larger insurers' surplus also would be at heightened risk. For example, TRIA deductible amounts represented from 7 to 19 percent of larger insurers' surplus. If the deductible were increased to 35 percent, surplus at stake (using 2012 data and holding the estimate for surplus constant) would nearly double, to 30 percent, greatly increasing the possibility of insurer insolvencies due to certified terrorism losses.

In contrast to insurer responses on the program deductible and coshare percentages, some insurers told us that the industry could absorb an increase to the industry aggregate retention amount. According to responses from our questionnaire, 7 of 15 insurers said the industry aggregate retention amount should stay the same and 5 said it could be increased. Two insurers said increasing the retention amount was reasonable because the industry has grown. For example, one insurer commented that the $27.5 billion amount was roughly based on 20 percent of industry premiums for TRIA-eligible lines in 2006. This insurer stated that because of growth in premiums, the insurance industry was capable of assuming a higher aggregate retention. Another insurer commented that surplus for the property and casualty industry has grown by approximately 20 percent since the 2007 reauthorization; therefore, the insurance industry might be able to absorb a corresponding increase in the amount, to approximately $33 billion. Most of the 7 insurers that preferred to maintain the current industry aggregate retention amount cited concerns about the impact a higher retention amount would have on policyholders, due to the surcharges that would be added to policyholder premiums in the event of recoupment. One industry participant noted that, according to experience in other lines of insurance, any surcharge that increased premiums by more than 2 percent might lead policyholders to decide not to purchase this coverage.

Changes to program parameters would affect not only insurers but also estimates of fiscal exposure under TRIA. The legal commitment to pay a share of the losses when a certified terrorist attack occurs makes the program an explicit fiscal exposure for the U.S. government. The amount of federal spending resulting from this exposure depends on the extent of covered losses incurred as a result of a certified attack. Because the potential amounts of fiscal exposure and loss sharing would depend on the specifics of a certified act of terrorism, we developed illustrative examples to help demonstrate estimated changes in the magnitude of fiscal exposure when the deductible, coshare, or industry aggregate retention amount was individually changed. We found that increasing the insurer deductible, coshare, or aggregate retention amount could reduce the government's fiscal exposure in certain situations (see fig. 9).
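The mechanics behind these illustrative examples can also be sketched briefly. The following Python fragment is a simplified model of the mandatory recoupment rule described above: the industry's pre-recoupment share of a loss (the sum of insurers' deductibles and 15 percent coshares) is taken as a hypothetical input, and timing, surcharge details, and the program's annual cap are ignored.

# Simplified sketch of aggregate TRIA loss sharing after mandatory recoupment.
# The industry's pre-recoupment share (deductibles plus 15 percent coshare) is
# a given, hypothetical input; surcharge and timing details are ignored.

def federal_share_after_recoupment(total_insured_loss,
                                   industry_share_before_recoupment,
                                   retention=27.5e9):
    """Federal share of a certified loss after mandatory recoupment."""
    federal_before = total_insured_loss - industry_share_before_recoupment
    # Mandatory recoupment applies when the industry's uncompensated losses
    # fall short of the aggregate retention amount (or the total loss, if less).
    recoupable = max(0.0, min(retention, total_insured_loss)
                     - industry_share_before_recoupment)
    return max(0.0, federal_before - recoupable)

loss = 50e9           # $50 billion loss scenario
industry_pre = 20e9   # hypothetical pre-recoupment industry share

print(federal_share_after_recoupment(loss, industry_pre) / 1e9)        # 22.5
print(federal_share_after_recoupment(loss, industry_pre, 35e9) / 1e9)  # 15.0
# Raising the coshare so that the industry's pre-recoupment share grows from
# $20 billion to $25 billion leaves the post-recoupment federal share unchanged:
print(federal_share_after_recoupment(loss, 25e9) / 1e9)                # 22.5

This sketch reproduces the pattern discussed below and shown in figures 9 and 10: as long as the industry's pre-recoupment share stays below the retention amount, recoupment brings the industry's total contribution to the same level regardless of the deductible or coshare, so the federal share is roughly the total loss minus the retention amount, and raising the retention amount reduces federal exposure dollar for dollar.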
More specifically, as the deductible or coshare percentages increase, the government's overall share of losses decreases, but only when the private sector's share of losses exceeds $27.5 billion (because of mandatory recoupment). Increasing the industry aggregate retention amount would have a greater impact on reducing fiscal exposure than increasing either the deductible or coshare percentages by certain specified amounts (see fig. 10). The potential reduction in federal exposure was most pronounced in our scenario with a $50 billion loss and an increased retention amount. Such a scenario would approximate current-dollar losses similar to those that resulted from the September 11, 2001, terrorist attacks. Potentially, every $1 increase in the retention amount results in an equal $1 decrease in federal exposure when insured losses exceed the industry aggregate retention amount of $27.5 billion. The insurers' share of losses increases with any decrease in federal fiscal exposure. Under this $50 billion loss scenario, the government's share of losses after mandatory recoupment would be $23 billion under the current program parameters. If the industry aggregate retention amount were increased to $35 billion, as suggested by increased surplus levels in the industry, federal exposure could decrease to $15 billion (see fig. 10). To achieve a similar reduction in the government's share of losses, the deductible would have to be raised from 20 percent to more than 35 percent. There was no observable change to federal exposure when the coshare was increased in this $50 billion loss example, because of mandatory recoupment. The impact of changing the industry aggregate retention amount, compared with changing the deductible or coshare, is even more evident under our $75 billion loss scenario (see fig. 11).

As we have previously reported, insurers generally have attempted to limit their exposure to nuclear, biological, chemical, or radiological (NBCR) risks by excluding nearly all NBCR events from property and casualty coverage. According to industry representatives, property and casualty insurers believe they have excluded NBCR coverage by interpreting existing exclusions in their policies to apply to NBCR risks, but some of the exclusions could be challenged in courts. In 2004, Treasury issued an interpretive letter clarifying that the act's definition of insured loss does not exclude losses resulting from NBCR attacks or preclude Treasury from certifying a terrorist attack involving NBCR weapons. According to Treasury's interpretive letter, the program covers insured losses from NBCR events resulting from a certified act of terrorism if coverage for those perils is provided in the policy issued by the insurer. While Treasury has confirmed that NBCR losses would qualify for loss sharing under TRIA, we found insurers generally excluded coverage for NBCR risks. Several insurers told us that they do not underwrite NBCR risks because the lack of data to assess frequency and severity makes it difficult to determine an accurate price for the coverage. One insurer told us that NBCR events are uninsurable because of the scale of losses, difficulties in modeling, and the deliberate nature of the acts. Several insurers also told us they generally exclude NBCR risk where state law permits. However, insurers generally are required to cover NBCR losses for workers' compensation policies and may provide NBCR coverage in other limited circumstances.
For instance, two insurers we interviewed provide NBCR coverage in limited circumstances. One insurer told us that the company covers NBCR risks in its general liability policies, and another said some of its environmental policies include NBCR coverage. As stated previously, workers' compensation insurers generally include NBCR coverage because states typically prohibit the exclusion of any peril from workers' compensation policies. Also, certain states require insurers to cover fire following an event, regardless of the cause of the fire. Thus, an NBCR event that leads to a fire may activate a fire policy, providing coverage to a policyholder. Other options for NBCR coverage exist. For instance, NBCR coverage can be obtained through captive insurers accessing the TRIA program, and we previously reported that some large businesses elected this coverage route. There also is a limited stand-alone terrorism insurance market for NBCR, but high prices have prevented most businesses from purchasing coverage. In addition, although reinsurance companies traditionally excluded NBCR risks, about two-thirds of reinsurance companies offered some coverage for NBCR events, according to a 2010 survey.

Some insurers told us that expanding TRIA to require insurers to make NBCR coverage available would result in significant disruptions to the market. Some insurers said that an NBCR event could render the insurance industry insolvent. Another insurer told us that underwriting NBCR risks would decrease its capacity to underwrite other types of insurance. Several insurers did not support changing TRIA to require coverage for NBCR events because, in their opinion, NBCR was not an insurable risk. One insurer said effective underwriting and pricing of NBCR exposure was not possible and attempting to do so would be contrary to basic principles of insurance underwriting and pricing. One company told us that significant market disruptions would occur if NBCR coverage were mandatory. Additionally, one insurer told us that reinsurance capacity for NBCR risks was minimal and, as a result, the ability of any insurer to offer NBCR coverage was limited.

According to responses to our questionnaire, 10 of 15 insurers said that they did not favor establishing a prefunding mechanism (such as a pool) in place of the current postfunding mechanism under TRIA (recoupment). A prefunding mechanism could, for example, allow insurers to set aside tax-deductible reserves for terrorist events or involve the creation of risk-sharing pools. One insurer supported a prefunding mechanism, and four other insurers did not provide comments on the advantages and disadvantages associated with a prefunding mechanism. Insurers not in favor of a mechanism to charge for federal reinsurance cited a number of obstacles that would need to be considered, such as the following:

Increased administrative costs. Several insurers commented that a prefunding mechanism would require additional resources and staff to administer. For example, one insurer said that Treasury would have to expand its staff and augment their expertise to administer a prefunding mechanism. This insurer also noted that Treasury would have to collect and analyze exposure and other pricing data, utilize terrorism risk models, obtain staff with actuarial and underwriting expertise, conduct audits of insurers, and manage a billing process.

Difficulties funding a reinsurance pool. Insurers noted challenges in accumulating sufficient reserves for a pool and in managing the pool effectively.
For example, one insurer commented that the federal government likely would not be able to accumulate enough funds for such a pool. Insurers also cited other challenges involved in prefunding federal reinsurance, such as decreases in the purchase of terrorism insurance due to its increased cost, or a lack of coverage after an event depletes the fund.

Challenges in estimating the frequency of terrorist attacks. The unpredictability of terrorist attacks and the inability to effectively underwrite terrorism risk would need to be considered. One insurer commented that any prefunding mechanism would be purely speculative and that contribution amounts would bear little relationship to the likely losses from an event. Additionally, because it is difficult to assess the potential frequency and severity of a terrorism event (key components in pricing or funding for risk), insurers commented that postfunding (recoupment) was the preferable approach for terrorism risk.

Increased cost to policyholders. Insurers commented that if a prefunding mechanism were established, it likely would result in increased costs, such as administrative costs and increased costs to policyholders. For example, one insurer told us that, as a user of such a mechanism, it would pass its costs on to policyholders. This insurer also noted that the additional costs might cause policyholders to forgo terrorism coverage; therefore, in its view, a prefunding mechanism could contravene the purpose of TRIA: to encourage the availability and affordability of terrorism coverage for policyholders that want to insure against terrorism exposure.

Some uncertainty exists in the market about whether TRIA covers cyber terrorism risks. According to industry participants, cyber attacks could involve a wide spectrum of potential threats that could affect property, critical communications, and utility and transportation infrastructure, among other unconventional threats. Industry participants also pointed out that cyber events have the ability to affect numerous lines of insurance coverage. While TRIA does not explicitly exclude coverage of cyber risks (or other specific perils), it also does not explicitly cover them. Program guidance and other official communications have been silent on this point, allowing for confusion or uncertainty about coverage. According to our questionnaire, 8 of 15 insurers considered losses resulting from cyber terrorism to be covered by TRIA as long as the insurer underwrote cyber terrorism coverage as part of the underlying policy. However, our questionnaire also revealed uncertainty about such TRIA coverage. For example, the 7 other insurers said that, based on their understanding of TRIA, they did not know whether losses resulting from cyber terrorism would be covered by TRIA. Insurers commented that more clarity about the treatment of cyber terrorism under the program would help eliminate uncertainty in the insurance industry about coverage of this type of risk. Insurers also commented that because cyber terrorism is an emerging risk, there was some uncertainty about what the term encompassed. For example, one insurer noted that there is no statutory definition of cyber terrorism and that, depending on the definition, the program may or may not be triggered; therefore, the insurer said, a consistent definition would be needed.
Another insurer said it was unsure whether the industry has had a consistent approach to defining and covering cyber terrorism in policies and suggested that a technical working group or clarification from Treasury could help make this clearer. Some insurers, industry associations, and brokers also noted that because cyber terrorism is a new and evolving risk that has only recently come into focus, clarification about how it would be covered under TRIA would be helpful and could increase capacity in the market. For example, in its comments to PWG, Aon noted a lack of cyber insurance capacity in certain industries, such as large energy, utility, gas, and water entities. In addition, in its 2013 statement to PWG, the American Academy of Actuaries stated that cyber terrorism is a significant risk and that clarification is important because of the nation's ever-increasing dependence on technology, including for commerce and business administration. As discussed earlier in this report, Treasury issued an interpretive letter in 2004 that clarified whether losses from an NBCR event would be covered under TRIA. In addition, our work on fiscal exposures demonstrates the importance of complete information about such exposures. Specifically, a more complete understanding of the sources of fiscal exposure and the way they change can provide enhanced control and oversight over federal resources.

Treasury acknowledged that there has been growth in the cyber insurance market. Treasury officials said that they had not issued any clarifications because they viewed clarification as unnecessary and believed that explicitly listing TRIA-covered events could create unnecessary coverage disputes. While TRIA does not explicitly prohibit coverage of cyber terrorism risk, neither does it explicitly allow it. However, without clarification of the coverage of cyber risks, some insurers may not offer cyber coverage or may explicitly exclude it. As a result, coverage may be less available. Additionally, inclusion of cyber risks affects the government's fiscal exposure under TRIA, and without gathering information from the industry to help clarify the definition and coverage of this risk, the federal government would not understand the potential impact of losses from a cyber attack.

Congress enacted TRIA and its reauthorizations to help ensure the availability and affordability of insurance for terrorism risk and provide a transitional period in which the private insurance market could determine how to model and price terrorism risk. However, Treasury has not collected comprehensive data directly from insurers. Federal internal control standards state that agencies should identify and obtain relevant and needed data to be able to meet program goals. Obtaining comprehensive data is necessary to thoroughly analyze the market. While Treasury stated that the information available from other sources has been sufficient for purposes of responding to TRIA's reporting requirements, more data and periodic assessments of the market would help Treasury assess whether the program's goals of ensuring the continued widespread availability and affordability of terrorism risk insurance and addressing market disruptions are being met, and would advance decision making about potential program changes and their impact on the market. Moreover, Treasury has performed limited analyses of the potential amount of fiscal exposure the program represents.
While no terrorist attacks have triggered TRIA, the program still creates an explicit fiscal exposure for the government because the government is legally required to make payments for certified terrorist events. According to industry best practices, analysis of exposures is important for understanding the financial risks of a potential terrorist attack. In addition, federal internal control standards state the importance of analyzing risks to programs, and our prior work on fiscal exposures highlights how estimates could be developed to better understand such exposures. By enhancing its data analyses, Treasury would be in a better position to estimate the amount of fiscal exposure under various scenarios of potential terrorist attacks and to inform Congress of the potential fiscal implications of any program changes. By better understanding fiscal exposure, Treasury can aid Congress in monitoring the financial condition of the program and its potential impact on the federal budget over the longer term.

In the last few years, demand for terrorism insurance may have leveled off, as indicated by available data. However, insurers are concerned about how a new type of terrorist threat, cyber attacks, would be treated under the program, and some industry sectors have experienced difficulty obtaining coverage. Some terrorism risk insurers told us they do not know whether losses resulting from cyber terrorism would qualify for coverage under TRIA, which may affect their decision to cover it. In the past, Treasury issued an interpretive letter to clarify the treatment of NBCR risks under the program; TRIA is silent on cyber threats. Clarification of the coverage of cyber risks could spur additional capacity in the market for this type of risk. Additionally, such clarification could help estimates of the government's fiscal exposure more accurately reflect the potential risks.

We recommend that the Secretary of the Treasury take the following three actions:

Collect the data needed to analyze the terrorism insurance market. Types of data may include terrorism coverage by line of insurance and terrorism insurance premiums earned. In taking this action, Treasury should determine whether any additional authority is needed and, if so, work with Congress to ensure it has the authority needed to carry out this action.

Periodically assess data collected related to terrorism insurance, including analyzing differences in terrorism insurance by company size, geography, or industry sector; conducting hypothetical illustrative examples to help estimate the potential magnitude of fiscal exposure; and analyzing how changing program parameters may affect the market and fiscal exposure.

Gather additional information needed from the insurance industry related to how cyber terrorism is defined and used in policies, and clarify whether losses that may result from cyber terrorism are covered under TRIA. Clarification could be made through an interpretive letter, revisions to program regulations, some combination of these, or any other vehicle that Treasury deems appropriate.

We provided a draft of this report for review and comment to the Department of the Treasury (Treasury), including the Federal Insurance Office, and to the National Association of Insurance Commissioners (NAIC). We received written comments from Treasury, which are presented in appendix II. NAIC did not provide written comments. Treasury and NAIC also provided technical comments, which we incorporated as appropriate.
In its written comments, Treasury agreed with our recommendations on collecting and assessing data to analyze the terrorism insurance market but, with respect to our recommendation about clarifying guidance on coverage of cyber terrorism, said that it did not believe an advance determination of such an event would be helpful or appropriate. Treasury agreed to collect the data needed to analyze the terrorism insurance market and to periodically assess these data for certain purposes, such as differences in terrorism insurance by company size, geography, or industry sector and the effects of changing program parameters on the market for terrorism risk insurance. Treasury also noted that collecting and analyzing market data would not provide a basis to meaningfully estimate the fiscal exposure of the government under the program and that the amount of federal payments to insurers resulting from acts of terrorism hinges on multiple variables that cannot be predicted with precision. As discussed in the report, limitations in modeling the probability of this type of risk exist, but we maintain that estimating the potential magnitude of fiscal exposure under various hypothetical scenarios of terrorist attacks could help inform Congress of the potential fiscal implications of any program changes, including changes that could limit federal fiscal exposure. Further, accounting for insurers' deductibles and recoupment in these estimates could aid Treasury in monitoring the potential impact of the program on the federal budget over the longer term. In light of Treasury's response, we have revised our draft recommendation to clarify what types of analyses to conduct and to specify that illustrative examples of different terrorist attack scenarios could be used for the analysis of the potential magnitude of fiscal exposure.

Regarding our third recommendation, that Treasury should clarify whether losses that may result from cyber terrorism are covered under TRIA, Treasury stated that TRIA does not preclude federal payments for a cyber terrorism event if it meets the statutory criteria for an act of terrorism. Treasury also stated that while the agency will continue to monitor this issue as it develops and collect applicable market data as necessary, it does not believe that providing an advance determination of when a cyber event is an act of terrorism would be helpful or appropriate. As discussed in the report, clarification of whether losses from a cyber terrorism event could be eligible for coverage under TRIA is needed because of existing uncertainties regarding this coverage. Such clarification would not necessarily require an advance determination of what types of cyber events would qualify as acts of terrorism under the statute. As we discussed in the report, 7 of the 15 insurers responding to our questionnaire did not know whether losses resulting from cyber terrorism would be covered by TRIA. In addition, a large broker noted a lack of cyber insurance capacity in certain industries. Given the uncertainties about what this emerging risk encompasses and whether losses resulting from a cyber terrorism event would qualify for coverage under TRIA, clarification would be helpful to spur additional market capacity for this risk, consistent with the program's goals of ensuring availability and affordability of terrorism risk insurance.
In light of Treasury's response, we revised our draft recommendation to specify the type of information to be gathered from the industry to help inform Treasury's decision regarding guidance on cyber terrorism.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to Treasury, NAIC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The objectives of our report were to (1) evaluate the extent of available data and the U.S. Department of the Treasury's (Treasury) efforts in determining the government's exposure for the terrorism risk insurance program, (2) describe changes in the terrorism insurance market since 2002, and (3) evaluate potential impacts of selected changes to the Terrorism Risk Insurance Act (TRIA). For each of our objectives, we reviewed relevant laws, particularly the Terrorism Risk Insurance Act of 2002, its amendments, and implementing regulations. We also reviewed relevant literature and past reports on terrorism risk, including the Congressional Research Service's reports on the current program and recent legislation. Additionally, we reviewed our previous reports on TRIA and updated our work accordingly. We reviewed reports to Congress from the President's Working Group on Financial Markets (PWG). We also reviewed comments submitted by the public to PWG for its most recent TRIA report. We interviewed federal officials and staff from Treasury, the National Association of Insurance Commissioners, and the Congressional Budget Office. We also interviewed several industry participants (such as representatives from insurance trade associations, terrorism risk modeling firms, rating agencies, and insurance brokers) to obtain information for all our objectives.

Because detailed information on terrorism insurance was not publicly available, we developed a questionnaire to solicit information applicable to all our objectives from 15 insurers from which businesses had purchased terrorism coverage in 2012. The 15 companies, comprising 10 of the largest U.S. commercial property and casualty insurers in lines subject to TRIA (by premium volume) and 5 additional insurers recommended to us by an insurance broker, trade association, or both, represented roughly 40 percent of the commercial property and casualty market (by direct earned premium volume for 2012), according to SNL Financial data. We included questions about coverage, premium volume, and underwriting decisions. We also obtained views on potential modifications to TRIA and how they might affect the market, and we took these views into account when developing the questions. We worked with a GAO specialist to draft the questions. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses, we conducted pretests with two different organizations in January 2014. On the basis of feedback, we revised the questionnaire to improve organization and clarity. We then sent the questionnaire to the 15 insurers in January 2014.
Some questions required closed-ended responses, such as providing a specific percentage or checking boxes. Other questions were open-ended, allowing the insurers to provide more in-depth responses on how changes to TRIA might affect them. Because the 15 insurers we contacted were selected on a nonprobability basis, the findings are applicable only to those 15 companies and cannot be generalized even to the commercial property and casualty insurers that sold terrorism coverage in 2012. They do, however, offer insight into how some private market insurers currently view parameters and topics under consideration related to government-backed terrorism insurance.

We took steps to verify the information gathered in the questionnaire and analyzed the results. We initially reviewed returned questionnaires to ensure company representatives had provided complete and consistent responses. After we received the completed written responses, we held teleconferences with representatives from each insurer to discuss, clarify, or amend responses, as appropriate. We aggregated the responses and presented summary information in this report. We used standard descriptive statistics to analyze quantitative responses and performed content analysis on the open-ended responses to identify common themes. Where possible, we corroborated insurers' responses with information or analysis from other sources. On the basis of our questionnaire design and these follow-up procedures, we determined that the data used in this report were sufficiently reliable for our purposes. Finally, GAO data analysts independently verified all data analysis programs and calculations for accuracy.

To evaluate the extent of available data, we obtained and reviewed information on the availability of data on the terrorism insurance market and on Treasury's efforts to help estimate federal exposure under TRIA. We reviewed previous reports to Congress from PWG and Treasury on TRIA. In addition, we interviewed and obtained information from officials and staff at Treasury's Federal Insurance Office and Terrorism Risk Insurance Program Office. To obtain information on Treasury's efforts in determining federal exposure, we reviewed a study from the Insurance Services Office, Inc., that provided an estimate of average annual losses under TRIA. We also reviewed documents from Treasury and information on exposure analyses from risk modeling companies, such as RMS and AIR Worldwide. We spoke with insurers, brokers, and terrorism risk modeling firms to better understand how they analyze information about terrorism exposures and to obtain information about the industry's best practices. Finally, we reviewed our work on fiscal exposures to help determine any explicit exposures created by TRIA. For example, a certified terrorism attack would represent an explicit exposure because some payment by the federal government would be legally required.

To describe changes in the terrorism risk insurance market, we obtained and analyzed available information on premiums, capacity, pricing, and take-up rates (the percentage of businesses buying terrorism coverage). We obtained information on terrorism insurance premiums from 2004 through 2012 from A.M. Best, an insurance rating agency, which had collected this information as part of its annual Supplemental Rating Questionnaire. A.M. Best provided aggregated data to us.
To compare terrorism premiums with premiums collected for other insurance lines, we obtained data from SNL Financial on premiums earned by commercial property and casualty insurers for all commercial lines and for lines subject to TRIA. Additionally, we obtained capacity, pricing, and take-up rate information from 2003 through 2013, as available, from two insurance brokers, Marsh and McLennan (Marsh) and Aon. Marsh provided nationwide pricing and take-up rate data, while Aon had information on capacity. Marsh and Aon are the largest business insurance brokers in the United States. We interviewed representatives from Marsh and Aon to ensure we had a clear understanding of which insurers the data represented and how the brokers obtain information from their data systems. All data presented in this report from Marsh and Aon solely represent their clients, cannot be generalized to the entire market, and are attributed accordingly. On this basis, we determined that the data used in this report from the insurance brokers were sufficiently reliable for our purposes. To obtain information on the reinsurance market and insurance-linked securities, we interviewed representatives from the Reinsurance Association of America and Fermat Capital Management. We also reviewed reports from industry participants, such as Swiss Re, Munich Re, and Aon Benfield, on the status of the reinsurance and insurance-linked securities markets. Finally, as part of our questionnaire, we asked insurers whether they purchased reinsurance for terrorism risk and how reinsurance purchasing patterns had changed over the last decade.

To evaluate the potential impact of selected changes to TRIA, we identified certain changes to the program's parameters based on our analysis of the program's structure, review of relevant literature, testimonies from congressional hearings, and prior changes made in the TRIA reauthorizations. On the basis of these, we asked the 15 insurers that received our questionnaire for their input on how selected changes would affect the insurance market for terrorism risk. For example, we asked insurers to identify the greatest changes to program parameters and how such changes could affect each company's capacity, pricing, and take-up rates. We also asked insurers to identify the greatest change to the aggregate industry retention amount (currently $27.5 billion) that, in their opinion, the industry could handle. We also asked insurers whether their companies underwrote nuclear, biological, chemical, or radiological risks or cyber risks and what metrics or factors Congress would need to consider if changes were made to TRIA related to losses resulting from these types of risks. Finally, we asked insurers to indicate what metrics and factors Congress would need to consider if the federal government were to establish a prefunding mechanism (in place of the current postfunding mechanism, recoupment).

We obtained information from A.M. Best on the estimated policyholder surplus of insurers potentially exposed to terrorism from 2003 through 2012 to compare, industry-wide, TRIA deductible amounts with estimates of policyholder surplus. We also compared the average TRIA deductible as a percentage of estimated policyholder surplus for the 10 largest commercial property and casualty insurers (in insurance lines subject to TRIA) with that of all other insurers, as well as the range for the 10 largest insurers.
We performed this analysis to help determine whether any differences existed between large and small insurers in terms of their TRIA deductible amounts as a percentage of surplus. We also developed examples to illustrate the impact of certain changes to the current program on the federal government's fiscal exposure. To develop the examples, we consulted with experts from terrorism risk modeling firms and reviewed relevant literature. We also made assumptions about the number of insurers affected and their direct earned premiums, market share, insurance lines, and total loss amount. We used SNL Financial insurance data and information from A.M. Best to help develop these assumptions. We compared what federal losses would be under the current program parameters (status quo) with those after making changes to the current program. For example, we compared changing the deductible from 20 to 35 percent and changing the insurers' coshare from 15 to 30 percent, in intervals of 2 to 3 percentage points. We analyzed the federal share of losses for variously sized terrorist attacks ($25 billion, $50 billion, $75 billion, and $100 billion), before and after recoupment, using the current law's aggregate industry retention amount ($27.5 billion). To help assess the reliability of our analysis, we verified that our results were consistent with a model developed by the Reinsurance Association of America.

We conducted this performance audit from July 2013 through May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Jill Naamane (Assistant Director); William Chatlos; Robert Dacey; Rachel DeMarcus; Patrick Dynes; Beth Ann Faraguna; Isidro Gomez; DuEwa Kamara; Shamiah Kerney; May Lee; Marc Molino; Erika Navarro; Susan Offutt; Barbara Roesmann; Jessica Sandler; Melvin Thomas; and Frank Todisco made key contributions to this report.
Through WIA, the Congress sought to replace the fragmented training and employment system that existed under the previous workforce system. Among other things, WIA streamlined program services at one-stop centers, offered job seekers the ability to make informed choices about training, and provided for private-sector leadership to manage this new workforce development system. To ensure better integration of employment and training services at the local level, WIA imposed requirements on at least 17 programs administered by four federal agencies. These requirements included, among others, making core employment and training services available through the one-stop centers, providing access to the programs' other services to those eligible, and supporting the one-stops' establishment and operation. As shown in table 1, these programs represent a range of funding levels, from $2.4 billion for the Vocational Rehabilitation Program to $55 million for Native American employment and training programs.

The programs also serve various target populations. For example, while many of the programs serve low-income or otherwise disadvantaged or unemployed individuals, WIA's Adult and Dislocated Worker programs can serve any individual 18 or older, as can Wagner-Peyser's Employment Service (Employment Service). In contrast, Education's Vocational Rehabilitation Services program can serve only disabled individuals and, even then, prioritizes which of them it can serve. These programs also represent a range of service-delivery methods. Many of the programs' services are administered by public agency personnel (such as those from state labor or education departments). Other programs are administered by, among others, nonprofit or community-based organizations, unions, Indian tribal governments, and community development corporations. Several of these programs consist of block grants that are provided to states and localities for a variety of efforts, which may include employment and training services. Although many of the programs provide for training, such as WIA's Adult and Dislocated Worker programs, others, such as veterans' employment and training programs, must work with other programs to obtain training for their participants.

While WIA required the establishment of one-stops, it did not prescribe their structure or specific operations. However, in guidance published in June 2000, Labor identified a range of models that could be used to comply with the law's requirements. These models included simple collocation of program staff at the one-stops with coordinated delivery of services, or electronic data sharing between partners' existing offices and the one-stops. According to Labor and others, however, the vision for future participation by partners in one-stop systems is "full integration." Labor has defined full integration as all partner programs coordinated and administered under one management structure and accounting system, offering joint delivery of program services from combined resources.

WIA gave local areas discretion to determine the means by which partners would participate in providing core services and support for the one-stops' operations. These arrangements were supposed to be formalized in a memorandum of understanding between the local workforce investment boards and each partner. As an example of coordinated delivery, partners could develop contractual agreements with other partners to provide core services, which could include referral arrangements.
WIA also provided a great deal of flexibility as to how partners could support the one-stops. For example, WIA allows partners to make financial contributions (for example, paying rent for staff collocated at the one-stop) or to provide equipment or shared services (for example, teaching a class or greeting individuals who enter the one-stop).

In addition to requiring the mandatory partners to provide their core services at the one-stop, WIA changed the way partners served job seekers. WIA initiated a sequencing of services for adults and dislocated workers to ensure that they received the requisite amount of services needed to enter the workforce and that funds for more intensive services or training were targeted to those who needed them most. Accordingly, WIA required that anyone coming into the one-stop first receive only core services to aid them with their job search activities. If these efforts were unsuccessful in helping job seekers obtain or retain a job that allows for self-sufficiency, they could receive intensive services. These services are provided by one-stop staff to help job seekers find, successfully compete for, and retain a job. Intensive services can include activities such as counseling and in-depth skill assessment, as well as classes such as general equivalency diploma (GED), literacy, conflict resolution, and punctuality classes. If these activities still do not help the job seeker obtain and retain employment, the individual may be eligible to receive occupational skills training. WIA allowed local discretion regarding how individuals would move among those three levels of service. According to Labor, individuals may receive the three levels of service concurrently, and the determination that an individual needs intensive or training services, or both, can be made without regard to how long the individual has been receiving core services.

One of the criticisms of past workforce systems was that few data were available on the impact that training had on a job seeker's ability to obtain and maintain employment. Consequently, WIA established a data collection requirement, specific to its Adult and Dislocated Worker programs, covering individuals seeking jobs through WIA. WIA requires the collection of outcome data to be used to assess training providers' performance and to allow job seekers receiving training to make more informed choices about training providers. Unlike prior systems, WIA allows individuals eligible for training under the Adult and Dislocated Worker programs to receive vouchers, called Individual Training Accounts (ITAs), which can be used for the training provider and course offering of their choice, within certain limitations.

Training provider participation under WIA's Adult and Dislocated Worker programs centers on an eligible training provider list (ETPL). This list contains all training course offerings that are available to WIA-funded individuals eligible for training. Course offerings from most community colleges and other technical education providers are automatically qualified to be on the ETPL for 1 year, as long as the providers submit paperwork to each local area where they want their course offerings to be available. Once WIA-funded individuals with ITAs enroll in a course, the training provider must, to stay on the ETPL after the first year of initial eligibility, collect and report data on all the students enrolled in that course.
The providers need to collect data on (1) completion rates, (2) job-placement rates, and (3) wages at placement. WIA also required, among other things, collection of retention rates and wage gains for participants funded under the Adult and Dislocated Worker programs for 6 months following their first day of employment. This procedure has to be repeated for any new course offering that training providers may want to place on the ETPL. To have course offerings remain on the ETPL after the 1-year initial eligibility period, training providers must meet or exceed performance criteria established by the state. For example, a state might determine that only training providers' courses with an 80-percent completion rate would be allowed to remain on the ETPL. If a course failed to meet that level, it would no longer be open to WIA-funded individuals. Labor's final regulations allowed states to extend the initial eligibility period for up to an additional 6 months under certain circumstances.

WIA called for the development of workforce investment boards to oversee WIA implementation at the state and local levels. At the state level, WIA required, among other things, that the workforce investment board assist the governor in setting up the system, establish procedures and processes for ensuring accountability, and designate local workforce investment areas. WIA also required that boards be established within each of the local workforce investment areas to carry out the formal agreements developed between the boards and each partner, and to oversee one-stop operations. According to Labor, there are 54 state workforce investment boards and approximately 600 local boards. WIA listed what types of members should participate on the workforce investment boards, but did not prescribe a minimum or maximum number of members. It also allowed governors to select state board representatives from various segments of the workforce investment community, including business, education, labor, and other organizations with experience in the delivery of workforce investment activities. The requirements for local board membership were similar to those for the state boards. (See table 2.)

Private-sector leadership and involvement on these boards were seen as crucial to shaping the direction of the workforce investment system. In that respect, WIA required that private-sector representatives chair the boards and make up the majority of board members. This would help ensure that the private sector would be able to provide information on available employment opportunities and expanding career fields, and help develop ways to close the gap between job seekers and labor market needs.

Although state and local boards have some responsibility for implementing WIA, numerous public agencies and other entities in states and localities operate the various programs that are mandatory partners under WIA. WIA did not provide either the state or the local workforce investment boards with control over the funds for most mandatory partner programs. The boards have only limited authority over a portion of WIA funds designated for adult and youth activities and, even then, only under certain circumstances. WIA required that the mandatory partners provide core services through the one-stop, as well as support the one-stop's operations. The mandatory partners are generally making efforts to participate in accordance with the requirements of WIA.
However, the partners raised a number of concerns that affect the level and type of participation they are able to provide and may prevent them from achieving the vision of full integration of services. Specifically, partners expressed concerns that their one-stop participation could result in changes to their traditional service-delivery methods. These changes might adversely affect their ability to serve their target populations, lead them to serve individuals otherwise ineligible for their services, or unnecessarily strain their financial resources. Implementers acknowledged that WIA gave them the flexibility to address many of these individual concerns at the local level. However, they noted that their ability to establish and maintain effective one-stop operations is hampered when each partner has significant limitations affecting how it can participate and may be unwilling or unable to fully integrate services. Available guidance from responsible federal agencies has not adequately addressed many of these specific concerns, resulting in continued confusion or reluctance to participate in the one-stops.

Many of the mandatory partners have raised concerns that altering their existing service-delivery methods to participate in the one-stops and respond to the vision of full integration could adversely affect the quality of services they provide to their target populations. Since the implementation of WIA, partners who serve special populations have repeatedly raised these concerns in comments to Labor and to their parent agencies. These issues were also raised in a study that found that Vocational Rehabilitation partners were concerned that one-stop facilities may not adequately accommodate the special needs of disabled participants who may require more specialized services, equipment, or personnel, such as staff who know sign language. As a result, even though Vocational Rehabilitation staff were present in some form (either through collocation or referral) at all of the nine one-stops we visited, Vocational Rehabilitation agencies continued to maintain their own preexisting program offices to accommodate their eligible individuals' special needs. Staff told us that because WIA did not require offices to close, they believed it was prudent to maintain the existing service-delivery structures so as not to limit the quality of services for their eligible population.

Other partners have said that they did not see how participation in the one-stop would benefit their eligible populations, who were already receiving services through the existing structures. For example, California Department of Education officials told us that low-income and disadvantaged populations in California already have full access to the community college system at low or no cost, decreasing the incentive for partners providing services under Perkins and the Adult Education and Literacy Program to participate in the one-stops in that state. Other partners questioned the value of participation because of the type of individuals they serve or the manner in which their services are provided. Across the nine one-stops we visited, there were programs, such as the Native American Program or the Migrant and Seasonal Farmworker Program, that may have had few eligible individuals in the area; without a critical mass of eligible individuals to serve at the one-stop, these programs saw less value in participating.
For example, for seven of the nine one-stops we visited, the Native American Program relied on referrals of potentially eligible individuals from other one-stop partners rather than providing staff to collocate at the one-stops. Other partners, such as those funded under the Community Services Block Grant or carrying out HUD's employment and training activities, are required to be involved only if they offer employment or training services. This may explain why partners representing the Community Services Block Grant and HUD's various workforce development initiatives were not present at three of the nine one-stops we visited. At four one-stops, these partners left information about their programs at the one-stop for individuals to access independently and/or had the one-stop staff direct individuals to the grantees' programs located elsewhere. Additionally, according to HUD officials, in many cases clients receiving HUD services, such as housing assistance, are located in centralized areas, such as subsidized housing projects. This means there are likely few potential HUD clients who would enter a one-stop not located at a housing project, and HUD clients located at housing projects would have little reason to go to the one-stop for services.

Although state and local implementers reported that programs lack sufficient guidance addressing how one-stop participation will meet the needs of their eligible populations, some have still found ways to encourage programs to participate. State and local implementers said that Labor's and Education's published guidance concerning how the programs can provide their core services has not sufficiently identified ways to address partners' concerns about potential adverse effects on service to target populations. However, a private-sector consultant providing assistance to local areas said that in one local area, partners providing Vocational Rehabilitation services are willing to participate in the one-stop because staff became convinced that serving their eligible population there would improve the quality of service for disabled individuals. Rather than addressing partners' concerns about the potential adverse effect their one-stop participation may have on their eligible populations, some state and local implementers have tried to encourage participation in one-stops by offering incentives. For example, one local area allows partners to use one-stop facilities to teach classes, while another allows partners to use the facilities to assess eligible individuals' literacy levels.

A number of partners with narrowly defined program requirements or special target populations have expressed concerns to their parent agencies and to us that altering traditional service-delivery methods to participate in the one-stops or respond to the vision of full integration could conflict with their own programs' requirements or commitments regarding which individuals are eligible for the services they offer. (See table 3.) As a result, even when programs met WIA's requirements to provide core services at the one-stop, they focused on their own eligible populations. For the nine one-stops we visited, even though a majority of the partners were participating, only a few of them, such as Employment Service and WIA's Adult and Dislocated Worker programs, were authorized to serve the broad range of individuals who came into the one-stop for services. The others served the more limited number of individuals specifically eligible for their services.
The latter partners also tended to provide support in the form of rent payments rather than shared services, because they believed that providing shared services would conflict with their programs' mandates.

Vocational Rehabilitation staff have raised concerns to both Education and us about how they can participate in the one-stop without violating their program's mandates. Vocational Rehabilitation staff serve disabled individuals, yet many who come into the one-stop are either not disabled or do not meet the program's order-of-selection requirements, under which individuals with the most significant disabilities are afforded priority for services. As a result, the staff do not believe they can provide core services to everyone coming into the one-stop. They also believe their order-of-selection requirements make it difficult to provide shared services, such as providing initial intake or serving as a greeter, because an individual—even a disabled one—may not meet previously set order-of-selection requirements. Other partners told us that they believe all disabled individuals should first be served by the Vocational Rehabilitation program. They said that in some one-stops, an individual with disabilities might be sent to the Vocational Rehabilitation staff only to be sent back to WIA staff for core services. In response to concerns raised by Vocational Rehabilitation staff, Education issued regulations reaffirming that Vocational Rehabilitation staff must participate in the one-stop and provide one-stop operational support services. However, the regulations also noted that such participation must be consistent with existing Vocational Rehabilitation programmatic requirements. The lack of explicit direction has led to continued confusion and a general hesitancy among Vocational Rehabilitation staff to perform activities not normally provided in their existing offices. This may explain why, at the one-stops we visited where Vocational Rehabilitation staff were collocated, they focused only on their eligible population and did not provide even permissible shared services, instead generally contributing rent as their support of the one-stop's operations.

Veterans' staff have also voiced concerns regarding the relationship between their program mandate and WIA. Partners providing veterans' services were collocated at the nine one-stops we visited; however, the veterans' staff at those one-stops said they could not provide shared services, such as initial intake, because that would mean serving the entire range of one-stop users, whereas veterans' staff are allowed to serve only veterans. We were also told by local implementers that veterans' staff may be unwilling to teach orientation or job preparation classes at the one-stop to any nonveterans, even if there were veterans participating in the classes. Labor officials with whom we spoke believed that it was permissible for veterans' staff to teach such classes as long as the majority of students were veterans. However, the same officials said that having veterans' staff serve nonveterans was a violation of the program's mandate. In its comments on this report, Labor said that Veterans' Employment and Training Service funding is provided to states to be used exclusively for services to veterans and that if services were provided to nonveterans, the funding connected with such service would be disallowed. Labor has not published adequate guidance to help staff resolve these specific issues. This may explain why there are varying degrees of participation in local one-stops by veterans' staff.
For example, we were told that there were one-stops where veterans' staff provided services to support the one-stop's operations, such as teaching classes attended by nonveterans.

Adult Education and Literacy providers, who participated in all nine one-stops we visited, have also raised concerns about meeting both their own program commitments and WIA's requirements for one-stop participation. WIA provides that, in competitively awarding funds to Adult Education and Literacy providers, a preference must be given to those providers that have a commitment to serve individuals in the community who are most in need of literacy services, including low-income individuals and individuals with minimal literacy skills. This means that, in some cases, an individual at the one-stop needing literacy training may not meet the standards that Adult Education and Literacy providers apply to determine who will be given priority for services. As a result, the individual may be sent to Adult Education and Literacy, only to be sent back to WIA's Adult program for services. Although both Labor and Education have emphasized that state and local partners must collaborate to identify and address literacy and other service needs in a community, neither agency has issued guidance addressing those instances when such conflicts arise from a lack of joint planning between Adult Education and Literacy and WIA. In some areas, partners tried to work around these limitations, such as by using WIA funds to obtain an outside tutor or other appropriate service for the individual. In other cases, Adult Education and Literacy providers charged a fee for services provided to WIA clients when those services were not consistent with their service priorities. Education officials also advocated various partners' jointly financing a separate staff person to perform greeter and initial intake services.

Many of WIA's mandatory partners also identified resource constraints that they believe affected their ability to participate in, as well as fully integrate their services into, the one-stops. The first issue was the overall level of funding. Several of the partners we interviewed said they were not provided additional funding that would have enabled them to provide services at the one-stops in addition to covering the expenses associated with their existing offices. This funding would also have allowed the partners to devote significant resources to establishing sophisticated electronic links between existing offices and the one-stops. The participants in the GAO-sponsored symposium also identified insufficient funding levels as one of the top three implementation problems. Labor also found that in many states, the agencies that administer the Employment Service program had not yet been able to collocate with the one-stops, although Labor's regulations indicate that this is the preferred method for providing core services. Employment Service officials and one-stop administrators told us that this was often because they still had leases on existing facilities and could not afford the costs of breaking those leases. Limited funding made it even more difficult to assign additional personnel to the one-stop or to devote resources to developing electronic linkages with the one-stop. In the states we visited, mandatory partners told us that limited funding was also a primary reason why, even when they collocated staff at the one-stop, they did so on a limited or part-time basis.
Resource limitations may help explain why, at the nine one-stops we visited, mandatory partners employed a wide range of methods to provide the required support for the operation of the one-stops. WIA's Adult and Dislocated Worker programs (at all sites) and Employment Service (at most sites) were the only partners consistently making monetary contributions toward the one-stops' operational costs. Other mandatory partners tended to make in-kind contributions—for example, Perkins and Adult Education and Literacy partners provided computer or GED training. Pennsylvania, however, was able to encourage all of its partners to provide some type of financial support, while in California and Vermont many partners were not required to provide any.

Mandatory partners also identified restrictions on the use of their funds as another constraint affecting both their participation in the one-stops and the opportunity for full integration. Some programs, for example, have caps on administrative spending that affect their ability to contribute to the support of the one-stop's operations. WIA's Adult and Dislocated Worker programs have a 10-percent administrative cap that must cover both the one-stops' operation and board staff at the local level. According to a survey conducted for us by a national association, 61 of the 69 respondents stated that this cap limits their ability to fund both functions, especially given the funding limitations of other programs. In addition, Education reported that its regulations generally prohibit states from using Education funds for acquisition of real property or for construction. This means partners, such as those carrying out Perkins, cannot provide funds to buy or refurbish a one-stop. Moreover, Adult Education and Literacy and Perkins officials noted that, under WIA, only their federal funds can be used to support the one-stop. Because only a small portion of the funds they have available at the local level comes from federal sources, their ability to contribute is further limited.

Several of the partners reported to us, and to Labor and Education, that they are not sure how to define or account for allowable activities in the WIA environment. For example, partners said existing guidance from the Office of Management and Budget (OMB) and Labor might not address situations in which costs must be allocated across programs with different or competing missions. In that respect, several implementers said that if some programs are unwilling or unable to contribute, costs will tend to be shifted to the Adult and Dislocated Worker partners—the programs having the broadest mission of any partners at the one-stop and the greatest responsibility for ensuring the one-stops' effective operation. OMB requires that programs properly account for all shared services. If, for example, a partner dedicated a copy machine to the one-stop and that machine was used by all partners, the providing partner would have to be reimbursed by every partner using it to remain in compliance with OMB regulations. According to a number of partners, tracking that kind of information is very difficult in this shared environment. This may explain why, in most of the nine one-stops we visited, partners tended to bring and use their own administrative supplies and materials, and shared very few items.
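The reimbursement arithmetic implied by OMB's shared-services rules is simple in principle, even if tracking usage is not. The following is a minimal sketch of one plausible approach, a usage-based allocation; the partner names, cost figure, and usage counts are hypothetical, and an actual cost-allocation plan would have to follow OMB cost principles and any terms negotiated in the local memorandum of understanding.

```python
# Minimal sketch of usage-based cost allocation for a shared one-stop
# resource, such as a copier one partner dedicated to the center.
# Partner names, the cost figure, and usage counts are hypothetical.

def allocate_shared_cost(total_cost, usage):
    """Split a shared resource's cost across partners in proportion to use."""
    total_use = sum(usage.values())
    if total_use == 0:
        return {partner: 0.0 for partner in usage}
    return {partner: total_cost * count / total_use
            for partner, count in usage.items()}

# Copies made by each partner during the billing period (illustrative).
copier_usage = {
    "WIA Adult/Dislocated Worker": 4200,
    "Employment Service": 2800,
    "Vocational Rehabilitation": 700,
    "Adult Education and Literacy": 300,
}

shares = allocate_shared_cost(1600.00, copier_usage)
for partner, owed in shares.items():
    print(f"{partner}: ${owed:,.2f}")
# The partner providing the copier would be reimbursed for every
# share except its own.
```

The difficulty partners described is not this calculation but the record keeping it presupposes: each program must track its own use of every shared item throughout the period.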
Partners have also stated that guidance from Labor does not provide adequate detail about how to account for personnel who, in the process of providing support services, may be providing services to potentially ineligible populations. For example, if Vocational Rehabilitation staff were willing to provide initial intake services at the one-stop, it is not clear how the time spent would be reported if no disabled individuals entered the one-stop. This may also help explain why, in most of the nine one-stops that we visited, only those partners with broad target populations provided shared services, such as intake. Labor has convened a one-stop workgroup that, according to Labor officials, plans to continue examining these issues and to work with OMB to establish guidelines on what partners can and cannot do. In comments to us, Labor also reported that it has drafted a financial management technical assistance guide that provides information on financial and administrative requirements applicable to some of the Labor programs. It plans to finalize this guide and begin training in October 2001.

Despite these problems, we found several local areas that were making efforts to compensate for funding limitations. For example, a number of one-stops in Pennsylvania have brought in additional paying partners, such as businesses and nonprofit entities, to provide funds to help support one-stop operations. In California and Vermont, officials are using various state sources of training funds to leverage WIA's funds. In one of the states we visited, local areas decided to classify expenses associated with running the one-stop as programmatic rather than administrative so that recorded administrative costs could be kept to a minimum.

As a result of their experiences, state and local implementers have developed a number of ideas for actions that they believe could alleviate the programmatic and financial concerns affecting the level and type of partner participation at the one-stop (as shown in table 4). Although there was broad consensus among those we contacted that these concerns needed to be addressed, there was no consensus on how best to do so or on how to maintain the flexibility that was key to WIA's implementation. Some of the ideas include providing more specific guidance at the federal level to overcome these concerns, while others call for legislative and/or regulatory action. These actions include amending partners' enabling legislation to mandate changes in their service-delivery methods, requiring additional partners to participate, or expanding the scope of partners' allowable activities at the one-stop.

WIA job seekers may have fewer training options to choose from because training providers are reducing the number of course offerings they make available under WIA. According to training providers, WIA's data collection and reporting requirements are burdensome, and they question whether it is worthwhile to assume this burden because so few individuals have been referred to them under WIA. Among the workgroups Labor has established is one to address training provider concerns, but the workgroup has not yet provided detailed guidance to states and localities. If these data collection and reporting requirements continue to discourage training providers from participating, WIA's goal that job seekers receive enhanced choice in training options might be jeopardized.
According to training providers and other state and local implementers we interviewed, WIA's data collection and reporting requirements are burdensome for three reasons. First, providers have to collect data on a potentially large number of students. Second, there are problems with the methods available for collecting these data. Third, WIA data collection and reporting requirements are different from those of other programs for which training providers must also collect data. Moreover, training providers did not necessarily see the data they are required to collect as accurate and useful for assessing their performance.

Training providers have voiced concerns to us and to Education that the number of students for whom they must potentially collect data presents a significant burden. First, WIA requires that training providers report program completion, placement, and wage data for all students in a class, regardless of whether they were WIA-funded. In other words, if one student in a class of 100 was WIA-funded, the training provider would be required to provide data on all 100 students. WIA also requires training providers to report additional information on WIA-funded students within 6 months of completion of the class. Part of the burden perceived by training providers may stem from their belief that WIA required them to perform this 6-month followup on all of the students in a particular course. Although WIA did not require this type of followup, it did provide the Governor, or the local board, with the option of requiring a provider to submit this additional information. WIA further provided that if such a request imposed extraordinary costs on providers, the Governor or the local board should provide access to cost-effective methods for the collection of this information, or supply additional resources to the provider to aid in the collection.

Second, training providers reported problems with the methods available for collecting these data. WIA did not specify how training providers would collect or report this information. In a number of states, training providers were providing student information, such as social security numbers (SSNs), to state agencies responsible for WIA implementation, such as state departments of labor. These agencies then attempted to match SSNs with unemployment insurance (UI) wage records (which are based on SSNs) to acquire the necessary data for WIA as well as non-WIA participants. Training providers said that providing SSN information to states might be efficient because states are required to use UI data in assessing their own performance under WIA and would be able to incorporate the training provider outcome data in their ongoing data analysis efforts. Moreover, because states are required by WIA to verify the data provided by training providers, having access to SSNs would facilitate that process. However, training providers highlighted limitations in the UI data that needed to be addressed through additional data collection. For example, the UI data do not include federal employees, military personnel, farm workers, the incarcerated, the self-employed, and those employed out of state. Moreover, there is a significant time lag in the availability of the data.
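The matching step these state agencies attempted can be illustrated with a short sketch. The example below joins one course's student records to UI wage records by SSN to derive placement and wage-at-placement outcomes, and applies a completion-rate criterion of the kind a state might set (the 80 percent figure mirrors the earlier example). The record layouts and data are hypothetical, and a real match would also have to account for the UI coverage gaps and reporting lags just described.

```python
# Hypothetical sketch of deriving WIA outcome data by matching a course's
# student records to UI wage records by SSN. Record layouts and data are
# invented; real UI data omit some workers (federal, self-employed,
# out-of-state, etc.), so a match like this undercounts placements.

students = [  # one record per enrollee in a course offering
    {"ssn": "111-11-1111", "completed": True},
    {"ssn": "222-22-2222", "completed": True},
    {"ssn": "333-33-3333", "completed": True},
    {"ssn": "444-44-4444", "completed": False},
]

ui_wages = {  # SSN -> quarterly wages found in the UI wage records
    "111-11-1111": 5200.00,
    "333-33-3333": 4100.00,
}

completers = [s for s in students if s["completed"]]
placed = [s for s in completers if s["ssn"] in ui_wages]

completion_rate = len(completers) / len(students)
placement_rate = len(placed) / len(completers) if completers else 0.0
avg_wage = sum(ui_wages[s["ssn"]] for s in placed) / len(placed) if placed else 0.0

STATE_THRESHOLD = 0.80  # hypothetical state performance criterion
print(f"completion rate: {completion_rate:.0%} "
      f"(remains on ETPL: {completion_rate >= STATE_THRESHOLD})")
print(f"placement rate among completers: {placement_rate:.0%}")
print(f"average quarterly wage at placement: ${avg_wage:,.2f}")
```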
Training providers also highlighted privacy concerns regarding the provision of SSNs to state agencies. They said the Family Educational Rights and Privacy Act (FERPA) generally prohibits an educational institution from disclosing personally identifiable information (such as an SSN) from individual student records without prior written consent from the student unless the disclosure meets one of a number of exceptions envisioned by the law and implementing regulations (such as provision of the information to Education). In January 2001, Labor and Education issued joint guidance stating that certain exceptions that could allow educational institutions to disclose this information without a student's prior consent were applicable to the WIA data collection and reporting requirements. However, confusion and inconsistency continue within both federal and state education departments as to the use of this exception. There is also confusion about the consequences of utilizing this exception, with some state-level education officials believing that a student could take them to court, alleging that disclosure without the student's consent violates FERPA or similar state-level privacy laws. While several courts have held that there is no private right of action under FERPA, there have been cases where individuals have alleged that violations of FERPA are violations of their civil rights. According to one Education official, a court recently awarded a student $450,000 for the unauthorized disclosure of information from the student's records by an educational institution.

When training providers were unwilling, or believed that they were unable, to provide SSNs to the state, they used other methods to gather the information, which they said were even more resource-intensive. For example, in two of the states we visited, training providers told us that they planned to obtain the required information by calling all students who attended the course. This plan required them not only to track where the students were located but also to expend significant resources calling a sufficient number of them to acquire a representative sample. They said they did not have the staff available to collect data in this manner.

Third, training providers reported that WIA's data collection and reporting requirements are similar to, but not exactly the same as, those of other programs, posing an additional data collection burden on providers. This is especially true for Education's Perkins program, which generally allows state discretion as to what outcome data will be collected and how. For example, in Texas, Perkins and WIA have different definitions of a program completer. The state defined completion for most WIA-eligible training programs as receiving a 9-hour credit certificate, that is, enough training to get a job. For Perkins, however, the state set its lowest completion point as receiving a 15-hour credit certificate from an array of state-approved courses, that is, courses that would lead to student attainment of a state-established skill proficiency. Moreover, WIA's data collection requirements often differ in scope from those of other programs. For example, Adult Education and Literacy providers must develop outcome information for all students enrolled in adult education and literacy programs, while WIA requires outcome information for different groupings of students, for example, only those in particular courses.

Training providers did not necessarily see the data WIA requires them to collect as accurate and useful for assessing their performance. This perception made them less willing to take on the data collection burden.
Several community colleges, for example, told us that WIA's measure of program completion fails to reflect how a community college serves individuals. A student may leave a course midway through the class because the student has acquired the necessary skills or has obtained employment. Thus, the community college may have met the needs of the individual, even though the individual did not complete the course.

As a result of these concerns, training providers are withdrawing their participation from the WIA system, especially because they have access to the same populations of students through other programs, such as Welfare-to-Work, whose data collection requirements may be less burdensome. In fact, we found that the number of providers and course offerings on the ETPL has decreased in many locations. For example, between July 2000 and July 2001, Vermont's list decreased from offering 600 programs by 80 providers to offering 158 programs by 46 providers.

In some locations, state agencies and training providers are trying to work together to overcome some of these concerns. For example, in California, community colleges in one county have chosen to classify WIA-funded training participants as being enrolled in a separate college. Only the name of this college, and not the name of the community college where the classes were actually held, has been placed on the ETPL, easing the burden on providers who previously had to collect data on non-WIA students as well. In addition, WIA allows local boards to accept certain other program-specific performance information for the purposes of fulfilling WIA's eligibility requirements, if the information is "substantially similar" to what WIA requires. In this regard, California's education community received approval from the state workforce investment board to use Perkins' outcome data as substantially similar measures until the state is able to fully implement other outcome data measures. Finally, at least one state was able to address concerns about privacy protections under FERPA because the agency receiving and analyzing the data was located within the state Department of Education.

Labor has established a workgroup—its adult and dislocated worker workgroup—to address many of the issues that training providers described as burdensome. Labor's goal is to craft solutions that do not penalize states already collecting the data successfully. However, the workgroup has no deadline for completion and does not include all the key players. For example, the workgroup does not include training provider representatives, although Labor officials said they invited an association representing community colleges to meetings. The lack of formal membership by these key players may limit the value of any solutions developed or the willingness of training providers to adopt those solutions. Moreover, the workgroup has not yet provided guidance, such as products and materials on subsequent eligibility and consumer report requirements. Some state and local implementers we spoke to felt that the continued confusion surrounding the provision of SSNs to noneducational entities needed to be resolved at the state or federal level through a mechanism stronger than guidance, such as an amendment to FERPA itself.

Training providers have said that the data collection requirements are even more burdensome given that they have received few job seekers for training since WIA was implemented in their states.
According to regional Labor officials and several of the national associations we interviewed, training providers are receiving relatively few training referrals under WIA. For the nine one-stops we visited, training providers had been sent, on average, only six individuals with ITAs since July 2000. Moreover, officials from a local area encompassing nine counties told us that their two one-stops had provided no ITAs to individuals until March 2001, and had sent a total of 11 individuals to training offered by four of the area's eligible providers between March and July 2001. In addition, in some of the local areas we visited, there were financial limitations on the amounts of the ITAs, which did not necessarily cover the cost of some of the course offerings on the ETPL. Therefore, not all classes on the ETPL were available to some WIA-funded individuals. Moreover, training providers are not always able to recoup the costs they are expending to collect and report the required data unless they build this extra cost into the cost of training.

There are a variety of reasons why the number of job seekers who have been sent to training is low. These reasons were identified by the state and local implementers as well as several national associations we interviewed, and by the respondents to the surveys conducted for us. First, local areas have generally adopted a "work-first" approach to implementing WIA, encouraging job seekers to try to obtain employment without training. In that respect, local areas have set a level for what constitutes a "sustainable wage" (the minimum wage level at which a job is considered to provide for self-sufficiency and qualify as an acceptable placement for a job seeker) that allows them greater flexibility in placing an individual in a job without training. We also found that many local areas required job seekers to perform a number of activities before they were able to qualify for training. For example, job seekers were often required to spend a certain amount of time looking for a job or go on a certain number of interviews before they could be approved to receive training with WIA funds. This may also have reduced the number of individuals who received training. Second, some state and local implementers said that, given the strong economy, employers were more interested in hiring workers than waiting for them to complete training classes. Third, according to local implementers, the Adult and Dislocated Worker programs have had little money left over for training because they, along with the Employment Service, have had to consistently bear a greater share of the costs associated with establishing and maintaining the one-stop, as well as providing core and intensive services to job seekers. Moreover, WIA required local areas to use alternative funding sources, such as Pell grants (a form of federal financial aid available to students), to leverage their training dollars, but state and local implementers were uncertain whether they could do this. Finally, according to the state and local implementers we interviewed and the national associations representing them, the establishment of performance measures for adult and dislocated workers may be discouraging one-stops from placing individuals into training.
Because incentives and financial sanctions, such as a loss of program funds, are now linked to performance on a series of measures (for example, employment entry, earnings gain, or job retention), one-stops may be hesitant to send to training individuals who, in the minds of one-stop administrators, are not likely to complete training and obtain a job that meets the performance measures. This particularly affects certain types of individuals, such as incumbent workers whose wage gain may not meet performance levels, or hard-to-serve individuals who may be diverted to other partners' programs for training or placement.

As a result of their experiences, state and local implementers have developed a number of ideas for actions that they believe could address the concerns raised by training providers and other state and local implementers (as shown in table 5). Although there was broad consensus among those we contacted that these concerns needed to be addressed, there was no consensus on which ideas had greater potential to do so or which would best maintain the flexibility that was key to WIA's implementation. Some of the ideas included actions that could be taken at the local level, such as the suggestion that one-stops increase their use of customized and on-the-job training in partnership with training providers. Others would require regulatory or legislative action, such as giving training providers additional funds to offset the cost of data collection or amending FERPA to allow for the use of SSNs to satisfy WIA's data collection requirements.

Private-sector representatives we spoke with are frustrated with the operations of the workforce investment boards under WIA, believing that the boards are too large to effectively address their concerns and that board-related entities created to help deal with the size of the boards may not reflect employer views. Labor's guidance in this area has not specifically addressed these issues. Although some private-sector representatives still appear to be making efforts to meet WIA's requirement of private-sector leadership, they told us that, if their concerns are not addressed, they may decide to decrease their involvement or stop participating. This could limit the boards' ability to establish the strong links with the business community needed to develop workforce development strategies that effectively address the needs of all individuals.

Based on the results of surveys and reports of national associations representing workforce investment boards, and according to the majority of private-sector employers and other state and local implementers we interviewed, the large number of members on boards has made it very difficult to conduct operations efficiently. For example, according to a national board association, the average number of members on workforce boards exceeds 40 in most of the places where new boards have been established since the passage of WIA. In our work, we found that Vermont had over 40 seats on its state board, California had 64, and Pennsylvania had 33. Local boards can be just as large. For example, we found one in Pennsylvania with 43 members and two in California with 45 members. The size of these boards is especially large in comparison to private-sector corporate boards. For example, General Motors' board of directors has 13 members, while Intel's board has 11.
We were told that the size of the boards makes it difficult to recruit the necessary private-sector board members for several reasons. First, because private-sector representatives must make up the majority of board membership, the larger the board, the more private-sector members must be recruited. We found several boards that had been unable to achieve the private-sector majority required by WIA. For example, Vermont's state board had about 42 percent private-sector membership, although the state is working to fill additional private-sector vacancies. Pennsylvania and California used private nonprofit institutions to achieve their private-sector majorities. Labor's survey of 132 local areas found that local areas were more successful recruiting private-sector representatives who had retired than those who were still working, which may limit the currency of the workforce knowledge the private sector brings to the board.

Second, the large number of board members makes it difficult to set up meetings. For example, officials in one local workforce investment area said they attempted to meet quarterly to accommodate the schedules of the various members. However, because members often are dispersed throughout the state, it may be difficult to handle the logistics for so many participants or to find locations for the board meetings that are convenient to all members and do not pose transportation obstacles. If members are unable to attend the meetings, boards may not be able to achieve a quorum (usually a simple majority), and therefore may be unable to make decisions.

Third, the large number of board members makes it difficult to run meetings efficiently. It may be difficult to ensure that the numerous board members all have the same information prior to the meeting, and to keep members apprised of the board's activities. In addition, it is difficult to reach agreement on important issues because more members means more opinions to be addressed and reconciled. These difficulties have been especially prevalent this past year, when boards have had to perform many administrative tasks, such as developing strategic plans or certifying one-stops, in order to set up the WIA system. Private-sector representatives and other implementers in the three states we visited said that the boards did not operate in an efficient manner. This inefficiency led to meetings that focused on administration and process rather than on outcomes and broad strategic goals, which the private-sector representatives see as the appropriate focus for a board of directors.

Some board members and association representatives indicated that it would be easier to deal with the large size of the board if they could meet in smaller groups outside of the formal board meetings to discuss important issues. At the same time, WIA's requirement that boards make information regarding their activities available to the public through open meetings may preclude such action. State and local implementers in one state told us that they believe WIA's sunshine provision prohibits decisions from being made in private and has prevented board members from meeting in smaller groups to discuss issues.
In one state we visited, employers told us that a required 72-hour public comment period for any agenda item precludes board members from adding to the agenda important items that come up at the last moment. Despite these difficulties, we found several local areas making efforts to address the problems associated with large boards. For example, some local areas have divided their boards into smaller committees focusing on specific issues, thus increasing member participation and creating a more manageable governance structure. As the next section shows, however, the downside of this approach is the potential dilution of private-sector influence if private-sector board members are not included as members of the committees. To make a state board smaller, more manageable, and more efficient, one state board chair said he hopes to remove, but not replace, board members who fail to take their participation seriously.

Labor has contracted with organizations, offered training sessions, and developed publications that provide information on how boards should operate. For example, it has contracted with a coalition of 20 private-sector organizations to produce publications and guides on WIA. However, it has not provided guidance specifically on ways to ensure that boards maintain private-sector leadership. It has also recently formed a workforce investment board workgroup, one of six workgroups formed since its implementation status survey, to consider these issues.

According to our interviews with private-sector representatives and private-sector information from national associations, additional structures that have been developed to accomplish many of the day-to-day board activities may not reflect, or may dilute, employers' input into the system. Virtually every state and local board has assigned staff responsible for carrying out much of the detailed work associated with board operations, such as setting up meetings, developing agendas, and ensuring that boards stay current with compliance issues. Private-sector representatives were concerned, however, that the staff may lack knowledge of or interest in the needs of the private sector. According to private-sector representatives and other implementers, staff are often employed by the public-sector agency responsible for carrying out WIA's Adult, Dislocated Worker, and other mandatory partners' programs in each state, which in most cases is a labor or human services agency. As a result, private-sector and other representatives expressed concerns regarding how staff can carry out their primary focus of serving the board when they report to supervisors in their respective agencies. In that respect, we were told that staff sometimes dismissed issues that private-sector representatives tried to raise because the issues were not deemed important by the state agency. In two states, private-sector and other representatives also complained that staff failed to provide them with key information for the board meetings early enough to allow them to prepare, leaving them unable to participate at the board meetings to the same extent as public officials. Private-sector representatives also questioned whether the existing public-sector staff have sufficient understanding of the environment in which business representatives operate.
Finally, although staff generally offer extensive expertise in working with job training programs, staff experienced in prior workforce systems may be hesitant to embrace WIA's vision of a more private-sector-driven and strategic system. Labor has provided little guidance or information in this area, but some locations appear to have hired staff who adequately represent the private sector. For example, in a local area in California, the WIA funds have been provided to the Office of Economic Development, from which the staff originate, to ensure that the board staff have a private-sector focus. In a local area in Pennsylvania, staff are employed by an incorporated board, which gives them greater independence from the state public agencies.

To address many of the difficulties stemming from the large size of the boards, many states and localities have established committees under the auspices of the board. Committees are generally established to address particular topics, such as youth activities or performance measures, with the goal that the committees will research the issues and decide upon a particular course of action for the board to take. However, according to our interviews with private-sector representatives and survey results, the establishment of committees to address particular topics of interest for the board could serve to dilute private-sector input into key decisions. There is no requirement that private-sector members chair these committees or even be included on them. WIA is silent on the establishment of the committees and the form that they should take, but some private-sector representatives told us that, given the important role these committees play in influencing board activities, they felt alienated when they were underrepresented or not represented on the committees. In all of the states we visited, we found that committees at both the state and local levels had little private-sector membership. Figure 1, which labels each state board committee by name, shows that only one of the state board committees had more than 50 percent private-sector membership. In the states we visited, we also found that there were public-sector committee members who were not board members. According to private-sector representatives in one state, this membership problem further decreases private-sector input in the system. At the same time, however, ensuring private-sector involvement on these committees is problematic, since private-sector employers serve on the boards as volunteers in addition to their regular responsibilities, with time constraints often precluding them from attending both board and committee meetings.

Labor has provided technical assistance to state and local boards, and has arranged peer assistance and provided information on promising practices to help local boards deal with some of these challenges. However, information is still lacking on how to balance the requirements of board operations with the needs of the private sector. Despite this, some locations appear to be making progress in ensuring private-sector input to committees. For example, some local areas in California are requiring committees to have a business majority and define a quorum in terms of the business majority.

As a result of their experiences, state and local implementers have developed a number of ideas for actions that they believe could enhance the role of the private sector on workforce investment boards (as shown in table 6).
Although there was broad consensus among those we contacted that these concerns needed to be addressed, there was no consensus on which ideas had greater potential to do so or which would best maintain the flexibility that was key to WIA's implementation. Some of the ideas focused on actions that could be taken at the local level, such as clearly delineating the responsibilities of staff members to ensure a private-sector focus. Others may involve legislative or regulatory action, such as giving responsibility for WIA programs to public-sector entities (for example, economic development agencies) or nonprofit entities that reflect employer outlook, or limiting the authority of public-sector staff. In addition, some state and local implementers suggested mandating a maximum number of staff members and providing financial incentives to business members to take over the tasks currently performed by the staff.

The system WIA sought to create represents a sea change for workforce development, not only because it attempted to fundamentally change how employment and training services are provided, but also because it provided significant latitude to those implementing WIA at the state and local levels. Given the early stage of this process and the new and additional partners involved, it is not surprising that implementation has been affected by concerns over the new requirements. Unless these concerns are addressed in some fashion, there is a risk that the flexibility provided to states and local areas under WIA, instead of fostering innovation, will continue to lead to confusion, unnecessary burden, and resistance to change. Moreover, although states and localities will continue to participate as required by WIA, the vision for one-stops—full integration—may not be achieved. In effect, complying with WIA could result in additional requirements rather than the replacement of traditional service-delivery structures. The opportunity for the federal government to foster fundamental change in the workforce development system of the future could be lost.

While state and local implementers agreed that these concerns needed to be addressed, there was no consensus on a single course of agency or congressional action that would be most effective in addressing them. Moreover, some of the concerns may stem from confusion about what states and localities can already do to embrace WIA's requirements. As a result, states and localities need more time, in conjunction with appropriate guidance and technical assistance, to fully understand and embrace these new ways of operating. Guidance from all responsible agencies can go a long way toward addressing concerns; it will also help identify issues that may require action beyond guidance. First, the vision of a seamless system of employment and training services depends upon states and localities having better information about the benefits of integrating their services at one-stops. Second, states and localities need better information on cost-effective methods for training provider data collection and reporting. They need tools to address the burden associated with conflicting program requirements and clarification of the allowed use of SSNs under FERPA and related policy guidance to meet data collection requirements.
Also, training providers need another year of initial eligibility exempt from the data collection requirements while they work with state and local implementers to explore ways to resolve data collection difficulties. Until these issues are resolved, dropping training providers from consideration or having them withdraw their services when the initial eligibility period ends would be at odds with WIA's goal of providing job seekers with better training options. Third, unless action is taken to ensure that the states and localities understand and can implement ways to achieve effective workforce investment board operations consistent with private-sector needs, WIA's requirement of private-sector leadership for this new workforce system may be at risk. Moreover, the private sector has the labor market knowledge necessary to create a strategic workforce investment system; without this knowledge, the new system may be adversely affected.

To facilitate the implementation of WIA, as well as to help state and local implementers move closer to the vision of a fully integrated system, we recommend that the Secretary of Labor, along with the Secretaries of Education, HHS, and HUD, jointly explore the specific programmatic and financial concerns identified by state and local implementers that affect their ability to fully integrate their services at the one-stops, and identify specific ways in which these concerns can be overcome.

To help ensure that there is a sufficient quality and quantity of training programs and providers available for individuals, we recommend that the Secretary of Labor, along with the Secretary of Education, (1) disseminate best-practice information on cost-effective methods being used by states and localities to collect and report the required training provider data; (2) address confusion arising from dual reporting for WIA requirements and those for other education programs; and (3) establish a unified federal position on whether SSNs can be provided by training providers to state agencies (such as departments of labor) for the purposes of meeting WIA's data collection requirements, if it is determined that the most cost-effective data collection methods require the use of SSNs.

To help maintain private-sector leadership in the system, the Secretary of Labor should disseminate information on successful practices by states and local areas to ensure the effective operations of boards and their affiliated entities, consistent with strong private-sector leadership.

To ensure that training providers are not unnecessarily withdrawing their course offerings, the Congress may wish to allow training providers to remain on the list of eligible providers for another year without meeting all the data collection requirements while they work with state and local implementers to explore ways to resolve data collection difficulties.

We provided a draft of this report to Labor, Education, HHS, and HUD for review and comment. The comments from the agencies are reproduced in appendixes II through V, respectively. Labor appreciated our work in identifying issues and problems associated with WIA implementation, and Education said that the report and recommendations provided insight on ways it can help state and local implementers. HHS, which is responsible for one of the mandatory partner programs, concurred with the recommendation that the respective Secretaries jointly explore the specific programmatic barriers affecting programs' ability to achieve the vision of full integration.
Labor, Education, and HUD did not respond directly to any of our recommendations. The majority of the comments made by Labor, Education, and HUD reiterated the difficulties associated with WIA implementation. Labor said that the specific issues we identified in the report must be considered in the broader context of the massive reform of the workforce development system anticipated by this landmark legislation. We believe that our report highlights the difficulties that states and localities are having implementing many of these new, complicated requirements and discusses those issues that need to be addressed to ensure successful implementation. According to Labor, integrating the many partners into one system is a challenging task, and it has no authority to direct or mandate participation of others, nor can it deliver guidance that must come from other partners. For WIA to succeed, partnership among agencies at the federal level is key, which is why we recommended that the respective Secretaries work together jointly to address limitations to participation. Education said it was concerned that our report would set a benchmark for measuring the success of WIA against the vision of full integration, rather than the coordination that was required by the law. We did not intend to imply that full integration is the only option for participation. However, because Labor highlighted full integration as its ultimate vision, our report sought to identify those issues that would serve as impediments to achieving full integration. If policymakers want full integration to be a viable option, the issues we highlighted in our report—and reiterated by Education in its comments—need to be considered and addressed. Education also highlighted the concerns we raised in our report about privacy protections under FERPA, saying that the protections under FERPA cannot be ignored or sacrificed when faced with the separate, independent challenge of meeting the accountability requirements of WIA. This comment supports our recommendation that Education and Labor work together to establish a unified federal position on what is allowed under FERPA for purposes of WIA. HUD's comments focused on its viability as a partner in the one-stops. Although HUD noted that it is participating in interagency workgroups and has provided guidance, it said that WIA did not directly apply to the majority of HUD's programs, pointing out that HUD's programs differ significantly from those of Labor and Education. It also suggested that none of its workforce development initiatives have a primary mission of employment and training. HUD's response reiterates the need for it to work to resolve the programmatic limitations that affect its programs' ability to participate in the one-stop system. Labor said our report did not fully reflect the unprecedented level of guidance and technical assistance that it and its federal and state partners have provided to state and local implementers since the passage of WIA. Throughout the report, we clarified this point and provided more examples of such guidance. However, much of the guidance that Labor has issued to date has focused on helping state and local implementers set up the system. State and local implementers now need guidance that addresses concerns specific to a system that is in the critical early stages of operation, such as how to effectively collect performance data and operate boards.
Both Education and Labor highlighted the importance of state and local flexibility for WIA implementation. Labor said that our report needs to more explicitly acknowledge this flexibility, and that the differences we observed among various one-stop systems reflect decisions based on state and local circumstances to achieve state and locally established goals. We believe our report fully acknowledges that WIA did not prescribe how states and localities would implement WIA. We did note, however, that flexibility without guidance or implementation assistance can sometimes lead to confusion. Education and Labor both believed that detailed guidance was not compatible with the flexibility WIA affords states and localities. However, we believe that guidance can be detailed without being prescriptive, and that federal partners play a vital role in helping state and local implementers optimize the flexibility provided by WIA. In addition to these comments, each of the agencies provided technical comments that we incorporated, where appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this report. At that time, we will send copies to the Secretary of Labor, the Secretary of Education, the Secretary of Housing and Urban Development, the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other contacts and staff acknowledgments are listed in appendix VI. Table 7 shows the range of methods used by partners to meet the requirement of core service provision through the one-stops at each of the nine locations we visited. Natalya Bolshun also made significant contributions to this report, in all aspects of the work throughout the review. In addition, Dianne Murphy Blank, Andrea Sykes, and Andrew Von Ah aided in the gathering and analysis of information collected on our site visits; Jessica Botsford and Richard Burkard provided legal support; and Patrick DiBattista assisted in report and message development. | A competitive national economy depends on providing individuals with marketable skills and employers with access to qualified workers. In the past, the nation's job training system was fragmented and did not serve job seekers or employers well.
The Workforce Investment Act of 1998 created a system that links employment, education, and training services to better match workers and labor market trends. The act represented a significant change from earlier workforce development efforts. Many of the act's provisions took effect in July 2000, and state and local organizations are at different stages of implementing them. Although the act's mandatory partners are making efforts to participate in the one-stops, programmatic or financial concerns are affecting the partners' level of participation as well as their ability to fully integrate their services at the one-stops. As implementation of the act progresses, training options for job seekers may be diminishing rather than improving, as training providers reduce the number of courses offered to job seekers. Private-sector representatives may be discouraged from participating on workforce investment boards as a result of how states and localities are operating their boards and associated entities. |
The Bureau of Land Management (BLM) and the Forest Service manage most of the nation's 655 million acres of federal land. BLM is responsible for about 264 million acres of public lands, managed by 12 state offices that are responsible for supervising the operations of 175 field offices nationwide. The Forest Service is responsible for about 192 million acres of public lands, managed by 9 regional offices that are responsible for supervising the operations of 155 national forests. BLM and the Forest Service manage about 93 percent of the 44 million acres of federally owned land in Oregon and Washington. BLM's Oregon State Office manages about 17 million acres of land in the two states, including over 28,000 miles of roads. The state office directs the operations of 10 district offices—9 in Oregon and 1 in Washington—each responsible for managing BLM's public land resources within its geographic jurisdiction. Six of the Oregon districts contain Oregon and California Grant Lands, distributed in a checkerboard pattern within each district, and interspersed within and around the federal lands are state and private lands. The Forest Service's Region 6 manages about 25 million acres of land in the two states, including nearly 94,000 miles of roads. Region 6 directs the operations of 19 national forests—13 in Oregon and 6 in Washington. BLM's district offices and the Forest Service's national forest offices perform similar land management functions, including restoration of fish and wildlife habitat and designing, constructing, and maintaining roads. BLM and Forest Service land management activities regarding fish habitat in Oregon and Washington are governed by three regional agreements: the Northwest Forest Plan, signed in 1994, for activities on the west side of the Cascade mountain range, and PACFISH and INFISH, signed in 1995, for activities on the east side of the range. Both agencies are required to direct their land management activities toward achieving the objectives of the three agreements. The Northwest Forest Plan's Aquatic Conservation Strategy includes the objective of maintaining and restoring "connectivity within and between watersheds," which must provide "unobstructed routes to areas critical for fulfilling the life history requirements" of aquatic species. In addition, the Northwest Forest Plan's road management guidelines state that the agencies shall "provide and maintain fish passage at all road crossings of existing and potential fish-bearing streams." PACFISH includes the objective of achieving "a high level of habitat diversity and complexity…to meet the life-history requirements of the anadromous fish community inhabiting a watershed." The PACFISH road management guidelines duplicate the Northwest Forest Plan guidance. INFISH provides similar management objectives and guidance for resident native fish outside of anadromous fish habitat. Maintaining fish passage and habitat is particularly important for anadromous fish, which as juveniles migrate up and down stream channels seasonally, then travel from their freshwater spawning grounds to the ocean where they mature, and finally return to their spawning grounds to complete their life cycle. Under the authority of the Endangered Species Act, the National Marine Fisheries Service currently lists four species of salmon—Coho, Chinook, Chum, and Sockeye—as well as steelhead and sea-run trout as either threatened or endangered anadromous fish in the northwest region.
According to agency officials, BLM and Forest Service lands in Oregon and Washington include watersheds that represent some of the best remaining habitat for salmon and other aquatic life, often serving as refuge areas for the recovery of listed species. As such, unobstructed passage into and within these watersheds is critical. Culverts—generally pipes or arches made of concrete or metal—are commonly used by BLM and the Forest Service to permit water to flow beneath roads where they cross streams, thereby preventing road erosion and allowing the water to follow its natural course. Culverts come in a variety of shapes and sizes, designed to fit the circumstances at each stream crossing, such as the width of the stream or the slope of the terrain. Historically, agency engineers designed culverts for water drainage and passage of adult fish. However, as a culvert ages, the pipe itself and conditions at the inlet and outlet can degrade such that even strong-swimming adult fish cannot pass through the culvert. The agencies remove, repair, or replace culverts to restore fish passage, as shown in figure 1. To meet the objectives of the Northwest Forest Plan and PACFISH, as well as Oregon and Washington state standards, current culvert repair or replacement efforts must result in a culvert that allows the passage of all life stages of fish, from juvenile to adult. As of August 1, 2001, the agencies' fish passage assessments had identified almost 2,600 barrier culverts—over 400 on BLM lands and nearly 2,200 on Forest Service lands—and agency officials estimate that, in total, up to 5,500 fish barrier culverts may exist. BLM's 10 district offices are collecting culvert information as part of their ongoing watershed analysis activities and have not established a date for completing all culvert assessments. The Forest Service, using a regionwide fish passage assessment protocol, plans to complete data collection for all of its 19 forests by the end of calendar year 2001. The culvert information the agencies are collecting will help them coordinate and prioritize culvert repair, replacement, and removal efforts. Based on their current knowledge of culvert conditions, the agencies project that restoring fish passage at all barrier culverts could cost over $375 million and take decades to finish. BLM's district offices are assessing fish passage through culverts as part of the ongoing land management activity of a watershed analysis. A watershed analysis—a systematic procedure to characterize the aquatic (in-stream), riparian (near-stream), and terrestrial (remaining land area) features within a watershed—is a requirement of the Northwest Forest Plan and provides the foundation for implementing stream and river enhancement projects, timber sales, and road building and decommissioning projects. According to an agency official, the extent to which a watershed analysis has been completed varies by district. The five western Oregon districts entirely within the Northwest Forest Plan's jurisdiction, which contain 98 percent of BLM's culverts on fish-bearing streams, have completed watershed analyses for 87 to 100 percent of their lands. The range for the remaining five districts is 0 to 18 percent. Each BLM district office maintains its own records regarding barrier culverts on its lands. As of August 1, 2001, BLM's district offices had assessed 1,152 culverts for fish passage and identified 414 barrier culverts.
BLM plans to continue its ongoing watershed analysis process and estimates, based on assessments to date, that an additional 282 barrier culverts may be identified, for a total of 696 culverts blocking fish passage. The Forest Service initiated a regionwide assessment of culverts on fish-bearing streams in fiscal year 1999 to determine the scope of fish passage problems and to create a database of culvert information that will allow it to prioritize projects to address barrier culverts on a regionwide basis. The region first developed written guidance and provided implementation training to staff at each forest office. In fiscal year 2000, 13 of the 19 forests conducted the assessments and reported the results to the region's fish passage assessment database. In fiscal year 2001, the remaining six forest offices initiated their assessments, and follow-up and verification of the first year's results are ongoing. As of August 2001, the forest offices had assessed 2,986 culverts for fish passage and identified 2,160—or about 72 percent—as barrier culverts. The region plans to complete its assessment by December 2001 and, based on its findings thus far, estimates that an additional 2,645 barrier culverts may be identified, for a total of 4,805 culverts blocking fish passage. On the basis of information collected as of August 1, 2001, the two agencies estimate a total of 10,215 culverts on fish-bearing streams under their jurisdictions—2,822 culverts on BLM lands and 7,393 culverts on Forest Service lands—as shown in figure 2. Detailed information on district and forest office culvert assessment efforts is provided in appendix I. Additional ground work is necessary before both agencies have complete information on the extent of barrier culverts on their Oregon and Washington lands, and as such, neither agency has established a process for prioritizing passage restoration projects on a regionwide basis. However, the agencies are using the fish passage information they have collected to help them coordinate and prioritize culvert repair, replacement, and removal efforts on a more limited scale. For example, officials at BLM's Coos Bay district stated that through the ongoing culvert assessment process, they annually reprioritize culvert projects for each resource area within the district and for each watershed within each resource area, thus ensuring that the most critical barriers are addressed first. In addition, according to BLM state office officials, some culverts identified by district offices as fish passage barriers are included in their deferred maintenance and capital improvement project backlog and evaluated for funding among other road and facility projects. State office officials stated that while culvert passage restoration projects have not ranked high due to the critical nature of other backlog projects, they expect barrier culvert projects to move up the list for funding as the backlog is reduced. National forest offices use their culvert fish passage assessment information to assist them in prioritizing culvert maintenance activities and for broader road management planning purposes. For example, in fiscal year 2001, regional officials directed each forest office to identify its top five culvert passage restoration projects when submitting its final assessment report. The region considered these projects for funding; however, according to a regional office official, it is not known how many of these projects were actually completed.
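As a quick check on the projected backlogs, the arithmetic below restates the counts reported above in a short Python sketch; no figures beyond those already cited are assumed.

# Barrier culverts identified as of August 1, 2001, plus those the
# agencies project will be found as assessments are completed.
blm_identified, blm_projected = 414, 282
fs_identified, fs_projected = 2_160, 2_645

blm_backlog = blm_identified + blm_projected  # 696
fs_backlog = fs_identified + fs_projected     # 4,805
combined = blm_backlog + fs_backlog           # 5,501, consistent with the
                                              # "up to 5,500" cited earlier
print(blm_backlog, fs_backlog, combined)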
In addition, Olympic National Forest officials stated that they have developed a draft road management strategy that uses the fish passage assessment results as input to assist them in further prioritizing road projects identified by the strategy. Although BLM and the Forest Service are currently addressing barrier culverts using the assessment information they have collected, agency officials estimate, based on their results to date, that it may cost over $375 million and take decades to restore fish passage at all barrier culverts. BLM officials estimate a total cost of approximately $46 million to eliminate their backlog of about 700 barrier culverts, while Forest Service officials estimate a total cost of about $331 million to eliminate their backlog of approximately 4,800 barrier culverts. At the current rate of replacement, BLM officials estimate that it will take 25 years to restore fish passage through all barrier culverts, and Forest Service officials estimate that they will need more than 100 years to eliminate all barrier culverts. Furthermore, these estimates do not reflect any growth in the backlog due to future deterioration of culverts that currently function properly. According to BLM and Forest Service officials, several factors restrict their ability to quickly address the long list of problem culverts. Most significantly, the agencies assign a relatively low priority to such culvert projects when allocating road maintenance funds because ensuring road safety is the top priority for road maintenance, repair, and construction funds. Both agencies emphasize reducing the backlog of road maintenance rather than specifically correcting barrier culverts. Because neither agency requests funds specifically for barrier culvert projects, district and forest offices must fund these restoration projects within their existing budgets, and these projects must compete with other road maintenance projects for the limited funds. Therefore, to restore fish passage, the agencies largely rely on other internal or external funding sources that are neither dedicated to barrier removal nor guaranteed to be available from year to year. Other factors affecting the agencies' efforts to restore fish passage include the complex and lengthy federal and state project approval process to obtain environmental clearances and the limited number of agency engineers experienced in designing culverts that meet current fish passage requirements. Furthermore, to minimize disturbance to fish and wildlife habitat, states impose a short seasonal "window of opportunity" within which restoration work on barrier culverts can occur. As a result, each barrier removal project generally takes 1 to 2 years from start to finish. Both BLM and the Forest Service regard culverts as a component of their road system—similar to bridges, railings, signs, and gates—each requiring maintenance, including repair, replacement, and removal to ensure safe operation. As such, each agency requests funding for road maintenance as a total program of work rather than requesting funding specifically for culvert maintenance, or more specifically, to restore fish passage at barrier culverts. Furthermore, according to agency guidance, ensuring road safety is the top priority for road maintenance activities rather than removing barrier culverts. Individual forest and district offices must fund culvert projects within their road maintenance allocations, compete with other units for deferred maintenance funds, or use other funding sources.
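The cost and schedule estimates above imply rough per-culvert costs and annual replacement rates. The back-of-the-envelope Python sketch below derives them; the implied rates are our arithmetic, not figures the agencies reported.

# Backlogs and estimated total costs reported by the agencies.
blm_backlog, blm_cost = 696, 46_000_000
fs_backlog, fs_cost = 4_805, 331_000_000

print(round(blm_cost / blm_backlog))  # ~66,000 dollars per culvert
print(round(fs_cost / fs_backlog))    # ~69,000 dollars per culvert

# Implied annual replacement rates at the estimated completion times.
print(round(blm_backlog / 25))        # ~28 culverts per year for BLM
print(round(fs_backlog / 100))        # ~48 per year is an upper bound for the
                                      # Forest Service, which cites 100+ years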
BLM's state office and the Forest Service's regional office each allocate annual road maintenance funds to districts and forests primarily based on the miles of roads each contains and distribute additional funds to those units for maintenance projects on a competitive basis. BLM's fiscal year 2001 annual road maintenance funding totaled about $6 million, while, according to officials, about $32 million is required to meet annual maintenance needs, including culverts. The Forest Service's fiscal year 2001 annual road maintenance funding totaled about $32 million, while, according to officials, about $129 million is required to meet their annual maintenance needs, including culverts. Due to their large backlogs of deferred maintenance, officials of both agencies stated that deferred maintenance funds have not been distributed to district or forest offices for fish passage restoration projects. In the absence of sufficient road maintenance funding, the district and forest offices largely rely on other internal or external funding sources to restore anadromous fish passage at barrier culverts; these sources are neither specifically dedicated to barrier removal nor guaranteed to be available from year to year. As shown in figure 3, BLM's district offices reported that since fiscal year 1998, they have relied almost entirely on Jobs-In-The-Woods program funding, which seeks to support displaced timber industry workers within BLM's Oregon and California Grant Lands. BLM distributes this funding to the western districts in Oregon containing the Oregon and California Grant Lands to fund contracts with local workers to do stream restoration projects, including barrier culvert repair and replacement. While BLM officials view the Jobs-In-The-Woods program as an ongoing source of funding for culvert projects, this funding source is not dedicated to barrier removal, and BLM may use these funds for a variety of other resource programs or projects. Other BLM barrier culvert project funding sources include timber sales and the Federal Highway Administration's Emergency Relief for Federally-owned Roads program to replace storm-damaged culverts. As shown in figure 4, national forest offices reported that since fiscal year 1998 they have primarily relied on Federal Highway Administration funding and National Forest Roads and Trails funds for projects to restore anadromous fish passage at barrier culverts. Due to severe flooding in recent years and widespread damage to culverts, forest offices obtained Federal Highway Administration funds to replace damaged culverts and concurrently ensure that these culverts meet current fish passage standards. While such funds enabled the forest offices to address barrier culverts, the forest offices cannot rely on future flood events to ensure a steady stream of funding for such projects. National Forest Roads and Trails funds consist of 10 percent of national forest receipts, made available to supplement annual appropriations for road and trail construction and projects that improve forest health conditions. Forest offices used these funds to restore fish passage at barrier culverts and to fund their ongoing culvert fish passage assessment effort. These funds, however, are not dedicated to fish passage projects; rather, culvert projects compete with other road projects for these funds on a regionwide basis. Other funding sources for Forest Service fish passage projects include Jobs-In-The-Woods and timber sales.
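Both agencies' base allocation rule (annual road maintenance funds distributed primarily in proportion to road miles) is simple to express. A minimal Python sketch follows; the unit names and mileages are hypothetical, and only the pro-rata rule comes from the report.

def allocate(budget, miles_by_unit):
    # Distribute an annual road maintenance budget in proportion to
    # the miles of road each district or forest contains.
    total_miles = sum(miles_by_unit.values())
    return {unit: budget * miles / total_miles
            for unit, miles in miles_by_unit.items()}

# Hypothetical example: a $6 million budget spread over three districts.
print(allocate(6_000_000, {"District A": 3_000,
                           "District B": 2_000,
                           "District C": 1_000}))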
In addition to limitations on the amount of funding available for barrier culvert projects and uncertainty regarding the continuity of such funding, three other factors affect the agencies' efforts to restore fish passage. These factors are (1) the complex and lengthy federal and state project approval process, (2) the limited number of agency engineers with experience designing culverts that meet current fish passage standards, and (3) the short seasonal "window of opportunity" during which work on barrier culverts can occur. Each of these factors affects the time frame needed to complete each of the major phases of a barrier culvert project—specifically, obtaining necessary permits and clearances, designing the culvert, and constructing the culvert—and consequently affects the number of projects that can be completed annually. Due to these factors, projects to restore fish passage at culverts take 1 to 2 years to complete, according to BLM and Forest Service officials. First, BLM and Forest Service officials stated that the number of fish passage projects the agencies can undertake and the speed with which they can be completed depend largely on how long it takes to obtain the various federal and state clearances necessary to implement a culvert project. Under the National Environmental Policy Act, an assessment of each project's impact on the environment must be completed before construction can commence. If the assessment indicates that an endangered species may be adversely affected by the project, Section 7 of the Endangered Species Act of 1973 requires the agency to consult with the appropriate authority—generally the National Marine Fisheries Service for anadromous fish and the Fish and Wildlife Service for other species—to reach agreement on how to mitigate the disturbance. BLM and the Forest Service have entered into an agreement with the consulting agencies to expedite the process through streamlined procedures. However, according to agency representatives, factors such as staffing shortages and turnover, as well as differing interpretations of the streamlining guidance, have prevented the revised consultation process from producing the efficiencies desired by the agencies, and it is currently under review. In addition to consultation, the U.S. Army Corps of Engineers requires a permit for fill or excavation in waterways and wetlands; Oregon requires a "removal and fill" permit for in-stream construction; and Washington requires a hydraulic project permit to engage in construction activities within streams. According to information provided by district and forest offices for 56 completed culvert projects, the clearance and permit process is the most time-consuming phase of a culvert project, ranging from a low of 4 weeks to a high of 113 weeks, for an average of about 31 weeks. Second, BLM's and the Forest Service's efforts to eliminate barrier culverts are restricted, according to agency officials, by the limited number of engineers available to design them, and more specifically, the few with experience in designing culverts that meet current fish passage requirements. As a result, district and forest officials speculate that additional hiring or contracting with engineering firms for culvert design work may be necessary if greater emphasis is placed on reducing the barrier culvert backlog. Agency officials also emphasized the need for more fish biologists, hydrologists, and other professionals with fish passage design skills.
According to time frame information provided by district and forest offices for 56 completed culvert projects, the design process is the second most time-consuming phase of a project, ranging from a low of 4 weeks to a high of 78 weeks to complete, for an average of about 19 weeks. Finally, BLM and Forest Service officials stated that their efforts to eliminate barrier culverts are limited by a short seasonal "window of opportunity" of about 3 months during which fish passage restoration work—that is, construction work within streams—can occur. Oregon and Washington have established these time frames to minimize the impacts to important fish, wildlife, and habitat resources. The summer-to-fall in-stream work time frames, when construction is most feasible due to low water flow, most commonly run from July to September but could be as narrow as July 15 to August 15, or just 1 month. According to time frame information provided by district and forest offices for 56 completed culvert projects, construction is the least time-consuming phase of a project, ranging from a low of 4 weeks to a high of 61 weeks to complete, for an average of about 10 weeks. According to BLM and Forest Service officials, the minimum time necessary to complete a barrier culvert project, if all phases of the project are completed in the shortest possible time frame, is about 1 year. However, due to the factors discussed above, projects are more likely to take over a year to complete. A delay caused by any one of these factors has a cascading effect on the project completion date. For example, according to agency officials, they generally begin a project by initiating the clearance and permit process and collecting some preliminary engineering information. However, if project clearances are not obtained or imminent by March, when project funding decisions are made, the agencies may put construction off to the next year rather than commit funds to a project that may not be ready for implementation within the seasonal time frames. Similarly, project clearances may be completed on time, but the project may still be delayed if an engineer with fish passage design experience is not available. And if all phases of a project, including construction contracts, are not in place in time to complete construction within the state-mandated in-stream work time frames, the project must be put off until the next season. According to the information provided by district and forest offices for 56 projects, the total time to complete a project ranged from a low of 16 weeks to a high of 186 weeks, for an average of 60 weeks. BLM and the Forest Service completed 141 projects to restore fish passage for anadromous fish at barrier culverts from fiscal year 1998 through July 2001 and opened access to an estimated 171 miles of fish habitat. However, because neither agency requires systematic monitoring of these completed projects, the actual extent of improved fish passage is largely unverified. According to agency officials, current culvert fish passage design standards are based on scientific research that considers such factors as the swimming ability of fish at various life stages and the velocity of water. Therefore, the officials assume that fish can migrate into the newly accessible habitat through culverts built to these standards. Furthermore, agency officials cite a lack of funds and available staff as reasons for not requiring systematic post-project monitoring.
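Returning to the 56-project timeline data above, the three phase averages account almost exactly for the reported 60-week average total, as this short Python check shows; only the averages already cited are used.

# Average weeks per phase across the 56 completed projects with
# complete timeline data.
phase_weeks = {
    "clearances and permits": 31,
    "design and engineering": 19,
    "construction": 10,
}
print(sum(phase_weeks.values()))  # 60, matching the reported average total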
While district and forest offices may monitor projects on a limited or ad hoc basis, the agencies do not systematically determine whether both juvenile and adult fish can actually pass through a restored culvert or inhabit the upstream areas. However, the Oregon and Washington state fish passage restoration programs, as well as other local efforts, require systematic post-project monitoring to determine the most effective methods for improving fish passage under various conditions. Without such monitoring, neither the Forest Service nor BLM can ensure that the federal moneys expended for improving fish passage are actually achieving the intended purpose. As shown in figure 5, BLM reported 68 projects completed to restore fish passage for anadromous fish at barrier culverts from fiscal year 1998 through August 1, 2001, opening access to an estimated 95 miles of fish habitat. During the same time frame, the Forest Service reported 73 projects completed to restore fish passage for anadromous fish at barrier culverts and opened access to an estimated 76 miles of fish habitat. The actual extent of improved fish passage is largely unknown, however, because neither agency requires systematic post-project monitoring of completed projects. Forest and district offices undertake a wide range of activities in and around streams to restore aquatic habitat. These activities include eliminating fish passage barrier culverts, as well as other activities such as stabilizing eroding stream banks, planting vegetation, and placing desirable woody debris and boulders into the streams. While each forest and district office is required to conduct monitoring of selected restoration activities, neither agency specifically requires that barrier culvert projects be monitored. Therefore, restoration projects selected by district and forest offices for monitoring may or may not include barrier culvert passage projects. Consequently, the agencies do not systematically determine whether fish can actually pass through repaired or replaced culverts. Furthermore, while the miles of habitat theoretically made accessible to fish are estimated, the extent to which fish actually inhabit that stream area is not routinely determined. BLM and Forest Service officials stated that monitoring all culvert fish passage projects would be a costly and time-consuming effort for their already limited staff. Therefore, district and forest staff stated that culvert project follow-up is generally ad hoc in nature. For example, subsequent to project completion, the designing engineer will likely look to see if water appears to be flowing through the culvert as designed, or the fish biologist who helped plan a project may walk along the stream bank looking for egg beds to ascertain the presence of fish. However, according to agency officials, a formalized, comprehensive measurement of results (for example, requiring engineers to measure water flows through all completed culverts or biologists to count egg beds in every area of newly opened habitat) is not feasible at current funding and staffing levels. One forest official stated that, ideally, every project would have monitoring funds included with the project funds to verify effectiveness, but funding realities have not made this possible. According to BLM and Forest Service officials, in the absence of systematic monitoring, they assume that culverts built to current standards will allow fish migration into the newly accessible habitat.
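As the next passage describes, those standards key culvert design to fish swimming ability relative to water velocity. The toy Python check below illustrates the idea only; the threshold logic and the numbers in it are hypothetical, not the actual Oregon or Washington design criteria.

def is_velocity_barrier(water_velocity_fps, fish_swim_speed_fps):
    # A culvert acts as a velocity barrier when water moves faster
    # than the target life stage (often juveniles) can swim against.
    return water_velocity_fps > fish_swim_speed_fps

# Hypothetical speeds: the same crossing can pass adults yet block
# juveniles, which is why standards cover all life stages.
print(is_velocity_barrier(4.0, 6.0))  # False: adults pass
print(is_velocity_barrier(4.0, 2.0))  # True: juveniles blocked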
Current culvert design standards are based on scientific research that considers important factors such as the swimming capabilities of fish at various life stages and the velocity of water to guide engineers in building culverts that will allow passage of juvenile to adult fish. BLM primarily follows the standards published by the Oregon Department of Fish and Wildlife, and the Forest Service follows those same Oregon standards or the Washington Department of Fish and Wildlife's standards, depending on the project's location. Where appropriate, the current standards endorse the use of open-bottom culverts that simulate natural stream bottoms and slopes, and culvert widths that match the stream's natural width, mimicking the stream's natural features to the greatest extent possible. However, even culvert projects built to current standards may not necessarily result in improved fish passage. District and forest officials characterized culvert fish passage design as an evolving area of study. For example, according to federal and state officials, retrofitting culverts by adding staggered or perforated panels inside to slow down water velocities is a complex design process applicable only in limited circumstances. Another area of concern, according to Forest Service officials, is culvert length, because questions remain about how far fish will swim inside a dark culvert. Furthermore, during our field visits to completed culvert project sites, we observed culverts that, according to agency officials, continued to be barriers to fish passage, including a retrofitted culvert that did not sufficiently slow water flow, a replaced pipe that did not allow juvenile fish passage, and a culvert that allowed water to flow under it rather than through it. Systematic post-project monitoring is a requirement of the Oregon and Washington state fish passage restoration efforts on state lands, as well as cooperative local programs on other lands within the states, and has helped these programs to identify ways to enhance the effectiveness of fish passage projects. According to an Oregon Department of Fish and Wildlife official, in fiscal year 1999 the state implemented a protocol for systematically monitoring and documenting the results of culvert retrofit projects to improve fish passage. The protocol, jointly developed by Oregon's Department of Fish and Wildlife and Department of Transportation, requires monitoring the movement of water in and around retrofitted culverts to determine if fish passage is improved. In the first year of implementation, the agencies systematically monitored selected culverts retrofitted in 1998 within certain state regions, including visual inspections and water velocity measurements taken at different times to assess how well the retrofit designs slowed water velocity. The monitoring results indicated the retrofit designs, while needing some adjustments, improved fish passage by slowing water and reducing culvert entry jump heights for fish. According to the state official, the agencies are currently developing fish passage monitoring protocols for culverts that have been replaced rather than retrofitted. The Washington Department of Fish and Wildlife, in partnership with the state Department of Transportation, developed and implemented a three-level culvert and fish-use evaluation procedure for all culvert retrofit or replacement projects funded by the state's Fish Passage Barrier Removal Program.
Agreeing that the best management practice is to avoid "walking away" from a fish passage project once construction is complete, the agencies are systematically assessing culvert projects for design, durability, and efficiency; determining if fish use the newly available habitat; and troubleshooting problems identified. The three-level evaluation involves the following steps: First, fish use before and after project completion is determined, and each completed project is evaluated for durability, efficiency, and design flaws, which are corrected during the year following project completion. The culvert is removed from the monitoring list if fish passage is verified and no additional monitoring is required. Second, for culverts where fish passage is not occurring, additional monitoring for fish presence is implemented, and if necessary, other methods to support fish recovery, including supplementation such as planting of hatchery fish, fishing restrictions, or stream habitat improvement projects, are implemented. Third, selected culverts are studied to determine the overall impact on fish populations. Evaluation results as of April 2001 indicated most habitats reclaimed through culvert projects were immediately populated by fish; however, varied responses on some streams require additional monitoring and possibly further enhancement efforts to promote fish recovery. In addition to the state monitoring efforts, local fish passage restoration plans may also require systematic monitoring of project results to ensure they are successful. For example, Oregon's Rogue River Basin Fish Access Team, composed of local stakeholders, watershed councils, and state and federal agencies (including BLM and the Forest Service), has established a basinwide strategic plan to cooperatively prioritize fish passage barriers, secure funding for projects, implement passage enhancement projects, and monitor the success of projects. Specifically, to participate in the program, a monitoring plan must be completed for each project before the project begins. The monitoring plan must determine whether the project was implemented as planned, was effective in solving fish passage problems, and contributed to expanding fish distribution across the Rogue River basin. Potential techniques suggested to determine effectiveness include spawning surveys and snorkeling (underwater observation) surveys. As their actions demonstrate, Oregon, Washington, and other entities consider systematic monitoring to be an important tool to determine the most effective methods for improving fish passage under various conditions. The systematic monitoring allows the entities to incorporate this knowledge into future restoration planning and implementation. Their varied approaches reflect the range of methods available for monitoring—that is, monitoring improvements to water flow at selected culverts of a specific design type, verifying the actual presence of fish in a newly opened habitat, or developing monitoring plans for specific projects. While each monitoring approach requires a commitment of agency staff and funding to implement, they all provide valuable information for targeting future expenditures on culvert passage restoration methods that most benefit fish. Oregon and Washington's monitoring efforts have helped them to assess the success of various culvert passage restoration methods and identified methods that require adjustments or further study to determine their effectiveness.
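Washington's three-level procedure, described above, reads as a simple decision sequence. The Python sketch below paraphrases those steps; the function and its return strings are illustrative shorthand, not the state's actual protocol.

def next_step(passage_verified, fish_present_upstream):
    # Level 1: evaluate each completed project for durability,
    # efficiency, and design flaws; culverts with verified fish
    # passage drop off the monitoring list.
    if passage_verified:
        return "remove from monitoring list"
    # Level 2: where passage is not occurring and fish are absent,
    # continue monitoring and support recovery (hatchery planting,
    # fishing restrictions, or habitat improvement).
    if not fish_present_upstream:
        return "continue monitoring and implement recovery measures"
    # Level 3: selected culverts are studied for overall impact
    # on fish populations.
    return "candidate for population-level impact study"

print(next_step(False, False))  # a culvert still blocking fish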
Without such systematic monitoring programs, neither the Forest Service nor BLM can ensure that the federal moneys expended for improving fish passage are actually achieving the intended purpose. BLM and the Forest Service are faced with the daunting task of addressing a large backlog of fish passage barrier culverts. Given the limited funding available for fish passage projects and the various factors that affect the agencies' ability to complete projects quickly, eliminating barrier culverts will be a long, costly effort. While both agencies are already using culvert assessment information to help them prioritize projects, that is just the beginning of the barrier elimination process. Ultimately, the culvert projects selected for implementation—whether retrofitting existing culverts, replacing culverts, or removing culverts—must achieve the objective of restoring fish passage. Systematic monitoring of completed projects would provide the agencies with information to help them identify which methods actually work best under various circumstances and evidence that their expenditures have actually improved fish passage. Although monitoring would divert funding and staff from the implementation of culvert passage improvement projects, state monitoring programs have demonstrated the value of monitoring in assessing the effectiveness of barrier culvert projects and in incorporating that knowledge into future planning and implementation efforts. To determine whether fish passage restoration projects are achieving their intended purpose, we recommend that the Director of BLM and the Chief of the Forest Service each develop guidance for systematically monitoring completed barrier removal projects. This guidance should establish procedures that will allow the agencies to cost-effectively measure and document improvements to fish passage. We provided the Department of the Interior and the Forest Service with a draft of this report for comment prior to issuance. The agencies generally agreed with the content of the report and concurred with our recommendation for systematic monitoring, so long as agency officials have the discretion to determine the monitoring approaches and methodologies that will most benefit them in planning and implementing future fish passage projects. We recognize that the agencies will have to exercise discretion in developing this guidance, but they need to ensure that they implement a monitoring program that cost-effectively measures and documents improvements to fish passage. The agencies also provided certain technical clarifications, which we incorporated, as appropriate, in the report. Copies of the agencies' comments are included as appendixes II and III. We conducted our review from March 2001 through October 2001 in accordance with generally accepted government auditing standards. Details of our scope and methodology are discussed in appendix IV. We are sending copies of this report to the Director of the Bureau of Land Management and the Chief of the Forest Service. We will also provide copies to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V. The Bureau of Land Management (BLM) and the Forest Service are assessing culverts on their lands in Oregon and Washington to identify barriers to fish passage.
Neither agency has completed this effort, but each of the 10 district and 19 forest offices provided its assessment results as of August 1, 2001. In addition, each district and forest office provided the estimated total number of culverts on fish-bearing streams, an estimate of the number of culverts not yet assessed that may be barriers, and an estimated cost to restore fish passage through barrier culverts. BLM districts reported that they have assessed 1,152 culverts for fish passage and identified 414 barriers. In addition, the districts estimate that 282 additional barrier culverts may exist. BLM estimates that the cost to restore fish passage at all 696 of these barrier culverts could be about $46 million, as shown in table 1. Forest Service national forest offices reported that they have assessed 2,986 culverts for fish passage and identified 2,160 barriers. In addition, they estimate that about 2,645 additional barrier culverts may exist. The Forest Service estimates that the cost to restore fish passage at all 4,805 barrier culverts could be about $331 million, as shown in table 2. To determine the number of culverts that may impede fish passage on BLM and Forest Service lands in Oregon and Washington, we interviewed officials and gathered documentation from BLM's Oregon State Office and the Forest Service's Region 6 office, both located in Portland, Oregon. Specifically, we gathered and analyzed information on the number and maintenance status of culverts located in the 10 BLM districts under Oregon State Office jurisdiction and the 19 national forests under Region 6 jurisdiction and the costs and time frames associated with the repair of barrier culverts. We conducted site visits at four BLM district offices in Oregon—Coos Bay, Eugene, Medford, and Prineville—and at nine national forest offices—Deschutes, Ochoco, Rogue River, Siskiyou, Siuslaw, Umatilla, and Willamette in Oregon; and Gifford Pinchot and Olympic in Washington. We met with district and forest office staff, collected information on their culvert inventories and assessment and prioritization efforts, and observed completed and potential culvert restoration projects. To identify the factors affecting the agencies' ability to restore passage through culverts acting as barriers to fish, we interviewed BLM and Forest Service headquarters officials, Oregon State Office and Region 6 officials, and district and forest office staff and reviewed policies, procedures, and practices for repairing, replacing, or removing barrier culverts. We gathered and analyzed funding information for 141 anadromous fish passage culvert projects completed in Oregon and Washington from fiscal year 1998 through July 2001, including the amount and source of funds expended for each project. We analyzed detailed time line information for 56 of the 141 projects that included complete start and finish dates for the three main phases of each project—federal and state clearances, design and engineering, and construction. We interviewed agency officials and gathered documentation to identify the factors that affect project time frames and to determine how these factors limit the number of culvert projects that can be completed annually.
To determine the results of the agencies' efforts to restore fish passage, we gathered and analyzed information on the number of (1) culverts repaired, replaced, or removed to improve anadromous fish passage and (2) miles of habitat restored from fiscal year 1998 through August 1, 2001, by district and forest offices under Oregon State Office and Region 6 jurisdiction. We interviewed BLM and Forest Service headquarters, state and regional office, and district and forest office officials and reviewed documentation to determine whether regulations, policies, and procedures required systematic monitoring of the effectiveness of the culvert restoration projects. To identify state efforts to monitor the outcome of fish passage projects, we interviewed Oregon and Washington state officials and reviewed regulations, policies, procedures, and monitoring reports provided by the state agencies with fish passage restoration responsibilities. We conducted our work from March 2001 through October 2001 in accordance with generally accepted government auditing standards. In addition to the above, Leo Acosta, Kathy Colgrove-Stone, and Brad Dobbins made key contributions to this report. | The Bureau of Land Management and the Forest Service manage more than 41 million acres of federal lands in Oregon and Washington, including 122,000 miles of roads that use culverts—pipes or arches that allow water to flow from one side of the road to the other. Many of the streams that pass through these culverts are essential habitat for fish and other aquatic species. More than 10,000 culverts exist on fish-bearing streams in Oregon and Washington, but the number that impede fish passage is unknown. Ongoing agency inventory and assessment efforts have identified nearly 2,600 barrier culverts, but agency officials estimate that more than twice that number may exist. Although the agencies recognize the importance of restoring fish passage, several factors inhibit their efforts. Most significantly, the agencies have not made enough money available to do all the necessary culvert work. In addition, the often lengthy process of obtaining federal and state environmental clearances and permits, as well as the short seasonal "window of opportunity" to do the work, affects the agencies' ability to restore fish passage quickly. Furthermore, the shortage of experienced engineering staff limits the number of projects that can be designed and completed. BLM and the Forest Service completed 141 culvert projects to remove barriers, opening an estimated 171 miles of fish habitat, from fiscal year 1998 through July 2001. Neither agency, however, knows the extent to which culvert projects ultimately improve fish passage because they do not require systematic post-project monitoring to measure the outcomes of their efforts. |
EPA's enforcement program depends heavily upon inspections by regional or state enforcement staff as the primary means of detecting violations and evaluating overall facility compliance. Thus, the quality and the content of the agency's and states' inspections, and the number of inspections undertaken to ensure adequate coverage, are important indicators of the enforcement program's effectiveness. However, as we reported in 2000, EPA's regional offices varied substantially in the actions they took to enforce the Clean Water Act and Clean Air Act. Consistent with earlier observations of EPA's Office of Inspector General and internal agency studies, we found these variations in regional actions reflected in the (1) number of inspections EPA and state enforcement personnel conducted at facilities discharging pollutants within a region, (2) number and type of enforcement actions taken, and (3) size of the penalties assessed and the criteria used to determine them. For example, as figure 1 indicates, the ratio of inspections conducted under the Clean Air Act in fiscal year 2000 to the number of facilities in each region subject to EPA inspection under the act varied from a high of 80 percent in Region 3 to a low of 27 percent in Regions 1 and 2. While the variations in enforcement raise questions about the need for greater consistency, it is also important to get behind the data to understand the cause of the variations and the extent to which they reflect a problem. For example, EPA attributed the low number of inspections by its Region 5, in Chicago, to the regional office's decision at the time to focus limited resources on performing detailed and resource-intensive investigations of the region's numerous electric power plants, rather than conducting a greater number of less intensive inspections. We agree that regional data can be easily misinterpreted without the contextual information needed to clarify whether variation in a given instance is inappropriate or whether it reflects the appropriate exercise of flexibility by regions and states to tailor their priorities to their individual needs and circumstances. In this regard, we recommended that EPA (1) clarify which aspects of the enforcement program it expects to see implemented consistently from region to region and which aspects may appropriately be subject to greater variation and (2) supplement region-by-region data with contextual information that helps to explain why variations occur and thereby clarify the extent to which variations are problematic. Our findings were also consistent with the findings of EPA's Inspector General and OECA that regions vary in the way they oversee state-delegated programs. In this regard, contrary to EPA policy, some regions did not (1) conduct an adequate number of oversight inspections of state programs, (2) sufficiently encourage states to consider economic benefit in calculating penalties, (3) take more direct federal actions where states were slow to act, and (4) require states to report all significant violators. Regional and state officials generally indicated that it was difficult for them to ascertain the extent of variation in regional enforcement activities, given their focus on activities within their own geographic environment. However, EPA headquarters officials responsible for the air and water programs noted that such variation is fairly commonplace and does pose problems.
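The inspection coverage figures cited for figure 1 are simple ratios of inspections conducted to facilities subject to inspection. The minimal Python sketch below illustrates the calculation; the counts in it are hypothetical, chosen only to reproduce the reported 80 and 27 percent endpoints.

def inspection_coverage(inspections, facilities_subject):
    # Percentage of facilities subject to inspection that were inspected.
    return 100 * inspections / facilities_subject

# Hypothetical counts matching the reported extremes for fiscal year 2000.
print(inspection_coverage(400, 500))    # 80.0, the Region 3 high
print(inspection_coverage(270, 1_000))  # 27.0, the Regions 1 and 2 low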
The director of OECA’s water enforcement division, for example, told us that, in reacting to similar violations, enforcement responses in certain regions are stronger than they are in others and that such inconsistencies have increased. Similarly, the director of OECA’s air enforcement division said that, given the considerable autonomy of the regional offices, it is not surprising that variations exist in how they approach enforcement and state oversight. In this regard, the director noted, disparities exist among regions in the number and quality of inspections conducted and in the number of permits written in relation to the number of sources requiring permits. In response to these findings, a number of regions have begun to develop and implement state audit protocols, believing that having such protocols could help them review the state programs within their jurisdiction with greater consistency. Here, too, regional approaches differ. For example: Region 1, in Boston, has adopted a comprehensive “multimedia” approach in which it simultaneously audits all of a state’s delegated environmental programs. Region 3, in Philadelphia, favors a more targeted approach in which air, water, and waste programs are audited individually. In Region 5, in Chicago, the office’s air enforcement branch chief said that he did not view an audit protocol as particularly useful, noting that he prefers regional staff to engage in joint inspections with states to assess the states’ performance in the field and to take direct federal action when a state action is inadequate. We recognize the potential of these protocols to achieve greater consistency by a region in its oversight of its states, and the need to tailor such protocols to meet regional concerns. However, we also believe that EPA guidance on key elements that should be common to all protocols would help engender a higher level of consistency among all 10 regions in how they oversee states. While EPA’s data show variations in key measures associated with the agency’s enforcement program, they do little to explain the causes of the variations. Without information on causes, it is difficult to determine the extent to which variations represent a problem, are preventable, or reflect appropriate regional and state flexibility in applying national program goals to unique circumstances. Our work identified the following causes: (1) differences in philosophical approaches to enforcement, (2) incomplete and inaccurate national enforcement data, and (3) an antiquated workforce planning and allocation system. While OECA has issued policies, memorandums, and other documents to guide regions in their approach to enforcement, the considerable autonomy built into EPA’s decentralized, multilevel organizational structure allows regional offices considerable latitude in adapting headquarters’ direction in a way they believe best suits their jurisdiction. The variations we identified often reflect different enforcement approaches in determining whether the region should (1) rely predominantly on fines and other traditional enforcement methods to deter noncompliance and to bring violators into compliance or (2) place greater reliance on alternative strategies, such as compliance assistance (workshops, site visits, and other activities to identify and resolve potential compliance problems). 
Regions have also differed on whether deterrence could be achieved best through a small number of high-profile, resource-intensive cases or a larger number of smaller cases that establish a more widespread, albeit lower profile, enforcement presence. Further complicating matters are the wide differences among states in their enforcement approaches and the various ways in which regions respond to these differences. Some regions step more readily into cases when they consider a state’s action to be inadequate, while other regions are more concerned about infringing on the discretion of states that have been delegated enforcement responsibilities. While all of these approaches may be permissible, EPA has experienced problems in identifying and communicating the extent to which variation either represents a problem or the appropriate exercise of flexibility by regions and states to apply national program goals to their unique circumstances. OECA needs accurate and complete enforcement data to determine whether regions and states are consistently implementing core program requirements and, if not, whether significant variations in meeting these requirements should be corrected. The region or the state responsible for carrying out the enforcement program is responsible for entering data into EPA’s national databases. However, both the quality of and quality controls over these data were criticized by state and regional staff we interviewed: “managers in the regions and in OECA headquarters have become increasingly frustrated that they are not receiving from [these systems] the reports and data analyses they need to manage their programs… [There] has been less attention to the data in the national systems, a commensurate decline in data quality, and insufficient use of data by enforcement/compliance managers.” Consistent with our findings and recommendations, EPA’s Office of Inspector General recently reported that, “OECA’s 2005 publicly-reported GPRA performance measures do not effectively characterize changes in compliance or other outcomes because OECA lacks reliable compliance rates and other reliable outcome data. In the absence of compliance rates, OECA reports proxies for compliance to the public and does not know if compliance is actually going up or down. As a result, OECA does not have all the data it needs to make management and program decisions. What is missing most, the biggest gap, is information about compliance rates. OECA cannot demonstrate the reliability of other measures because it has not verified that estimated, predicted, or facility self-reported outcomes actually took place. Some measures do not clearly link to OECA’s strategic goals. Finally, OECA frequently changed its performance measures from year to year, which reduced transparency.” For example, between fiscal years 1999 and 2005, OECA reported as few as 23 and as many as 69 performance measures, depending on the fiscal year. Although EPA is working to improve its data, the problems are extensive and complex. For example, the Inspector General recently reported that OECA cannot generate programmatic compliance information for five of six program areas; lacks knowledge of the number, location, and levels of compliance for a significant portion of its regulated universe; and concentrates most of its regulatory activities on large entities and knows little about the identities or cumulative impact of small entities.
Consequently, the Inspector General reported, OECA currently cannot develop programmatic compliance information, adequately report on the size of the universe for which it maintains responsibility, or rely on the regulated universe data to assess the effectiveness of enforcement strategies. As we reported, EPA’s process for budgeting and allocating resources does not fully consider the agency’s current workload, either for specific statutory requirements, such as those included in the Clean Water Act, or for broader goals and objectives in the agency’s strategic plan. Instead, in preparing its requests for funding and staffing, EPA makes incremental adjustments, largely based on historical precedents, and thus its process does not reflect a bottom-up review of the nature or distribution of the current workload. While EPA has initiated several projects over the past decade to improve its workload and workforce assessment systems, it continues to face major challenges in this area. If EPA is to substantially improve its resource planning, we reported, it must adopt a more rigorous and systematic process for (1) obtaining reliable data on key workload indicators, such as the quality of water in particular areas, which can be used to budget and allocate resources, and (2) designing budget and cost accounting systems that are able to isolate the resources needed and allocated to key enforcement activities. Without reliable workforce information, EPA cannot ensure consistency in its enforcement activities by hiring the right number or type of staff or allocating existing staff resources to meet current or future needs. In this regard, since 1990, EPA has hired thousands of employees without systematically considering the workforce impact of changes in environmental statutes and regulations, technological advances affecting the skills and expertise needed to conduct enforcement actions, or the expansion in state environmental staff. EPA has yet to factor these workforce changes into its allocation of existing staff resources to its headquarters and regional offices to meet its strategic goals. Consequently, should EPA either downsize or increase its enforcement and compliance staff, it would not have the information needed to determine how many employees are appropriate, what technical skills they must have, and how best to allocate employees among strategic goals and geographic locations in order to ensure that reductions or increases could be absorbed with minimal adverse impacts on the agency’s ability to carry out its mission. Over the past several years, EPA has initiated or planned several actions to improve its enforcement program. We believe that a few of these actions hold particular promise for addressing inconsistencies in regional enforcement activities. These actions include (1) the creation of a State Review Framework, (2) improvements in the quality of enforcement data, and (3) enhancements to the agency’s workforce planning and allocation system. The State Review Framework is a new process for conducting performance reviews of enforcement and compliance activities in the states (as well as for nondelegated programs implemented by EPA regions). These reviews are intended to provide a mechanism by which EPA can ensure a consistent level of environmental and public health protection across the country.
OECA is in the second year of a 3-year project to make State Review Framework reviews an integral part of the regional and state oversight and planning process and to integrate any regional or state corrective or follow-up actions into working agreements between headquarters, regions, and states. It is too early to assess whether the process will provide an effective means of ensuring more consistent enforcement actions and oversight of state programs to help ensure a level playing field for the regulated community across the country. Issues that still need to be addressed include how EPA will assess states’ implementation of alternative enforcement and compliance strategies, such as strategies to:

- assist businesses in their efforts to comply with environmental regulations;
- encourage businesses to take steps to reduce pollution;
- offer incentives (e.g., public recognition) for businesses that demonstrate good records of compliance; and
- encourage businesses to participate in programs to audit their environmental performance and make the results of these audits and corrective actions available to EPA, other environmental regulators, and the public.

Regardless of other improvements EPA makes to the enforcement program, it needs sufficient environmental data to measure changes in environmental conditions, assess the effectiveness of the program, and make decisions about resource allocations. Through its Environmental Indicators Initiative and other efforts, EPA has made some progress in addressing critical data gaps in the agency’s environmental information. However, the agency still has a long way to go in obtaining the data it needs to manage for environmental results and needs to work with its state and other partners to build on its efforts to fill critical gaps in environmental data. Filling such gaps in EPA’s knowledge of environmental conditions and trends should, in turn, translate into better approaches to allocating funds to achieve desired environmental results. Such knowledge will be useful in making future decisions related to strategic planning, resource allocations, and program management. Nevertheless, most of the performance measures that EPA and the states are still using focus on outputs, such as the number of environmental pollution permits issued, the number of environmental standards established, and the number of facilities inspected, rather than on results. These types of measures can provide important information for EPA and state managers to use in managing their programs, but they do not reflect the actual environmental outcomes that EPA must know in order to ensure that resources are being allocated in the most cost-effective ways to improve environmental conditions and public health. EPA also has worked with the states and regional offices to improve enforcement data in its Permit Compliance System and believes that its efforts have improved data quality. EPA officials said that the system will be incorporated into the Integrated Compliance Information System, which is being phased in this year. According to information EPA provided, the modernization effort will identify the data elements to be entered and maintained by the states and regions and will include additional data entry for minor facilities and special regulatory program areas, such as concentrated animal feeding operations, combined sewer overflows, and storm water.
Regarding the National Water Quality Inventory, the Office of Water recently began advocating the use of standardized, probability-based statistical surveys of state waters so that water quality information would be comparable among states and from year to year. While these efforts are steps in the right direction, progress in this area has been slow, and the benefits of initiatives currently in the discussion or planning stages are likely to be years away from realization. For example, initiatives to improve EPA’s ability to manage for environmental results are essentially long-term efforts. They will require a sustained commitment of management attention, follow-through, and support—including the dedication of appropriate and sufficient resources—for their potential to be fully realized. A number of similar initiatives in the past have been short-lived and have made few lasting contributions to improved performance management. The ultimate payoff will depend on how fully EPA’s organization and management support these initiatives and the extent to which identified needs are addressed in a determined, systematic, and sustained fashion over the next several years. Since the late 1990s, EPA has made progress in improving the management of its human capital. EPA’s human capital strategic plan was designed to ensure a systematic process for identifying the agency’s human capital requirements to meet strategic goals. Furthermore, EPA’s strategic planning includes a cross-goal strategy to link strategic planning efforts to the agency’s human capital strategy. Despite such progress, effectively implementing a human capital strategic plan remains a major challenge. Consequently, the agency needs to continue monitoring progress in developing a system that will ensure a well-trained and motivated workforce with the right mix of skills and experience. In this regard, the agency still has not taken the actions that we recommended in July 2001 to comprehensively assess its workforce—how many employees it needs to accomplish its mission, what and where technical skills are required, and how best to allocate employees among EPA’s strategic goals and geographic locations. Furthermore, as previously mentioned, EPA’s process for budgeting and allocating resources does not fully consider the agency’s current workload. With prior years’ allocations as the baseline, year-to-year changes are marginal and occur in response to (1) direction from the Office of Management and Budget and the Congress, (2) spending caps imposed by EPA’s Office of the Chief Financial Officer, and (3) priorities negotiated by senior agency managers. EPA’s program offices and regions have some flexibility in realigning resources based on their actual workload, but the overall impact of these changes is also minor, according to agency officials. Changes at the margin may not be sufficient because both the nature and distribution of the workload have changed as the scope of regulated activities has increased and as EPA has taken on new responsibilities while shifting others to the states. For example, controls over pollution from storm water and animal waste at concentrated feeding operations have increased the number of regulated entities by hundreds of thousands and required more resources in some regions of the country.
However, EPA may be unable to respond effectively to changing needs and constrained resources because it does not have a system in place to conduct periodic “bottom-up” assessments of the work that needs to be done, the distribution of the workload, or the staff and other resource needs. Mr. Chairman, to its credit, EPA has initiated a number of actions to improve its enforcement activities and has invested considerable time and resources to make these activities more effective and efficient. While we applaud EPA’s actions, they have thus far achieved only limited success and illustrate both the importance and the difficulty of addressing the long-standing problems in ensuring the consistent application of enforcement requirements, fines and penalties for violations of requirements, and the oversight of state environmental programs. To finish the job, EPA must remain committed to continuing the steps that it has already taken. In this regard, given the difficulty of the improvements that EPA is attempting to make and the time likely to be required to achieve them, it is important that the agency remain vigilant. It needs to guard against any erosion of its progress by factors that have hampered past improvement efforts, such as changes in top management and priorities and constraints on available resources. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. If you have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Major contributors to this testimony include Ed Kratzer, John C. Smith, Ralph Lowry, Ignacio Yanes, Kevin Bray, and Carol Herrnstadt Shulman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Environmental Protection Agency (EPA) enforces the nation's environmental laws and regulations through its Office of Enforcement and Compliance Assurance (OECA). While OECA provides overall direction on enforcement policies and occasionally takes direct enforcement action, many enforcement responsibilities are carried out by EPA's 10 regional offices. In addition, these offices oversee the enforcement programs of state agencies that have been delegated the authority to enforce federal environmental protection regulations. This testimony is based on GAO's reports on EPA's enforcement activities issued over the past several years and on observations from ongoing work being performed at the request of the Senate Committee on Environment and Public Works, and the Subcommittee on Interior, Environment, and Related Agencies, House Committee on Appropriations. GAO's previous reports examined the (1) consistency among EPA regions in carrying out enforcement activities, (2) factors that contribute to any inconsistency, and (3) EPA's actions to address these factors. GAO's current work examines how EPA, in consultation with regions and states, sets priorities for compliance and enforcement and how the agency and states determine respective compliance and enforcement roles and responsibilities and allocate resources for these purposes.
EPA regions vary substantially in the actions they take to enforce environmental requirements, according to GAO's analysis of key management indicators that EPA headquarters uses to monitor regional performance. These indicators include the number of inspections performed at regulated facilities and the amount of penalties assessed for noncompliance with environmental regulations. In addition, the regions differ substantially in their overall strategies to oversee states within their jurisdictions. For example, contrary to EPA policy, some regions did not require states to report all significant violators, while other regions adhered to EPA's policy in this regard. GAO identified several factors that contribute to regional variations in enforcement. These factors include (1) differences in philosophy among regional enforcement staff about how best to secure compliance with environmental requirements; (2) incomplete and unreliable enforcement data that impede EPA's ability to accurately determine the extent to which variations occur; and (3) an antiquated workforce planning and allocation system that is not adequate for deploying staff in a manner to ensure consistency and effectiveness in enforcing environmental requirements. EPA recognizes that while some variation in environmental enforcement is necessary to reflect local conditions, core enforcement requirements must be consistently implemented to ensure fairness and equitable treatment. Consequently, similar violations should be met with similar enforcement responses regardless of geographic location. In response to GAO findings and recommendations, EPA has initiated or planned several long-term actions that are intended to achieve greater consistency in state and regional enforcement actions. These include (1) a new State Review Framework process for measuring states' performance of core enforcement activities, (2) a number of initiatives to improve the agency's compliance and enforcement data, and (3) enhancements to the agency's workforce planning and allocation system to improve the agency's ability to match its staff and technical capabilities with the needs of individual regions. However, these actions have yet to achieve significant results and will likely require a number of years and a steady top-level commitment of staff and financial resources to substantially improve EPA's ability to target enforcement actions in a consistent and equitable manner. |
Most Medicare beneficiaries receive their care on a fee-for-service (FFS) basis, with providers submitting claims for payment for each service provided. In addition to the Part A/B and durable medical equipment (DME) Medicare Administrative Contractors (MACs) that process and pay claims, CMS also employs other types of contractors to specifically address fraud and improper payments. These include:

- Recovery Audit contractors (RA), which review claims postpayment in four RA jurisdictions to identify improper payments, and
- Zone Program Integrity Contractors (ZPIC), which review claims on a pre- and postpayment basis in seven ZPIC jurisdictions to identify potential fraud.

All of these contractors use data analysis to identify providers who bill improperly, whether by mistake or intentionally, to help target their claims review. CMS has expanded its Integrated Data Repository, which was set up to integrate Medicare and Medicaid claims, beneficiary, provider, and other data, and is currently populated with 5 years of historical Part A, Part B, and Part D paid claims data. CMS’s contractors can use these data to analyze previously undetected indicators of aberrant billing activity throughout the claims processing cycle. CMS intends to develop shared data models and is pursuing data sharing and matching agreements with other federal agencies to identify potential fraud, waste, and abuse throughout federal health care programs. CMS has set expectations that RAs and ZPICs will provide information on types of potentially problematic claims to help the agency identify vulnerabilities. CMS has also recently developed a “Fraud Prevention System,” which uses predictive modeling technology to screen all FFS claims before payment is made. Claims are streamed through the Fraud Prevention System prior to payment and analyzed on the basis of algorithms that incorporate other information, such as past billing, to identify patterns of potentially fraudulent billing by providers. The billing is prioritized for risk of fraud, with the highest-priority cases investigated by ZPICs. Prior to applying predictive models to claims prepayment, CMS tests the algorithms to try to ensure that resources are targeted to the highest-risk claims or providers while payment of claims to legitimate providers continues without disruption. Consistent with Medicare law, CMS sets national coverage and payment policies regarding when and how services will be covered by Medicare, as well as coding and billing requirements for claims. CMS has developed national payment policies related to Medically Unlikely Edits (MUE) to limit potentially improper and excess payments to providers for many services, especially those that are prone to potential fraud or that result from billing errors. A CMS MUE Workgroup, which includes staff from CMS and the MUE contractor, is responsible for developing the national MUE limits, in consultation with the medical community. MUE limits are developed as per-day limits on the number of units of a given service or medical product that can be provided by the same physician to the same beneficiary. The limits are developed on the basis of coding conventions defined in the American Medical Association’s (AMA) Current Procedural Terminology manual, national and local policies and edits, coding guidelines developed by national societies, analysis of standard medical and surgical practice, and analysis of current provider billing practices.
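Before turning to how the MUE limits are implemented, it may help to illustrate the kind of prepayment screening the Fraud Prevention System performs. The details of CMS’s predictive models are not public, so the following is only a minimal sketch, in Python, of the general pattern described above: each incoming claim is scored against contextual information such as the provider’s past billing, and the highest-risk billing is prioritized for investigation while other claims continue through to payment. All field names, the scoring rule, the service code, and the threshold are hypothetical illustrations, not CMS’s actual algorithms.

```python
# Illustrative prepayment screening; the scoring rule and threshold are
# hypothetical stand-ins for CMS's predictive models, which are not public.

def risk_score(claim, billing_history):
    """Toy model: compare the units billed on a claim with the provider's
    historical average daily units for the same service."""
    avg_units = billing_history.get((claim["provider"], claim["service"]), 1.0)
    return claim["units"] / avg_units

def screen_claims(claims, billing_history, threshold=5.0):
    """Stream claims through the model before payment, routing the
    highest-risk billing to investigators and paying the rest."""
    investigate, pay = [], []
    for claim in claims:
        if risk_score(claim, billing_history) >= threshold:
            investigate.append(claim)  # prioritized for program integrity review
        else:
            pay.append(claim)          # legitimate billing proceeds without disruption
    return investigate, pay

# Hypothetical data: provider P200 bills far above its historical norm.
history = {("P100", "X0001"): 2.0, ("P200", "X0001"): 2.0}
claims = [
    {"provider": "P100", "service": "X0001", "units": 3},   # near the norm: paid
    {"provider": "P200", "service": "X0001", "units": 40},  # far above the norm: flagged
]
to_investigate, to_pay = screen_claims(claims, history)
print(len(to_investigate), "claim(s) prioritized for investigation,", len(to_pay), "paid")
```

Testing a threshold like this against historical claims before applying it prepayment, as the report notes CMS does, is what keeps such screening from disrupting payment to legitimate providers.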
Prior to their implementation, proposed MUE limits are released for a review and comment period to the AMA, national medical/surgical societies, and other national health care organizations. However, unpublished MUEs are not released for comment. The MUE files are updated quarterly, and new limits may be added at these times. Although MUEs were developed as limits on the number of units of a service a provider could bill for a beneficiary in a single day, as we previously reported, they are not implemented as such. Specifically, they do not look at total units on all claims from one provider for the same beneficiary across an entire day, and the limits may therefore be exceeded. A claim can have multiple lines, and providers may bill multiple units of the same service for the same beneficiary on the same day on multiple lines of a claim. In processing the claim, contractors’ automated systems examine only the number of units on each claim line. If the number of units on the claim line exceeds the MUE limit, the entire claim line is denied. However, as long as the units on a claim line are at or below the MUE limit, they are paid. Thus, the automated claims-processing systems allow the MUE per-day limits to be exceeded for a beneficiary if providers bill multiple units of the same service on multiple claim lines. The systems also allow limits to be exceeded for a beneficiary if a provider bills for multiple units of the same service performed on the same day on different claims. When claiming multiple units of the same service for one beneficiary, providers may, but are not required to, include a “modifier”—a special code that indicates why the additional units are medically necessary. MACs may develop local coverage policies as long as these policies are consistent with national policies. To implement these local policies, some MACs have developed local edits for certain services. Similar to the national MUEs, these local edits set limits on the maximum number of units that may be billed by a provider for the same beneficiary on the same day. Providers may not exceed the local limits by billing additional units on multiple claim lines, unless they include modifiers to explain why the additional units are medically necessary. The local edits were developed for services that may be overused or abused in a MAC’s jurisdiction, including services for which the MUE limits were being frequently exceeded. Without these local edits, the MUE limits would be exceeded much more frequently. The local limits were developed on the basis of clinical input from the MACs’ medical directors and other clinicians, as well as analysis of claims data. The vast majority of Medicare payments in 2011 for services with unpublished MUEs were for services where the numbers of units were at or below the per-day MUE limits. However, because the MUE limits were not implemented as per-day limits, approximately $14 million was paid for services that exceeded MUE limits. Moreover, by applying on a national basis the more restrictive local limits used by some contractors (which are implemented as per-day limits), we found that CMS could have lowered payments by an additional $7.8 million. We also found that payments exceeding unpublished MUE limits were concentrated within certain services and states. In 2011, Medicare paid approximately $23.9 billion for 1,845 types of services with unpublished MUEs.
The vast majority—about 99.9 percent—was paid for services where the number of units providers billed was at or below the per-day MUE limits. The MUE contractor indicated that the limits were generally set high, so that the MUEs would not deny claims for medically necessary services. However, because MUEs were not implemented as per-day limits, approximately $14 million was paid for services where total units billed by a provider for a beneficiary on the same day exceeded the MUE limits. These payments were made for units of services exceeding MUE limits that were billed on multiple lines of a claim or across multiple claims. Although the automated claims-processing systems check each claim line, they do not check all units billed by a provider for a beneficiary on the same day to see if they exceed the limit. While providers may use modifiers on claim lines to indicate when it is medically necessary to exceed the MUE limits, no modifiers were included for the approximately $14 million in payments that we identified as exceeding the unpublished MUE limits to explain why the additional units were medically appropriate. CMS does not expect its contractors to check claims to determine if modifiers are included when billing additional units of services related to unpublished MUEs on multiple lines. CMS officials stated that because the MUEs are unpublished, providers may not know a given service has an MUE and therefore may not include a modifier when billing for services. See GAO-13-102. CMS has since drafted a policy under which some MUEs would become date-of-service (DOS) edits, which assess all units billed by a provider for a beneficiary on the same day. Some MUEs that are likely to become DOS edits include those where it is anatomically impossible to exceed the MUE limit. For example, an anatomical limit, such as having only two eyes, restricts the number of times a given procedure could appropriately be performed and billed for the same patient on the same day. CMS officials told us that they probably will not apply this policy to some of the unpublished MUEs where clear anatomical or other restrictions may not exist, such as those for some Part B drugs and DME. Contractors that we interviewed were aware of the new policy, had seen the draft version, and were generally supportive of the effort. Our examination of 13 services for which MACs developed more restrictive local edits than the unpublished MUEs showed that Medicare payments could have been reduced had CMS examined these edits and adopted them as part of its program integrity responsibilities. If CMS had used these limits and implemented them as per-day edits, instead of using the unpublished MUE limits on these services, Medicare payments would have been lowered by an additional $7.8 million. This indicates that there is a potential for additional savings if some of these local edits were applied nationally. Four of the MACs from which we requested local edits had implemented edits related to unpublished MUEs. At least three of the contractors had more restrictive limits for the 13 services we analyzed. Contractors told us that they had developed more restrictive edits because the MUE limits were being exceeded frequently or they had observed potentially fraudulent or abusive billing for these services. While the unpublished MUE limits were implemented at a claim-line level, contractors told us that their local limits were implemented as per-day limits. Contractors also told us that CMS does not request information on their local edits, nor do they routinely share them with CMS.
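To make the claim-line versus per-day distinction concrete, the following is a minimal sketch in Python, using hypothetical claim lines, a hypothetical service code, and a hypothetical unit limit rather than any actual MUE value. It shows how units split across claim lines or across claims can pass a line-level check yet exceed a per-day limit once all units for the same provider, beneficiary, service, and date are summed, which is the aggregation that GAO’s analysis and the contractors’ local per-day edits perform.

```python
from collections import defaultdict

MUE_LIMIT = 4  # hypothetical per-day unit limit for one service code

# Hypothetical claim lines: (claim_id, provider, beneficiary, service, date, units, modifier)
claim_lines = [
    ("C1", "P100", "B200", "X0001", "2011-03-01", 3, None),
    ("C1", "P100", "B200", "X0001", "2011-03-01", 3, None),  # second line, same claim
    ("C2", "P100", "B200", "X0001", "2011-03-01", 2, None),  # separate claim, same day
]

# Claim-line edit, as the automated systems applied the unpublished MUEs:
# each line is tested on its own, so every line here passes and is paid,
# even though 3 + 3 + 2 = 8 units were billed for the same day.
for claim_id, provider, bene, service, date, units, modifier in claim_lines:
    status = "denied" if units > MUE_LIMIT else "paid"
    print(f"line-level check: claim {claim_id}, {units} units -> {status}")

# Per-day edit: sum all units billed by the same provider for the same
# beneficiary, service, and date, and flag totals over the limit unless
# a modifier explains why the extra units are medically necessary.
totals = defaultdict(int)
has_modifier = defaultdict(bool)
for claim_id, provider, bene, service, date, units, modifier in claim_lines:
    key = (provider, bene, service, date)
    totals[key] += units
    has_modifier[key] |= modifier is not None

for key, total in totals.items():
    if total > MUE_LIMIT and not has_modifier[key]:
        print(f"per-day check: {total} units for {key} exceed the limit of {MUE_LIMIT}")
```

In this toy example, the line-level check pays all eight units, while the per-day aggregation flags the excess; it is this aggregation step that the local per-day edits include and the national claim-line implementation omits.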
The MUE contractor told us that the MUE Workgroup was aware of one contractor’s local edits for certain services with unpublished MUEs, but was not aware of other contractors’ local edits for these services. Because CMS has not communicated with its contractors regarding their local edits or monitored their use, it is not evaluating these local edits. As a result, it may be missing an opportunity to identify situations in which savings could be achieved by implementing some of the local edits nationally. Payments for services that exceeded the per-day MUE limits were concentrated within certain services. For example, of the over 1,800 services with unpublished MUEs, 717 had payments that exceeded the MUE limits. Of these, 20 services accounted for almost half of all payments that exceeded the MUE limits, with the top service alone accounting for over 8 percent of such payments. Many of these top 20 services were for prescription drugs, DME, and clinical laboratory services. Payments for services that exceeded the unpublished MUE limits also tended to be concentrated in certain states. The five states with the highest payments that exceeded the MUE limits (Arkansas, California, New York, Pennsylvania, and Texas) accounted for almost half of these payments, although they accounted for 30 percent of total payments for all services with unpublished MUEs. CMS and its contractors do not have a system in place for examining claims to determine the extent to which providers may be exceeding unpublished MUE limits and whether payments for such services were proper. Payments that exceeded MUE limits were concentrated among certain providers, which could facilitate such examination. CMS officials and contractors that we interviewed said they do not have a system in place for regularly examining claims related to services with unpublished MUEs from providers that most often exceeded MUE limits. While CMS has several strategies to reduce improper payments, and it reviews aberrant billing patterns at a provider level, that is, across all services billed by the provider, officials told us that they have no plans to review services specifically related to MUEs. Similarly, contractors told us that they do not examine claims specifically related to MUEs, although they do review claims to detect other aberrant billing patterns and identify emerging new vulnerabilities. For example, one contractor told us it evaluates weekly billing reports to examine whether its medical review strategies are appropriate and focused on problem areas. It also reviews data from multiple other sources, including reports from the Office of the Inspector General, reports we have issued, and findings from the RAs. However, the contractor’s reviews are conducted at a provider level, that is, across all services billed by the provider, but not specifically for services with unpublished MUE limits. As a result, providers may be unlikely to have their billing reviewed more closely if they frequently bill above unpublished MUE limits but do not have other aberrant billing patterns. To determine whether contractors were scrutinizing the billing patterns of providers that exceeded the limits, we provided each contractor we interviewed with a list of 10 providers in its jurisdiction with payments of at least $3,000 that exceeded the unpublished MUE limits. One contractor told us that it was reviewing claims submitted by 1 of the 10 providers on the list we had forwarded to it.
The contractor had received a potential fraud referral on this provider, although not specifically related to billings for services with unpublished MUEs. However, the remaining contractors were not reviewing any of the providers we identified. We found that a small number of providers accounted for a large share of payments for services that exceeded the unpublished MUE limits. For example, 419 providers received at least $5,000 for services that exceeded the unpublished MUEs in 2011. Of these, the 100 providers with the highest payments that exceeded the MUE limits accounted for nearly 44 percent of total excess payments, although they accounted for only about 1 percent of total payments for all services with unpublished MUEs. In addition, the provider with the highest payments that exceeded the unpublished MUE limits alone accounted for about 4 percent of these payments, although this provider accounted for less than 0.1 percent of total payments for all services with unpublished MUEs. Certain provider types were more likely to have payments that exceeded the MUE limits. About 26 percent of the top 100 providers exceeding unpublished MUE limits were clinical laboratories and DME providers. Researchers have noted that there is potential for fraud and abuse with some laboratory services that can be self-referred, such as certain pathology tests. For example, a pathologist examining a surgical pathology specimen may self-refer by ordering and performing additional tests on the pathology specimen without seeking the consent of the original ordering physician. Some contractors we interviewed told us that certain DME items, such as diabetic testing supplies, are prone to potentially fraudulent billing. CMS has also estimated improper DME billing of 66 percent in fiscal year 2012—higher than for any other service measured. Developing more cost-effective strategies for ensuring the appropriateness of Medicare payments could help support the long-term sustainability of the program. Although almost all payments for services with unpublished MUEs were made for services at or below the MUE limits, we found that there are still opportunities to realize savings. When analyzed on a per-day basis, payments that potentially should not have been made for services that exceeded the unpublished MUE limits totaled approximately $14 million. In November 2012, we recommended that CMS implement MUEs that assess all quantities of services provided to the same beneficiary by the same provider on the same day—in other words, as per-day limits—but allow the limits to be exceeded if the provider included modifiers to explain the medical necessity of exceeding the limits. The MUE contractor recently announced that CMS began implementing our recommendation for certain services as of April 1, 2013. However, CMS officials told us these per-day limits are unlikely to be applied to some of the services with unpublished MUEs, such as Part B drugs and DME services. We continue to believe that our recommendation should be implemented for all MUEs to help strengthen the financial health of the program. Continuously seeking new methods for improving oversight of provider payments is another important way to strengthen program integrity. Contractors’ local edits could serve as a resource for CMS to use in developing or revising MUEs and in reducing payments for services that are potentially improperly billed. Unpublished MUEs were developed for services and items that have been fraudulently or abusively billed in the past.
Therefore, systematically examining billing information and claims from providers that exceed these limits and do not use modifiers to indicate that the excess units are medically appropriate could help identify improper payments and inform CMS’s program integrity efforts. To improve the effectiveness of the unpublished MUEs and better ensure Medicare program integrity, we recommend that the CMS Administrator take the following two actions:

- examine contractor local edits related to unpublished MUEs to determine whether any of the national unpublished MUE limits should be revised; and
- consider periodically reviewing claims to identify the providers exceeding the unpublished MUE limits and determine whether their billing was proper.

We provided a draft of this report to HHS for comment and received written comments, which are reprinted in appendix I. In its written comments, HHS concurred with both our recommendations. For the recommendation to examine contractor local edits related to unpublished MUEs, HHS indicated that CMS would consider revising the MUEs to ensure that the edit levels are appropriate, on the basis of input from national health care organizations, providers, Medicare Administrative Contractors, and CMS personnel, as well as data analysis. For the second recommendation, HHS indicated that CMS would conduct further analysis to determine the appropriate actions, if necessary. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Sheila K. Avruch, Assistant Director; Iola D’Souza; Eagan Kemp; Richard Lipinski; and Laurie Pachter made key contributions to this report. | CMS has estimated improper Medicare fee-for-service payments of $29.6 billion in fiscal year 2012. To help prevent improper payments, CMS has implemented national MUEs, which limit the amount of a service that is paid when billed by a provider for a beneficiary on the same day. The limits for certain services that have been fraudulently or abusively billed are unpublished to deter providers from billing up to the maximum allowable limit. GAO was asked to review issues related to MUEs. This report examines the extent to which CMS has (1) paid for services that exceeded the unpublished MUE limits and (2) examined billing from providers that exceeded unpublished MUE limits. GAO analyzed Medicare claims related to these limits in 2011, and interviewed CMS officials and selected contractors in states with high improper payments. Less than 0.1 percent of payments Medicare made in 2011 were for amounts of services that exceeded certain unpublished limits for excess billing and where the claims did not include information from the providers to indicate why the additional services were medically necessary.
These limits are set by the Centers for Medicare & Medicaid Services (CMS)--an agency within the Department of Health and Human Services (HHS)--as a means to avoid potentially improper payments. To implement these limits, CMS established automated controls in its payment systems called Medically Unlikely Edits (MUE). These MUEs compare the number of certain services billed against limits for the amount of services likely to be provided under normal medical practice to a beneficiary by the same provider on the same day--for example, no more than one of the same operation on each eye. GAO analysis of 2011 claims data found approximately $14 million out of a total of $23.9 billion in Medicare payments for services that exceeded unpublished MUE limits and where the claims did not include information from the providers to indicate why the additional services were medically necessary. As GAO has previously reported, claims could exceed the limits because the MUEs are not set up as per-day limits that assess all services billed by a provider for a single beneficiary on the same day. CMS plans to begin implementing MUEs as per-day limits for some services where it would be impossible to exceed the limits for anatomical or other reasons. Medicare contractors that pay claims may develop local edits, which can set more restrictive limits for some services than the national unpublished MUE limits. GAO's analysis of claims data applying a few of these more restrictive local limits showed that by applying them instead of the relevant national MUE limits, CMS could have lowered payments by an additional $7.8 million. However, CMS is not evaluating these local edits to determine if these lower limits might be more appropriate. To the extent that these and other local edits are not evaluated more systematically, CMS may be missing an opportunity to achieve savings by revising some national MUEs to correspond with more restrictive local limits. CMS and its contractors did not have a system in place for examining claims to determine the extent to which providers may be exceeding unpublished MUE limits and whether payments for such services were proper. CMS officials and contractors told GAO that they examine aberrant billing patterns at a provider level, that is, across all services billed by the provider, but not specifically for services with unpublished MUE limits. GAO found that payments that exceeded MUE limits were concentrated among certain providers and types of specialties, in certain states, and for certain services. For example, the top 100 providers with payments that exceeded the MUE limits accounted for nearly 44 percent of total payments that exceeded the MUE limits, although they accounted for only about 1 percent of total payments for all services with unpublished MUEs. Moreover, about 26 percent of the top 100 providers included clinical laboratories and durable medical equipment providers, both of which have been identified in the past as having high potential for fraudulent billings. Because unpublished MUEs were developed for services and items that have been fraudulently or abusively billed in the past, without systematically examining billing information and claims from the top providers exceeding those limits CMS may be missing another opportunity to improve its program integrity efforts.
GAO recommends that CMS examine contractor edits to determine if any national unpublished MUE limits should be revised, and consider reviewing claims to identify providers that exceed the unpublished MUE limits and determine whether their billing was proper. In its written comments, HHS concurred with both recommendations. |
The tens of thousands of individuals who responded to the September 11, 2001, attack on the WTC experienced the emotional trauma of the disaster and were exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. A wide variety of health effects have been experienced by responders to the WTC attack, and several federally funded programs have been created to address the health needs of these individuals. Numerous studies have documented the physical and mental health effects of the WTC attack. Physical health effects included injuries and respiratory conditions, such as sinusitis, asthma, and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Almost all firefighters who responded to the attack experienced respiratory effects, including WTC cough. One study suggested that exposed firefighters on average experienced a decline in lung function equivalent to that which would be produced by 12 years of aging. A recently published study found a significantly higher risk of newly diagnosed asthma among responders that was associated with increased exposure to the WTC disaster site. Commonly reported mental health effects among responders and other affected individuals included symptoms associated with post-traumatic stress disorder (PTSD), depression, and anxiety. Behavioral health effects, such as alcohol and tobacco use, have also been reported. Some health effects experienced by responders have persisted or worsened over time, leading many responders to begin seeking treatment years after September 11, 2001. Clinicians involved in screening, monitoring, and treating responders have found that many responders’ conditions—both physical and psychological—have not resolved and have developed into chronic disorders that require long-term monitoring. For example, findings from a study conducted by clinicians at the New York/New Jersey (NY/NJ) WTC Consortium show that at the time of examination, up to 2.5 years after the start of the rescue and recovery effort, 59 percent of responders enrolled in the program were still experiencing new or worsened respiratory symptoms. Experts studying the mental health of responders found that about 2 years after the WTC attack, responders had higher rates of PTSD and other psychological conditions compared with others in similar jobs who were not WTC responders and others in the general population. Clinicians also anticipate that other health effects, such as immunological disorders and cancers, may emerge over time. There are six key programs that currently receive federal funding to provide voluntary health screening, monitoring, or treatment at no cost to responders. The six WTC health programs, shown in table 1, are (1) the New York City Fire Department (FDNY) WTC Medical Monitoring and Treatment Program; (2) the NY/NJ WTC Consortium, which comprises five clinical centers in the NY/NJ area; (3) the WTC Federal Responder Screening Program; (4) the WTC Health Registry; (5) Project COPE; and (6) the Police Organization Providing Peer Assistance (POPPA) program. The programs vary in aspects such as the HHS agency or component responsible for administering the funding; the implementing agency, component, or organization responsible for providing program services; eligibility requirements; and services offered.
The WTC health programs that are providing screening and monitoring are tracking thousands of individuals who were affected by the WTC disaster. As of June 2007, the FDNY WTC program had screened about 14,500 responders and had conducted follow-up examinations for about 13,500 of these responders, while the NY/NJ WTC Consortium had screened about 20,000 responders and had conducted follow-up examinations for about 8,000 of these responders. These responders include some nonfederal responders residing outside the New York City (NYC) metropolitan area. As of June 2007, the WTC Federal Responder Screening Program had screened 1,305 federal responders and referred 281 responders for employee assistance program services or specialty diagnostic services. In addition, the WTC Health Registry, a monitoring program that consists of periodic surveys of self-reported health status and related studies but does not provide in-person screening or monitoring, collected baseline health data from over 71,000 people who enrolled in the Registry. In the winter of 2006, the Registry began its first adult follow-up survey, and as of June 2007 over 36,000 individuals had completed the follow-up survey. In addition to providing medical examinations, FDNY’s WTC program and the NY/NJ WTC Consortium have collected information for use in scientific research to better understand the health effects of the WTC attack and other disasters. The WTC Health Registry is also collecting information to assess the long-term public health consequences of the disaster. Beginning in October 2001 and continuing through 2003, FDNY’s WTC program, the NY/NJ WTC Consortium, the WTC Federal Responder Screening Program, and the WTC Health Registry received federal funding to provide services to responders. This funding came primarily from appropriations to the Department of Homeland Security’s Federal Emergency Management Agency (FEMA), as part of the approximately $8.8 billion that the Congress appropriated to FEMA for response and recovery activities after the WTC disaster. FEMA entered into interagency agreements with HHS agencies to distribute the funding to the programs. For example, FEMA entered into an agreement with the National Institute for Occupational Safety and Health (NIOSH) to distribute $90 million appropriated in 2003 that was available for monitoring. FEMA also entered into an agreement with the Office of the Assistant Secretary for Preparedness and Response (ASPR) for ASPR to administer the WTC Federal Responder Screening Program. A $75 million appropriation to the Centers for Disease Control and Prevention (CDC) in fiscal year 2006 for purposes related to the WTC attack resulted in additional funding for the monitoring activities of the FDNY WTC program, the NY/NJ WTC Consortium, and the Registry. The $75 million appropriation to CDC in fiscal year 2006 also provided funds that were awarded to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program for treatment services for responders. An emergency supplemental appropriation to CDC in May 2007 included an additional $50 million to carry out the same activities provided for in the $75 million appropriation made in fiscal year 2006. The President’s proposed fiscal year 2008 budget for HHS includes $25 million for treatment of WTC-related illnesses for responders. In February 2006, the Secretary of HHS designated the Director of NIOSH to take the lead in ensuring that the WTC health programs are well coordinated, and in September 2006 the Secretary established a WTC Task Force to advise him on federal policies and funding issues related to responders’ health conditions.
The chair of the task force is HHS’s Assistant Secretary for Health, and the vice chair is the Director of NIOSH. The task force reported to the Secretary of HHS in early April 2007. HHS’s WTC Federal Responder Screening Program has had difficulty ensuring the uninterrupted availability of services for federal responders. First, the provision of screening examinations has been intermittent. (See fig. 1.) After resuming screening examinations in December 2005 and conducting them for about a year, HHS again placed the program on hold and suspended scheduling of screening examinations for responders from January 2007 to May 2007. This interruption in service occurred because there was a change in the administration of the WTC Federal Responder Screening Program, and certain interagency agreements were not established in time to keep the program fully operational. In late December 2006, ASPR and NIOSH signed an interagency agreement giving NIOSH $2.1 million to administer the WTC Federal Responder Screening Program. Subsequently, NIOSH and Federal Occupational Health (FOH), the HHS component that performs the screening examinations, needed to sign a new interagency agreement to allow FOH to continue to be reimbursed for providing the examinations. It took several months for the agreement between NIOSH and FOH to be negotiated and approved, and scheduling of screening examinations did not resume until May 2007. Second, the program’s provision of specialty diagnostic services has also been intermittent. After initial screening examinations, responders often need further diagnostic services from ear, nose, and throat doctors; cardiologists; and pulmonologists, and FOH had been referring responders to these specialists and paying for the services. However, the program stopped scheduling and paying for these specialty diagnostic services in April 2006 because the program’s contract with a new provider network did not cover these services. In March 2007, FOH modified its contract with the provider network and resumed scheduling and paying for specialty diagnostic services for federal responders. In July 2007 we reported that NIOSH was considering expanding the WTC Federal Responder Screening Program to include monitoring examinations—follow-up physical and mental health examinations—and was assessing options for funding and delivering these services. If federal responders do not receive this type of monitoring, health conditions that arise later may not be diagnosed and treated, and knowledge of the health effects of the WTC disaster may be incomplete. In February 2007, NIOSH sent a letter to FEMA, which provides the funding for the program, asking whether the funding could be used to support monitoring in addition to the onetime screening currently offered. A NIOSH official told us that as of August 2007 the agency had not received a response from FEMA. NIOSH officials told us that if FEMA did not agree to pay for monitoring of federal responders, NIOSH would consider using other funding. According to a NIOSH official, if FEMA or NIOSH agrees to pay for monitoring of federal responders, this service would be provided by FOH or one of the other WTC health programs. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area, although it recently took steps toward expanding the availability of these services. Initially, NIOSH made two efforts to provide screening and monitoring services for these responders, whose exact number is unknown.
The first effort began in late 2002, when NIOSH awarded a contract for about $306,000 to the Mount Sinai School of Medicine to provide screening services for nonfederal responders residing outside the NYC metropolitan area and directed it to establish a subcontract with the Association of Occupational and Environmental Clinics (AOEC). AOEC then subcontracted with 32 of its member clinics across the country to provide screening services. From February 2003 to July 2004, the 32 AOEC member clinics screened 588 nonfederal responders nationwide. AOEC experienced challenges in providing these screening services. For example, many nonfederal responders did not enroll in the program because they did not live near an AOEC clinic, and the administration of the program required substantial coordination among AOEC, AOEC member clinics, and Mount Sinai. Mount Sinai’s subcontract with AOEC ended in July 2004, and from August 2004 until June 2005 NIOSH did not fund any organization to provide services to nonfederal responders outside the NYC metropolitan area. During this period, NIOSH focused on providing screening and monitoring services for nonfederal responders in the NYC metropolitan area. In June 2005, NIOSH began its second effort by awarding $776,000 to the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide both screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. In June 2006, NIOSH awarded an additional $788,000 to DCC to provide screening and monitoring services for these responders. NIOSH officials told us that they assigned DCC the task of providing screening and monitoring services to nonfederal responders outside the NYC metropolitan area because the task was consistent with DCC’s responsibilities for the NY/NJ WTC Consortium, which include data monitoring and coordination. DCC, however, had difficulty establishing a network of providers that could serve nonfederal responders residing throughout the country—ultimately contracting with only 10 clinics in seven states to provide screening and monitoring services. DCC officials said that as of June 2007 the 10 clinics were monitoring 180 responders. In early 2006, NIOSH began exploring how to establish a national program that would expand the network of providers offering screening and monitoring services, as well as treatment services, for nonfederal responders residing outside the NYC metropolitan area. According to NIOSH, expanding a network of providers to screen and monitor nonfederal responders nationwide has involved several challenges. These include establishing contracts with clinics that have the occupational health expertise to provide services nationwide, establishing patient data transfer systems that comply with applicable privacy laws, navigating the institutional review board process for a large provider network, and establishing payment systems with clinics participating in a national network of providers. On March 15, 2007, NIOSH issued a formal request for information from organizations that have an interest in and the capability of developing a national program for responders residing outside the NYC metropolitan area. In this request, NIOSH described the scope of a national program as offering screening, monitoring, and treatment services to about 3,000 nonfederal responders through a national network of occupational health facilities.
NIOSH also specified that the program’s facilities should be located within reasonable driving distance of responders and that participating facilities must provide copies of examination records to DCC. In May 2007, NIOSH approved a request from DCC to redirect about $125,000 from the June 2006 award to establish a contract with a company to provide screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. Subsequently, DCC contracted with QTC Management, Inc., one of the four organizations that had responded to NIOSH’s request for information. DCC’s contract with QTC does not include treatment services, and NIOSH officials are still exploring how to provide and pay for treatment services for nonfederal responders residing outside the NYC metropolitan area. QTC has a network of providers in all 50 states and the District of Columbia and can use internal medicine and occupational medicine doctors in its network to provide these screening and monitoring services. In addition, DCC and QTC have agreed that QTC will identify and subcontract with providers outside of its network to screen and monitor nonfederal responders who do not reside within 25 miles of a QTC provider. In June 2007, NIOSH awarded $800,600 to DCC for coordinating the provision of screening and monitoring examinations, and QTC will receive a portion of this award from DCC to provide about 1,000 screening and monitoring examinations through May 2008. According to a NIOSH official, QTC’s providers have begun conducting screening examinations, and by the end of August 2007, 18 nonfederal responders had completed screening examinations and 33 others had been scheduled. In fall 2006, NIOSH awarded and set aside funds totaling $51 million from its $75 million appropriation for four WTC health programs in the NYC metropolitan area to provide treatment services to responders enrolled in these programs. Of the $51 million, NIOSH awarded about $44 million for outpatient services to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program. NIOSH made the largest awards to the two programs from which almost all responders receive medical services, the FDNY WTC program and the NY/NJ WTC Consortium (see table 2). In July 2007 we reported that officials from the FDNY WTC program and the NY/NJ WTC Consortium expected that their awards for outpatient treatment would be spent by the end of fiscal year 2007. In addition to the $44 million it awarded for outpatient services, NIOSH set aside about $7 million for the FDNY WTC program and NY/NJ WTC Consortium to pay for responders’ WTC-related inpatient hospital care as needed. The FDNY WTC program and NY/NJ WTC Consortium used their awards from NIOSH to continue providing treatment services to responders and to expand the scope of available treatment services. Before NIOSH made its awards for treatment services, the treatment services provided by the two programs were supported by funding from private philanthropies and other organizations. According to officials of the NY/NJ WTC Consortium, this funding was sufficient to provide only outpatient care and partial coverage for prescription medications. The two programs used NIOSH’s awards to continue to provide outpatient services to responders, such as treatment for gastroesophageal reflux disease, upper and lower respiratory disorders, and mental health conditions. They also expanded the scope of their programs by offering responders full coverage for their prescription medications for the first time.
A NIOSH official told us that some of the commonly experienced WTC conditions, such as upper airway conditions, gastrointestinal disorders, and mental health disorders, are frequently treated with medications that can be costly and may be prescribed for an extended period of time. According to an FDNY WTC program official, prescription medications are now the largest component of the program’s treatment budget. The FDNY WTC program and NY/NJ WTC Consortium also expanded the scope of their programs by paying for inpatient hospital care for the first time, using funds from the $7 million that NIOSH had set aside for this purpose. According to a NIOSH official, NIOSH pays for hospitalizations that have been approved by the medical directors of the FDNY WTC program and NY/NJ WTC Consortium through awards to the programs from the funds NIOSH set aside for this purpose. By August 31, 2007, federal funds had been used to support 34 hospitalizations of responders, 28 of which were referred by the NY/NJ WTC Consortium’s Mount Sinai clinic, 5 by the FDNY WTC program, and 1 by the NY/NJ WTC Consortium’s CUNY Queens College program. Responders have received inpatient hospital care to treat, for example, asthma, pulmonary fibrosis, and severe cases of depression or PTSD. According to a NIOSH official, one responder is now a candidate for lung transplantation, and if this procedure is performed, it will be covered by federal funds. If funds set aside for hospital care are not completely used by the end of fiscal year 2007, he said, they could be carried over into fiscal year 2008 for this purpose or used for outpatient services. After receiving NIOSH’s funding for treatment services in fall 2006, the NY/NJ WTC Consortium ended its efforts to obtain reimbursement from health insurance held by responders with coverage. Consortium officials told us that efforts to bill insurance companies involved a heavy administrative burden and were frequently unsuccessful, in part because the insurance carriers typically denied coverage for work-related health conditions on the grounds that such conditions should be covered by state workers’ compensation programs. However, according to officials from the NY/NJ WTC Consortium, responders trying to obtain workers’ compensation coverage routinely experienced administrative hurdles and significant delays, some lasting several years. Moreover, according to these program officials, the majority of responders enrolled in the program had limited or no health insurance coverage. According to a labor official, responders who carried out cleanup services after the WTC attack often did not have health insurance, and responders who were construction workers often lost their health insurance when they became too ill to work the number of days each quarter or year required to maintain eligibility for insurance coverage. According to a NIOSH official, although the agency had not received authorization as of August 30, 2007, to use the $50 million emergency supplemental appropriation made to CDC in May 2007, NIOSH was formulating plans for use of these funds to support the WTC treatment programs in fiscal year 2008. Screening and monitoring the health of the people who responded to the September 11, 2001, attack on the World Trade Center are critical for identifying health effects already experienced by responders or those that may emerge in the future.
In addition, collecting and analyzing information produced by screening and monitoring responders can give health care providers information that could help them better diagnose and treat responders and others who experience similar health effects. While some groups of responders are eligible for screening and follow-up physical and mental health examinations through the federally funded WTC health programs, other groups of responders are not eligible for comparable services or may not always find these services available. Federal responders have been eligible only for the initial screening examination provided through the WTC Federal Responder Screening Program. NIOSH, the administrator of the program, has been considering expanding the program to include monitoring but has not done so. In addition, many responders who reside outside the NYC metropolitan area have not been able to obtain screening and monitoring services because available services are too distant. Moreover, HHS has repeatedly interrupted the programs it established for federal responders and nonfederal responders outside of NYC, resulting in periods when no services were available to them. HHS continues to fund and coordinate the WTC health programs and has key federal responsibility for ensuring the availability of services to responders. HHS and its agencies have recently taken steps to move toward providing screening and monitoring services to federal responders and to nonfederal responders living outside of the NYC area. However, these efforts are not complete, and the stop-and-start history of the department’s efforts to serve these groups does not provide assurance that the latest efforts to extend screening and monitoring services to these responders will be successful and will be sustained over time. Therefore, we recommended in July 2007 that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the attack on the WTC, regardless of who their employer was or where they reside. As of early September 2007, the department had not responded to this recommendation. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Helene F. Toiv, Assistant Director; Hernan Bozzolo; Frederick Caison; Anne Dievler; and Roseanne Price made key contributions to this statement.
September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders. GAO-07-892. Washington, D.C.: July 23, 2007.
September 11: HHS Has Screened Additional Federal Responders for World Trade Center Health Effects, but Plans for Awarding Funds for Treatment Are Incomplete. GAO-06-1092T. Washington, D.C.: September 8, 2006.
September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Program for Federal Responders Lags Behind. GAO-06-481T. Washington, D.C.: February 28, 2006.
September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Not for Federal Responders. GAO-05-1020T. Washington, D.C.: September 10, 2005.
September 11: Health Effects in the Aftermath of the World Trade Center Attack. GAO-04-1068T. Washington, D.C.: September 8, 2004.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Some of the statutory rulemaking requirements that Congress has enacted over the years apply to all agencies, while others apply only to certain agencies. Some of these requirements have been in place for more than 50 years, but most have been implemented within the past 20 years or so. The most long-standing and broadly applicable federal rulemaking requirements are in the Administrative Procedure Act (APA) of 1946. The APA provides for both formal and informal rulemaking. Formal rulemaking is used in ratemaking proceedings and in certain other cases when rules are required by statute to be made “on the record” after an opportunity for a trial-type agency hearing. Informal or “notice and comment” rulemaking is used much more frequently and is the focus of my comments here today. In informal rulemaking, the APA generally requires that agencies publish a notice of proposed rulemaking (NPRM) in the Federal Register. The notice must contain (1) a statement of the time, place, and nature of public rulemaking proceedings; (2) reference to the legal authority under which the rule is proposed; and (3) either the terms or substance of the proposed rule or a description of the subjects and issues involved. “Interested persons” must then be given an opportunity to comment on the proposed rule. The APA does not specify the length of this comment period, but agencies commonly allow at least 30 days. After considering the public comments, the agency may then publish the final rule in the Federal Register. According to the APA, a final rule cannot become effective until at least 30 days after its publication unless (1) the rule grants or recognizes an exemption or relieves a restriction, (2) the rule is an interpretative rule or statement of policy, or (3) the agency determines that the rule should take effect sooner for good cause and publishes that determination with the rule. The APA also states that the notice and comment procedures generally do not apply when an agency finds, for “good cause,” that those procedures are “impracticable, unnecessary, or contrary to the public interest.” When agencies use the good cause exception, the act requires that they explicitly say so and provide a rationale for the exception’s use when the rule is published in the Federal Register. Two procedures for noncontroversial and expedited rulemaking actions have been developed that are essentially applications of the good cause exception. “Direct final” rulemaking involves agency publication of a rule in the Federal Register with a statement that the rule will be effective on a particular date unless an adverse comment is received within a specified period of time (e.g., 30 days). If an adverse comment is filed, the direct final rule is withdrawn and the agency may publish the rule as a proposed rule. In “interim final” rulemaking, the agency issues, without an NPRM, a final rule that is generally effective immediately but provides a post-promulgation opportunity for the public to comment. If the public comments persuade the agency that changes are needed in the interim final rule, the agency may revise the rule by publishing a final rule reflecting those changes. In August 1998, we reported that about half of the 4,658 final regulatory actions published in the Federal Register during 1997 were issued without NPRMs.
Although most of the final actions without NPRMs appeared to involve administrative or technical issues with limited applicability, some were significant actions, and 11 were “economically significant” (e.g., had at least a $100 million impact on the economy). Some of the explanations that the agencies offered in the preambles to their rules for using the good cause exception were not clear. For example, in several cases, the preambles said that an NPRM was “impracticable” because of statutory or other deadlines that had already passed by the time the rules were issued. In other cases, the agencies asserted in the preambles that notice and comment would delay rules that were, in some general way, in the “public interest.” For example, in one such case, the agency said it was using the good cause exception because the rule would “facilitate tourist and business travel to and from Slovenia,” and therefore delaying the rule to allow for public comments “would be contrary to the public interest.” In another case, the agency said that soliciting public comments on the rule was “contrary to the public interest” because the rule authorized a “new and creative method of financing the development of public housing.” The APA recognizes that NPRMs are not always practical, necessary, or in the public interest. However, when agencies publish final rules without NPRMs, the public’s ability to participate in the rulemaking process is limited. Also, several of the regulatory reform requirements that Congress has enacted during the past 20 years use as their trigger the publication of an NPRM. Therefore, it is important that agencies clearly explain why notice and comment procedures are not followed. We recommended in our report that OMB notify executive departments and agencies that (1) their explanations in the preambles to their rules should clearly explain why notice and comment was impracticable, unnecessary, or not in the public interest, and (2) OMB would, as part of its review of significant final rules, focus on those explanations. Another statutory requirement that is applicable to both independent and non-independent regulatory agencies is the Paperwork Reduction Act (PRA), which was originally enacted in 1980 but was amended and recodified in 1995. The original PRA established the Office of Information and Regulatory Affairs (OIRA) within OMB to provide central agency leadership and oversight of governmentwide efforts to reduce unnecessary paperwork and improve the management of information resources. Under the act, agencies must receive OIRA approval for each information collection request before it is implemented. The act generally defines a “collection of information” as the obtaining or disclosure of facts or opinions by or for an agency by 10 or more non-federal persons. Many information collections, recordkeeping requirements, and third-party disclosures are contained in or are authorized by regulations as monitoring or enforcement tools, while others appear in separate written questionnaires. Under the PRA, agencies must generally provide the public with an opportunity to comment on a proposed information collection by publishing a 60-day notice in the Federal Register.
For each proposed collection of information submitted to OIRA, the responsible agency must certify and provide a record of support that the collection, among other things, is necessary for the proper performance of the functions of the agency, is not unnecessarily duplicative of other information, reduces burden on the public to the extent practicable and appropriate, and is written in plain and unambiguous terminology. The agency must also publish a notice in the Federal Register stating that the agency has submitted the proposed collection to OIRA and setting forth, among other things, (1) a description of the need and proposed use of the information, (2) a description of the likely respondents and their proposed frequency of response, and (3) an estimate of the resultant burden. For any proposed information collection that is not contained in a proposed rule, OIRA must complete its review of an agency information collection request within 60 days of the date that the proposed collection is submitted. OIRA approvals can last for up to 3 years, and agencies can renew them by resubmitting their information collection requests to OIRA. Agency information collections that have not been approved by OIRA or for which approvals have expired are considered violations of the PRA, and those individuals and organizations subject to these collections’ requirements cannot be penalized for failing to provide the information requested. The PRA also requires OIRA to set governmentwide and agency-specific burden reduction goals. The act envisioned a 35-percent reduction in governmentwide paperwork burden by the end of fiscal year 2000. However, earlier this year we testified that governmentwide paperwork burden has gone up, not down, since 1995. Federal agencies often indicate that they cannot reduce their paperwork burden because of existing and new statutory requirements that they collect more information. Nevertheless, some agencies do appear to be making progress. For example, the Department of Labor’s paperwork estimate dropped from more than 266 million burden hours at the end of fiscal year 1995 to about 182 million burden hours at the end of fiscal year 2000—a 32 percent decrease (a quick check of this arithmetic appears below). The Regulatory Flexibility Act (RFA), enacted in 1980 in response to concerns about the effect that federal regulations can have on small entities, is another example of a broadly based rulemaking requirement. Under the RFA, independent and non-independent regulatory agencies must prepare an initial regulatory flexibility analysis at the time proposed rules are issued unless the head of the issuing agency determines that the proposed rule would not have a “significant economic impact upon a substantial number of small entities.” The regulatory flexibility analysis must include a description of, among other things, (1) the reasons why the regulatory action is being considered; (2) the small entities to which the proposed rule will apply and, where feasible, an estimate of their number; (3) the projected reporting, recordkeeping, and other compliance requirements of the proposed rule; and (4) any significant alternatives to the proposed rule that accomplish the statutory objectives and minimize any significant economic impact on small entities. The RFA also requires agencies to ensure that small entities have an opportunity to participate in the rulemaking process, and requires the Chief Counsel of the Small Business Administration’s (SBA) Office of Advocacy to monitor agencies’ compliance with the act.
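As promised above, a quick check of the Department of Labor burden-hour decrease; both totals are rounded, so the result is approximate:

```latex
\frac{266 - 182}{266} \approx 0.316, \quad \text{i.e., roughly a 32 percent decrease}
```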
Section 610 of the RFA requires agencies to review, within 10 years of their promulgation, those rules that have or will have a significant impact on small entities to determine whether they should be continued without change or should be amended or rescinded to minimize their impact on small entities. We have reported on the implementation of the RFA on several occasions in the past, and a recurring theme in our reports is the varying interpretation of the RFA’s requirements by federal agencies. For example, in 1991, we reported that each of the four federal agencies that we reviewed had a different interpretation of key RFA provisions. The report pointed out that the RFA provided neither a mechanism to enforce compliance with the act nor guidance on implementing it. We recommended that Congress consider amending the RFA to require that SBA develop criteria for whether and how federal agencies should conduct RFA analyses. In 1994 we examined the 12 SBA annual reports on agencies’ RFA compliance that had been issued since 1980. The reports indicated that agencies’ compliance with the RFA varied widely from one agency to another, and that some agencies’ compliance varied over time. We noted that the RFA does not expressly authorize SBA to interpret key provisions of the statute, and does not require SBA to develop criteria for agencies to follow in reviewing their rules. As a result, different rulemaking agencies were interpreting the statute differently. We said that if Congress wanted to strengthen the implementation of the RFA it should consider amending the act to provide SBA with clearer authority and responsibility to interpret the RFA’s provisions and require SBA to develop criteria on whether and how agencies should conduct RFA analyses. We essentially repeated this recommendation in our 1999 report on the review requirements in section 610 of the RFA, which found that the agencies we reviewed differed in their interpretation of those review requirements. We said that if Congress was concerned about these varying interpretations it might wish to consider clarifying those provisions. Last year we reported on the implementation of the RFA at EPA and concluded that, although the agency had established a high threshold for what constitutes a significant economic impact, the agency’s determinations were within the broad discretion that the statute allowed. We again said that Congress could take action to clarify the act’s requirements and help prevent concerns about how agencies are implementing the act. Earlier this year we testified on the need for congressional action in this area, noting that the promise of the RFA may never be realized until Congress or some other entity defines what a “significant economic impact” and a “substantial number of small entities” mean in a rulemaking setting. To date, Congress has not acted on our recommendations. The RFA was amended in 1996 by the Small Business Regulatory Enforcement Fairness Act (SBREFA) to, among other things, make certain agency actions under the act judicially reviewable. For example, a small entity that is adversely affected or aggrieved by an agency’s determination that its final rule would not have a significant impact on small entities could generally seek judicial review of that determination within 1 year of the date of the final agency action. In granting relief, a court may remand the rule to the agency or defer enforcement of the rule against small entities.
SBA’s Office of Advocacy noted in a report marking the 20th anniversary of the RFA that the addition of judicial review has been an incentive for agencies to comply with the act’s requirements, and that small entities are not hesitant to initiate court challenges in appropriate cases. Another provision of SBREFA requires OSHA and the Environmental Protection Agency (EPA) to convene advocacy review panels before publishing an initial regulatory flexibility analysis. Specifically, the agency issuing the regulation (OSHA or EPA) must notify the SBA Chief Counsel for Advocacy and provide information on the draft rule’s potential impacts on small entities and the type of small entities that might be affected. The Chief Counsel then must identify representatives of affected small entities within 15 days of the notification. SBREFA requires the panel to consist of full-time federal employees from the rulemaking agency, OIRA, and SBA’s Chief Counsel for Advocacy. During the advocacy review panel process, the panel must collect the advice and recommendations of representatives of affected small entities about the potential impact of the draft rule. SBREFA also states that the panel must report on the comments received and on the panel’s recommendations no later than 60 days after the panel is convened, and the panel’s report must be made public as part of the rulemaking record. In 1998 we reported on how the first five advocacy review panels were implemented, including OSHA’s panel on occupational exposure to tuberculosis. Agency officials and small entity representatives generally agreed that the panel process was worthwhile, providing valuable insights and opportunities for participation in the rulemaking process. However, some of the small entity representatives believed that the panels should be held earlier in the process, that the materials provided to them and the amount of time provided for their review could be improved, and that the agencies should improve the means by which they obtain comments. We noted that the trigger for the panel process is an agency’s initial determination that a rule may have a significant economic impact on a substantial number of small entities, and again recommended that Congress give some entity clear authority and responsibility to interpret the RFA’s provisions. The Unfunded Mandates Reform Act of 1995 (UMRA) is an example of a statutory requirement that appears to have had little substantive effect on agency rulemaking. For example, title II of UMRA generally requires covered federal agencies to prepare written statements containing specific information for any rule for which a proposed rule was published that includes a federal mandate that may result in the expenditure of $100 million or more in any 1 year by state, local, and tribal governments, in the aggregate, or by the private sector. The statute defined a “federal mandate” as not including conditions imposed as part of a voluntary federal program or as a condition of federal assistance. We examined the implementation of title II of UMRA during its first 2 years and concluded that it appeared to have only limited direct impact on agencies’ rulemaking actions. Most of the economically significant rules promulgated during that period were not subject to the act’s requirements for a variety of reasons (e.g., no proposed rule, or the mandates were a condition of federal assistance or part of a voluntary program). 
There were only two rules without a UMRA written statement that we believed should have had one (EPA’s proposed national ambient air quality standards for ozone and particulate matter), but even in those rules we believed that the agency had satisfied the substantive UMRA written statement requirements. Also, title II contains exemptions that allowed agencies not to take certain actions if they determined that they were duplicative or not “reasonably feasible.” The title also required agencies to take certain actions that they already were required to take or had completed or that were already under way. Another crosscutting rulemaking requirement of note is the National Environmental Policy Act of 1969 (NEPA). NEPA requires federal agencies to include in every recommendation or report related to “major Federal actions significantly affecting the quality of the human environment” a detailed statement on the environmental impact of the proposed action. According to the act and its implementing regulations developed by the Council on Environmental Quality, the statement must delineate the direct, indirect, and cumulative effects of the proposed action. Agencies are also required to include in the statement (1) any adverse environmental effects that cannot be avoided should the proposal be implemented, (2) alternatives to the proposed action, (3) the relationship between local short-term uses of the environment and the maintenance and enhancement of long-term productivity, and (4) any irreversible and irretrievable commitments of resources that would be involved if the proposed action should be implemented. Before developing any such environmental impact statement, NEPA requires the responsible federal official to consult with, and obtain the comments of, any federal agency that has jurisdiction by law or special expertise with respect to any environmental impact involved. Agencies must make copies of the statement and the comments and views of appropriate federal, state, and local agencies available to the president, the Council on Environmental Quality, and the public. The adequacy of an agency’s environmental impact statement is subject to judicial review. The crosscutting statutory requirements that I have just listed are by no means the only statutory requirements that guide agency rulemaking. Regulations generally start with an act of Congress and are the means by which statutes are implemented and specific requirements are established. The statutory basis for a regulation can vary in terms of its specificity, from very broad grants of authority that state only the general intent of the legislation to very specific requirements delineating exactly what regulatory agencies should do and how they should do it. In 1999, we issued a report examining this issue of regulatory discretion and found that in many of the cases we examined the statutes gave the agencies little or no discretion in establishing regulatory requirements that businesses viewed as burdensome. For example, we concluded that the Occupational Safety and Health Act gave OSHA no discretion in whether to hold companies (rather than individual employees) responsible for health and safety violations. Also, as other witnesses today will likely describe in detail, OSHA follows numerous procedural and consultative steps, which may or may not be statutorily driven, before issuing a rule.
For example, interested parties who comment on proposed OSHA rules may request a public hearing when none has been announced in the notice. When such a hearing is requested, OSHA says it will schedule one, and will publish in advance the time and place for it in the Federal Register. Therefore, federal agencies must be aware of the statutory requirements underlying their regulations, and must craft rules that are consistent with those requirements. Similarly, agency rulemaking is often significantly influenced by court decisions interpreting statutory requirements, and OSHA rulemaking is a good case in point. For example, in its 1980 “Benzene” decision, the Supreme Court ruled that, before promulgating new health standards, OSHA must demonstrate that the particular chemical to be regulated poses a “significant risk” under workplace conditions permitted by current regulations. The court also said that OSHA must demonstrate that the new limit OSHA proposes will substantially reduce that risk. This decision effectively requires OSHA to evaluate the risks associated with exposure to a chemical and to determine that these risks are “significant” before issuing a standard. Other court decisions have required OSHA rulemaking to demonstrate the technical and economic feasibility of its requirements. During the past 20 years, each president has issued executive orders and/or presidential directives designed to guide the federal rulemaking process, often with the goal of reducing regulatory burden. Although independent regulatory agencies are generally not covered by these requirements, they are often encouraged to follow them. One of the most important of the current set of executive orders governing the rulemaking process is Executive Order 12866, “Regulatory Planning and Review,” which was issued by President Clinton in September 1993. Under the order, non-independent regulatory agencies are required to submit their “significant” rules to OIRA before publishing them in the Federal Register at both the proposed and final rulemaking stages. OIRA must generally notify the agency of the results of its review of a proposed or final rule within 90 calendar days after the date the rule and related analyses are submitted. The agencies are required to submit the text of the draft regulatory action and an assessment of the potential costs and benefits of the action to OIRA. They are required to submit a detailed economic analysis for any regulatory actions that are “economically significant” (e.g., have annual effects on the economy of $100 million or more). According to the executive order, the analyses should include an assessment of the costs and benefits anticipated from the action as well as the costs and benefits of “potentially effective and reasonably feasible alternatives to the planned regulation.” The order also states that, in choosing among alternatives, an agency should select those approaches that maximize net benefits and “base its decisions on the best reasonably obtainable scientific, technical, economic, and other information concerning the need for, and consequences of, the intended regulation.” In January 1996, OMB issued “best practices” guidance on preparing cost-benefit analyses under the executive order.
The guidance gives agencies substantial flexibility regarding how the analyses should be prepared, but also indicates that the analyses should contain certain basic elements and should be “transparent”—disclosing how the study was conducted, what assumptions were used, and the implications of plausible alternative assumptions. At the request of Members of Congress, we have examined agencies’ economic analyses both in our reviews of selected federal rules issued by multiple agencies and in the context of particular regulatory actions. In one of our reviews, we reported that some of the 20 economic analyses from five agencies did not incorporate all of the best practices set forth in OMB’s guidance. Five of the analyses did not discuss alternatives to the proposed regulatory action, and, in many cases, it was not clear why the agencies used certain assumptions. Also, five of the analyses did not discuss uncertainty associated with the agencies’ estimates of benefits and/or costs, and did not document the agencies’ reasons for not doing so. We recommended that OMB’s best practices guidance be amended to provide that economic analyses should (1) address all of the best practices or state the agency’s reason for not doing so, (2) contain an executive summary, and (3) undergo an appropriate level of internal or external peer review by independent experts. To date, OMB has not acted on our recommendations. Executive Order 12866 also includes several other notable requirements. For example, section 5 of the order requires agencies to periodically review their existing significant regulations to determine whether they should be modified or eliminated. In March 1995, President Clinton reemphasized this requirement by directing each agency to conduct a page-by-page review of all existing regulations. In June 1995, the President announced that 16,000 pages had been eliminated from the Code of Federal Regulations. We reported on this review effort in October 1997, noting that the page elimination totals that four agencies reported did not take into account pages that had been added while the eliminations took place. We also said that about 50 percent of the actions taken appeared to have no effect on the burden felt by regulated entities, would have little effect, or could increase regulatory burden. Another part of the executive order requires agencies to prepare an agenda of all regulations under development or review and a plan describing in greater detail the most important regulatory actions that the agency expects to issue in proposed or final form in the next fiscal year or thereafter. The order also requires agencies to identify for the public in a complete, clear, and simple manner the substantive changes that are made to rules while under review at OIRA and, separately, the changes made at the suggestion or recommendation of OIRA. In January 1998 we reported on the implementation of this requirement, and concluded that the four agencies we reviewed had complete documentation available to the public of these changes for only about one-quarter of the 122 regulatory actions we examined. OSHA had complete documentation available for one of its three regulatory actions, but the information was contained in files separate from the public rulemaking docket to ensure that it did not become part of the official rulemaking record and, therefore, subject to litigation.
Executive Order 12612 on “Federalism,” issued by President Reagan in 1987, was similar to the RFA in that it gave federal agencies broad discretion to determine the applicability of its requirements. The executive order required the head of each federal agency to designate an official to be responsible for determining which proposed policies (including regulations) had “sufficient federalism implications” to warrant preparation of a federalism assessment. If the designated official determined that such an assessment was required, it had to accompany any proposed or final rule submitted to OMB for review. We examined the preambles of more than 11,000 final rules that federal agencies issued between April 1996 and December 1998 to determine how often they mentioned the executive order and how often the agencies indicated that they had prepared a federalism assessment. Our work indicated that Executive Order 12612 had relatively little visible effect on federal agencies’ rulemaking actions during this time frame. The preambles to only 5 of the more than 11,000 rules indicated that the agencies had conducted a federalism assessment. Most of the more than 11,000 rules were technical or administrative in nature, but 117 were economically significant. However, the agencies prepared a federalism assessment for only one of those 117 economically significant rules. The lack of assessments for these rules is particularly surprising given that the agencies had previously indicated that 37 of the rules would affect state and local governments, and said that 21 of them would preempt state and local laws in the event of a conflict. Federal agencies had broad discretion under Executive Order 12612 to determine whether a proposed policy had “sufficient” federalism implications to warrant the preparation of a federalism assessment. Some agencies have clearly used that discretion to establish an extremely high threshold. For example, in order for an EPA rule to require a federalism assessment, the agency’s guidance said that the rule must, among other things, have an “institutional” effect on the states (not just a financial effect), and affect all or most of the states in a direct, causal manner. Under these standards, an EPA regulation that has a substantial financial effect on all states, but does not affect the “institutional” role of the states, would not require a federalism assessment. Executive Order 12612 was revoked by President Clinton’s Executive Order 13132 on “Federalism,” which was issued August 4, 1999, and took effect on November 2, 1999. Like the old executive order, the new order provides agencies with substantial flexibility to determine which of their actions have “federalism implications” and, therefore, when they should prepare a “federalism summary impact statement.” Non-independent regulatory agencies are also covered by an array of other executive orders and presidential directives or memoranda. These executive requirements include Executive Order 13175, which requires consultation and coordination with Indian tribal governments.
Agencies submitting final rules to OIRA under Executive Order 12866 must certify that Executive Order 13175’s requirements were “met in a meaningful and timely manner.” Executive Order 12988 on civil justice reform generally requires agencies to review existing and new regulations to ensure that they comply with specific requirements (e.g., “eliminate drafting errors and ambiguity” and “provide a clear legal standard for affected conduct”) to improve regulatory drafting in order to minimize litigation. Executive Order 12630 on constitutionally protected property rights says each agency “shall be guided by” certain principles when formulating or implementing policies that have “takings” implications. For example, the order says that private property should be taken only for “real and substantial threats,” and “be no greater than is necessary.” Executive Order 12898 on environmental justice says (among other things) that each agency must develop a strategy that identifies and addresses disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations. It also says that agencies should identify rules that should be revised to meet the objectives of the order. Executive Order 13045 addresses the protection of children from environmental health risks and safety risks. The order says that for any substantive rulemaking action that is likely to result in an economically significant rule that concerns an environmental health risk or safety risk that may disproportionately affect children, the agency must provide OIRA (1) an evaluation of the environmental or safety effects on children and (2) an explanation of why the planned regulation is preferable to other potentially effective and reasonably feasible alternatives. Executive Order 12889 on the North American Free Trade Agreement generally requires agencies subject to the APA to provide at least a 75-day comment period for any “proposed Federal technical regulation or any Federal sanitary or phytosanitary measure of general application.” Agencies are also subject to various presidential memoranda or directives. For example, a March 4, 1995, presidential memorandum directed agencies to, among other things, focus their regulatory programs on results, not process, and expand their use of negotiated rulemaking. A June 1, 1998, presidential directive required agencies to use plain language in proposed and final rulemaking documents. One statutory requirement that I did not mention previously but that can clearly affect agency rulemaking is the Congressional Review Act (CRA), which was included as part of SBREFA in 1996. Under the CRA, before a final rule can become effective it must be filed with Congress and GAO. If OIRA considers the rule to be “major” (e.g., has a $100 million impact on the economy), the agency must delay its effective date until 60 days after the date of publication in the Federal Register or submission to Congress and GAO, whichever is later. Within 60 legislative or session days, a Member of Congress can introduce a resolution of disapproval that, if adopted by both Houses and signed by the president, can nullify the agency’s rule.
In general, a quality product is one that is delivered on time, performs as expected, and can be depended on to perform when needed, at an affordable cost. This applies whether the customer is an individual purchasing a simple consumer good, such as a television, a hospital purchasing medical imaging equipment to help doctors treat cancer patients, or DOD purchasing sophisticated weapons for its warfighters to use on the battlefield. For about 3 decades, DOD based its quality requirements on a military standard known as MIL-Q-9858A, and its quality assurance practices were oriented toward discovering defects through inspections. In 1994, the Secretary of Defense announced that commercial quality standards should replace MIL-Q-9858A. The intent was to remove military-unique requirements that could present barriers to DOD in accessing the commercial supplier base. Currently, responsibilities for quality policy and oversight fall under the Systems and Software Engineering organization, within the Office of the Secretary of Defense. Over the past 20 years, commercial companies have had to dramatically improve quality in response to increased competition. Many companies moved from inspection-oriented quality management practices—where problems are identified and corrected after a product is produced—to a process in which quality is designed into a product and manufacturing processes are brought under statistical control to reduce defects. Many companies have also adopted commercial quality standards, such as ISO 9001. This standard was developed by the International Organization for Standardization, a non-governmental organization established in 1947 to facilitate the international coordination and unification of industrial standards. Similar to DOD’s MIL-Q-9858A, ISO 9001 includes requirements for controlling a product’s design, development, and production, as well as processes for oversight and improvement. Some industries, such as the automotive and aerospace industries, also have standards specific to their sector based on ISO 9001. Because supplier parts account for a substantial amount of the material value of many companies’ products, companies may require their suppliers to adopt the same standards. In practice, DOD and its prime contractors both participate in activities that contribute to weapon system quality. DOD plays a large role in quality when it sets key performance parameters, which are the most important requirements DOD wants prime contractors to focus on during development. For example, if reliability is one of those key performance parameters, then prime contractors are expected to focus on it during weapon system design. Prime contractors employ quality assurance specialists and engineers to assess the quality and reliability of parts they receive from suppliers, as well as the overall weapon system. DOD has its own quality specialists within the Defense Contract Management Agency and the military services, such as the Navy’s Supervisor of Shipbuilding organization. DOD’s quality specialists oversee prime contractors’ design, manufacturing, and supplier management activities; oversee selected supplier manufacturing activities; and conduct final product inspections prior to acceptance. GAO previously reported on DOD quality practices in 1996. At that time, we reported that numerous weapon system programs had historically had quality problems in production because designs were incomplete.
The B-2 bomber program and the C-17 Airlifter program, for example, encountered major manufacturing problems because they went forward with unstable designs and relied on inspections to find defects once in production. Since 1996, GAO has recommended several times that DOD adopt a knowledge-based acquisition approach used by leading commercial companies to develop its weapon systems. Under this approach, high levels of knowledge are demonstrated at critical decision points in the product development process, which results in successful product development outcomes. Systems engineering is a key practice that companies use to build quality into new products. Companies translate customers’ broad requirements into detailed requirements and designs, including identifying requisite technological, software, engineering, and production capabilities. Systems engineering also involves performing verification activities, including testing, to confirm that the design satisfies requirements. Products born of a knowledge-based approach stand a significantly better chance of being delivered on time, within budget, and with the promised capabilities. Related GAO products, listed at the back of this report, provide detailed information about the knowledge-based approach. Although major defense contractors have adopted commercial quality standards in recent years, quality and reliability problems persist in DOD weapon systems. On the 11 weapon systems GAO reviewed, these problems have resulted in billions of dollars in cost overruns, years of schedule delays, and reduced weapon system availability. Prime contractors’ poor systems engineering practices related to requirements analysis, design, and testing were key contributors to these quality problems. We also found problems with manufacturing and supplier quality that contributed to problems with DOD weapon systems. Senior officials from the prime contractor companies we contacted said that they agreed with our assessment of the causes of the quality problems of the weapon system programs we reviewed and that disciplined processes help improve overall quality. Quality problems caused significant cost increases and/or schedule delays in the 11 weapon systems we reviewed. Figure 1 shows the types of problems we found and the resulting impacts. Appendix II contains detailed information about each of the programs’ quality problems. Quality problems occurred despite the fact that each of the prime contractors for these programs is certified to commercial quality standards and most provided us with quality plans that address systems engineering activities such as design, as well as manufacturing and supplier quality. However, quality problems in these areas point to a lack of discipline or an inconsistency in how prime contractors follow through on their quality plans and processes. GAO’s past work has identified systems engineering as a key practice for ensuring quality and achieving successful acquisition outcomes. Systems engineering is a sequence of activities that translates customer needs into specific capabilities and ultimately into a preferred design. These activities include requirements analysis, design, and testing in order to ensure that the product’s requirements are achievable and designable given available resources, such as technologies. In several of the DOD weapon programs we reviewed, poor systems engineering practices contributed to quality problems.
Examples of systems engineering problems can be found on the Expeditionary Fighting Vehicle, Advanced Threat Infrared Countermeasure/Common Missile Warning System, and Joint Air-to-Surface Standoff Missile programs. Design problems have hampered the development of the Marine Corps’ Expeditionary Fighting Vehicle. The system, built by General Dynamics, is an amphibious vehicle designed to transport troops from ships offshore to land at higher speeds and from farther distances than its predecessor. According to program officials, prime contractor design and engineering changes were not always passed to suppliers, resulting in supplier parts not fitting into assemblies because they were produced using earlier designs. Systems engineering problems have also contributed to poor vehicle reliability, even though reliability was a key performance parameter. Consequently, the prime contractor was only able to demonstrate 7.7 hours between mission failures, which was well short of the 17 hours it needed to demonstrate in pre-production testing. Subsequently, the vehicle’s development phase has been extended. Program officials estimate that this extension, which will primarily focus on improving reliability, will last an additional 4 years at an estimated cost of $750 million. For several other weapon systems, inadequate testing was another systems engineering problem. The Army’s Advanced Threat Infrared Countermeasure/Common Missile Warning System program, developed by BAE Systems, is designed to defend U.S. aircraft from advanced infrared-guided missiles. Reliability problems related to the Advanced Threat Infrared Countermeasure jam head forced the Army to initiate a major redesign of the jam head in fiscal year 2006, and fielding of the subsystem has been delayed until fiscal year 2010. According to a prime contractor official, the reliability problems were caused, at least in part, by inadequate reliability testing. Likewise, the Joint Air-to-Surface Standoff Missile program, developed by Lockheed Martin, has experienced a number of flight test failures that have underscored product reliability as a significant problem. Ground testing, which prime contractor officials said could have identified most of the failure modes observed in flight testing, did not occur initially. Prime contractor officials indicated that ground testing was not considered necessary because the program was a spin-off of a previous missile program and there was an urgent need for the new missile. As a result of the test failures, the program has initiated a reliability improvement effort that includes ground and flight testing. A program official reported that the cost of reliability improvements for fiscal years 2006 and 2007 totaled $39.4 million. GAO’s past work addresses the importance of capturing manufacturing knowledge in a timely manner as a means for ensuring that an organization can produce a product within quality targets. Prime contractor activities to capture manufacturing knowledge should include identifying critical characteristics of the product’s design and then the critical manufacturing processes to achieve these characteristics. Once done, those processes should be proven to be in control prior to production. This would include making work instructions available, preventing and removing foreign object debris in the production process, and establishing criteria for workmanship.
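To illustrate the “in control” criterion discussed above, the sketch below runs a simplified statistical process control check: it derives 3-sigma control limits from baseline measurements of a critical characteristic and flags later measurements that fall outside them. The data and function names are hypothetical, and real SPC work would use X-bar/R or individuals/moving-range charts with the appropriate control-chart constants and far more data.

```python
# Simplified 3-sigma control check (hypothetical data; see caveats above).
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, center, upper) 3-sigma control limits."""
    center = mean(samples)
    sigma = stdev(samples)  # sample standard deviation as a rough estimate
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(samples, new_points):
    """Return the new measurements that fall outside the control limits."""
    lo, _, hi = control_limits(samples)
    return [x for x in new_points if x < lo or x > hi]

# Hypothetical baseline measurements of a part dimension, in millimeters.
baseline = [10.02, 9.98, 10.01, 9.99, 10.03, 9.97, 10.00, 10.02, 9.98, 10.01]
print(control_limits(baseline))                  # roughly (9.94, 10.00, 10.06)
print(out_of_control(baseline, [10.04, 10.12]))  # [10.12] is flagged
```

A process is considered in control only when its measurements vary within such limits from common causes alone; points outside the limits signal special causes that should be resolved before production proceeds.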
However, prime contractors' lack of controlled manufacturing processes caused quality problems on several DOD weapon programs, including the F-22A and LPD 17 programs. The F-22A, a fighter aircraft with air-to-ground attack capability being built by Lockheed Martin, entered production with less than 50 percent of its critical manufacturing processes in control. In 2000, citing budgetary constraints and specific hardware quality problems that demanded attention, the Air Force abandoned its efforts to get manufacturing processes in control prior to the start of production. Subsequently, the contractor experienced a scrap, rework, and repair rate of about 30 percent on early-production aircraft. The contractor also experienced major problems with the aircraft canopy. According to program officials, the aircraft uses a first-of-a-kind canopy with an external metallic stealth layer. The contractor did not bring its canopy manufacturing processes in control, and the canopy cracked near the mounting holes. This problem was discovered in March 2000 and temporarily grounded the flight test aircraft. In addition, in 2006 a pilot was trapped in an F-22A for 5 hours when a defective actuator prevented him from opening the canopy. According to the Air Force, when production began in 2001, the prime contractor should have been able to demonstrate that the F-22A could achieve almost 2 flying hours between maintenance actions. However, at that time, the contractor could demonstrate only about 40 minutes. Six years later, the contractor had increased the mean time between maintenance to 97 minutes, still short of the Air Force's current 3-hour requirement. The program has now budgeted an additional $400 million to improve the aircraft's reliability and maintainability. Northrop Grumman, the prime contractor for the LPD 17, the first ship of a new class of amphibious transport dock ships, delivered the ship to the Navy in 2005 with many quality problems resulting from poor manufacturing practices. For example, the program experienced problems with non-skid coating applications because the company did not keep the ship's surface free from dirt and debris when applying the coating, which caused it to peel. As of late 2007, the problem had not been fixed. In addition, the ship encountered problems with faulty welds on piping used in some of the ship's hydraulic applications. According to the prime contractor, it could not verify that the welds had been done properly, which required additional rework to correct the problems and reinspection of all the welds. Had the problem gone undiscovered and a weld failed, the crew and the ship could have been endangered. These problems, as well as many others, contributed to a 3-year delay and a cost increase of $846 million in delivering the ship to the Navy. In June 2007, the Secretary of the Navy sent a letter to the Chairman of the Board of Northrop Grumman expressing his concerns about the contractor's ability to construct and deliver ships that meet Navy quality standards and to meet agreed-to cost and schedule commitments. Management of supplier quality is another problem area for DOD weapon systems. Supplier quality is particularly important because more than half of the cost of a weapon system can be attributed to material the prime contractor receives from its supplier base.
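Supplier quality in the examples and supplier ratings that follow is quoted either as a conformance percentage or, at some companies, as a rate of defective parts per million (ppm). The two are related by a simple conversion, shown here as a worked illustration rather than as any company's actual metric:

\[
\text{defective ppm} = (1 - \text{conformance rate}) \times 10^{6}, \qquad \text{so } 99\ \text{percent conformance} \approx 10{,}000\ \text{defective ppm}.
\]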
While DOD prime contractors told us that they manage and control the quality of parts and material they receive from their suppliers with the help of performance reviews and process audits, we found supplier quality problems on seven of the weapon systems we reviewed. Two examples are the Wideband Global SATCOM and Patriot Advanced Capability-3 programs. Boeing Integrated Defense Systems, the prime contractor for the Air Force and Army's Wideband Global SATCOM communications satellite, discovered that one of its suppliers had installed certain fasteners incorrectly. As a result, 1,500 fasteners on each of the first three satellites had to be inspected or tested, and 148 fasteners on the first satellite had to be reworked. The DOD program office reported that the resulting 15-month schedule slip would add rework and workforce costs to the program and delay initial operating capability by 18 months. A prime contractor official estimated the cost to fix the problem at about $10 million. In 2006, a supplier for the Patriot Advanced Capability-3 program, a long-range system that provides air and missile defense for ground combat forces, accepted non-conforming hardware for a component of the missile's seeker. The seeker contractor had to re-inspect components, and some were returned for rework. As a result of this and other problems involving poor workmanship and inadequate manufacturing controls, the supplier facility was shut down for 7 months, delaying delivery of about 100 missiles. We met with senior quality officials at the prime contractor companies included in this review to discuss the problems we found. For the most part, they agreed with our assessment and acknowledged that the discipline with which a company implements its processes is a key contributor to quality outcomes. The officials discussed the importance of quality and how they are attempting to improve quality across their companies. These efforts include the use of Six Sigma, a tool for measuring defects and improving quality, as well as independent program reviews and improved design processes. The senior quality officials also identified factors they believe affect the quality of DOD weapon systems, including insufficient attention to reliability by DOD during development and prime contractors' lack of understanding of weapon system requirements, including those for testing. While there are similarities between the quality management practices of DOD prime contractors and the leading commercial companies in our review, the discipline with which leading companies implement their practices contributes to the high quality of their products. According to company officials we contacted, reliability is a paramount concern because their customers demand products that work, and the companies must develop and produce high-quality products to sustain their competitive position in the marketplace. Leading commercial companies use disciplined, well-defined, and institutionalized practices for (1) systems engineering, to ensure that a product's requirements are achievable with available resources, such as technologies; (2) manufacturing, to ensure that a product, once designed, can be produced consistently with high quality and low variability; and (3) supplier quality, to ensure that their suppliers are capable of delivering high-quality parts.
These practices, which were part of the companies' larger product development processes, along with other tools such as Six Sigma, provided an important foundation for producing quality products and continually improving performance. Several of the companies we met with discussed how they use systems engineering as a key practice for achieving quality outcomes. As part of Siemens Medical Solutions' standard product development process, the company validates that product requirements are sufficiently clear, precise, measurable, and comprehensive. The company ensures that requirements address quality, including requirements for reliability and readiness, before committing to develop and build a new product. Officials with Boeing Commercial Airplanes say they have shifted to a more proactive approach to quality, which includes a focus on "mistake-proofing" designs so that parts can be assembled only one way. To help assess the producibility of critical parts designs, the company has also developed a tool that rates different attributes of a design, including clarity of engineering requirements, consequences of defects on performance or manufacturability, and verification complexity. Company officials say they use the tool's ratings to modify designs so that parts will be less prone to manufacturing and assembly error, and that its use has resulted in lower costs for scrap, rework, and repair and fewer quality problems. Space Systems/Loral also relies on well-defined and disciplined processes to develop and produce satellites. Because the company's customers expect satellites to perform for up to 15 years, product reliability is paramount, and company officials say that using systems engineering to design reliability into a satellite is essential. As part of its systems engineering activities, the company performs reliability assessments to verify that satellite components and subsystems will meet reliability requirements and to identify potential hardware problems early in the design cycle. Space Systems/Loral officials also discussed testing and its importance to product development. For significant new product developments, Space Systems/Loral employs highly accelerated life testing to find weak links in a design and correct them to make the product more robust before going into production. As a result of the company's disciplined quality management practices, new satellite components—such as lithium-ion batteries, stationary plasma thrusters, and a satellite control system—have over 80 million hours of operation in orbit with only one component failure, according to company data. Several company officials discussed the importance of having controlled manufacturing processes and described several approaches to reduce variability and the likelihood of defects. These approaches greatly increase the likelihood that a product, once designed, can be produced consistently with high quality and low variability. In this way, they reduce waste and increase a product's reliability in the field. Early in its product development process, Cummins, a manufacturer of diesel and natural gas-powered engines, establishes a capability growth plan for manufacturing processes. This increases the probability that the manufacturing process will consistently produce parts that meet specifications. Prior to beginning production, Cummins completes what it calls "alpha" and "beta" builds, which are prototypes intended to validate the product's design and production processes.
Cummins officials noted that these activities allow them to catch problems earlier in development, when problems are less costly to fix. Officials from Kenworth, a manufacturer of heavy- and medium-duty trucks, described several initiatives the company uses to improve manufacturing process controls. For example, the company has a new electronic system for process documents. Workers on the manufacturing floor used to rely on paper installation instructions, and sometimes used outdated ones. Kenworth officials say that converting to an electronic system ensures that all workers use the most current process configuration and reduces rework. For a selected number of processes, Kenworth has also developed documents that include pictures as well as engineering specifications to ensure that workers follow the correct processes, and it performs audits to assess whether workers are properly trained and know where to go if they have questions about a process. At several of the companies we visited, officials reported that supplier parts accounted for a substantial amount of the overall product value. Companies we met with systematically manage and oversee their supply chains through such activities as regular supplier audits and performance evaluations of quality and delivery, among other things. Several officials noted that their supplier oversight focuses on first-tier suppliers, with limited interaction with and oversight of lower-tier suppliers. However, Kenworth officials said they hold their first-tier suppliers accountable for quality problems attributable to lower-tier suppliers. Leading commercial companies we met with set high expectations for supplier quality. Boeing Commercial Airplanes categorizes its suppliers by rates of defective parts per million. To achieve the highest rating level, a supplier must exhibit more than 99 percent part conformance, and company officials said they have been raising their supplier quality expectations over time. The company has taken steps to reduce the number of direct suppliers and retain higher-performing suppliers in the supply base. Similarly, suppliers of major components for Siemens Medical Solutions' ultrasound systems must provide conforming products 98 percent of the time, and the company will levy financial penalties against suppliers that do not meet this standard. Other companies likewise impose financial penalties on suppliers that provide nonconforming parts. Several company officials discussed how a focus on improving product development processes and product quality served as the foundation for their systems engineering, manufacturing, and supplier quality practices. Officials with Space Systems/Loral discussed how the company adopted a more disciplined product development process following quality problems in the 1990s with some of its satellites. This included creating companywide product development processes, adopting a formal program that institutionalized an iterative development process, and implementing strict documentation requirements and pass/fail criteria. The company also established an oversight organization to ensure that processes are followed. As a result, the first-year failure rate for Space Systems/Loral's satellites decreased by approximately 50 percent from 2000 through 2006. Likewise, Cummins officials told us that quality problems following the initial release of their ISX engine were a major factor in the implementation of their current product development process.
This process includes review gates to ensure process compliance and management reviews that use knowledge-based approaches for evaluating projects. Cummins and Kenworth also use tools such as Six Sigma to define, measure, analyze, control, and continually improve their processes. For example, Cummins applies Six Sigma to its technology development, design, and production activities. The company also expects its critical suppliers to implement Six Sigma programs to improve quality and customer satisfaction. As a result of implementing initiatives such as Six Sigma, Cummins officials reported that the company's warranty costs have declined substantially in the last several years. Kenworth also uses Six Sigma to drive efficiencies into the organization's work processes, particularly in the design phase of new product development and in controlling manufacturing processes. Kenworth requires its first-tier suppliers to participate in a Six Sigma program. Company officials estimated that Six Sigma projects saved the company's Chillicothe, Ohio, facility several million dollars in 2006. In addition, each of the commercial companies we met with collected and used data to measure and evaluate their processes and products. This helps them gauge the quality of their products and identify areas that need improvement. For example, Cummins tracks warranty costs as a measure of product quality, while Siemens Medical Solutions measures manufacturing process yields for its ultrasound systems. The quality problems in our case studies, and the practices that relate to them—whether systems engineering, manufacturing, or supplier quality practices—are strongly influenced by, and often the result of, larger environmental factors. DOD's acquisition environment does not consistently incentivize prime contractors to efficiently build high-quality weapon systems—ones that perform as expected, can be depended on to perform when needed, and are delivered on time and within cost estimates. During systems development, DOD usually pays for a contractor's best efforts, which can include efforts to achieve overly optimistic requirements. In such an environment, the pursuit of overly optimistic requirements, combined with a lack of oversight of the development process, contributes to quality problems. In contrast, the commercial companies we visited operate in an environment that requires them to invest significant funds of their own to develop new products before they are able to sell them and recoup that investment. This environment, in which the companies bear development costs themselves, creates incentives for reasonable requirements and best practices, as well as continuous improvement in systems engineering, manufacturing, and supplier quality. DOD uses cost-reimbursement contracts with prime contractors for the development of its weapon systems. In this type of contract arrangement, DOD accepts most of the financial risks associated with development because of technical uncertainties. Because DOD often sets overly optimistic requirements for new weapon systems that require new and unproven technologies, development cycles can take up to 15 years. The financial risk tied to achieving these requirements during development is borne not by the contractor but by the government. This environment provides little incentive for contractors to use the best systems engineering, manufacturing, and supplier quality practices discussed earlier in this report to ensure manageable requirements, stable designs, and controlled manufacturing processes that hold costs down.
Finally, DOD’s quality organizations, which collect information about prime contractors’ quality systems and problems, provide limited oversight of prime contractor activities and do not aggregate quality data in a manner that helps decision makers assess or identify systemic quality problems. DOD’s ability to obtain a high-quality weapon system is adversely impacted by an environment where it both (1) assumes most of the financial risks associated with technical or cost uncertainties for the systems development and (2) sets requirements without adequate systems engineering knowledge. Without requirements that have been thoroughly analyzed for feasibility, development costs are impossible to estimate and are likely to grow out of control. DOD typically assumes most of the financial risk associated with a new weapon system’s development by establishing cost reimbursement contracts with prime contractors. In essence, this means that prime contractors are asked to give their best effort to complete the contract and DOD pays for allowable costs, which often includes fixing quality problems experienced as part of the effort. As stated earlier, these problems can cost millions of dollars to fix. For example, DOD as the customer for the Expeditionary Fighting Vehicle signed a cost reimbursement contract with the prime contractor, General Dynamics, to develop a new weapon system that would meet performance and reliability requirements that had not yet been adequately informed by systems engineering analysis. Once General Dynamics performed a detailed requirements analysis, it informed DOD that more resources would be needed to meet the key reliability requirement established earlier. DOD decided not to invest the additional money at that time. However, when the vehicle was unable to meet its reliability goal prior to moving into production, DOD eventually decided to invest an additional $750 million into its development program to meet the reliability goal. Often DOD enters into contracts with prime contractors before requirements for the weapon systems have been properly analyzed. For example, in March 2007 we reported that only 16 percent of the 62 DOD weapon system programs we reviewed had mature technologies to meet requirements at the start of development. The prime contractors on these programs ignored best systems engineering practices and relied on immature technologies that carry significant unknowns about whether they are ready for integration into a product. The situation is exacerbated when DOD adds or changes requirements to reflect evolving threats. Prime contractors must then spend time and resources redesigning the weapon system, flowing down the design changes to its suppliers, and developing new manufacturing plans. In some cases, special manufacturing tools the prime contractor thought it was going to use might have to be scrapped and new tooling procured. Lack of detailed requirements analysis, for example, caused significant problems for the Advanced Threat Infrared Countermeasure/Common Missile Warning System program. Prior to 1995, the services managed portions of the program separately. Then, in 1995, DOD combined the efforts and quickly put a developer on contract. This decision resulted in significant requirements growth and presented major design and manufacturing difficulties for the prime contractor. It took over a year to determine that the tactical fixed-wing aircraft requirements were incorrect. 
The extent of the shortfall, however, did not become evident until the critical design review, and numerous changes were required in the contract statement of work. More than 4 years after the system's critical design review, the sensor units were still being built in prototype shops, with engineers only then trying to identify critical manufacturing processes. Further, sensor manufacturing was slowed by significant rework and at one point was halted while the contractor addressed configuration control problems. The Navy and Air Force, which required the system for fixed-wing aircraft, dropped out of the program in 2000 and 2001, respectively. Ultimately, quality is defined in large part by reliability. But in DOD's environment, reliability is not usually emphasized when a program begins, which forces the department to fund more costly redesign or retrofit activities when reliability problems surface later in development or after a system is fielded. The F-22A program illustrates this point. Because DOD as the customer assumed most of the financial risk on the program, it decided to focus system development resources primarily on requirements other than reliability, leading to costly quality problems. After 7 years in production, the Air Force had to budget an additional, unplanned $400 million for the F-22A to address numerous quality problems and help the system achieve its baseline reliability requirements. DOD oversight of prime contractor activities varies and has decreased as its quality assurance workforce has shrunk. Weapon system progress reviews at key decision points are a primary means for DOD to oversee prime contractor performance in building high-quality systems, but they are not used consistently across programs. The purpose of the reviews is to determine whether the program has demonstrated sufficient progress to advance to the next stage of product development or to enter production. The department has developed decision criteria for moving through each phase of development and production, and DOD's acquisition executive has the authority to prevent programs from progressing to later stages of development if requisite knowledge has not been attained. Unfortunately, most programs are allowed to advance without demonstrating sufficient knowledge. For example, in our recent review of 62 DOD weapon systems, we found that only 27 percent of the programs demonstrated that they had attained a stable design at the completion of the design phase. In addition, as a result of downsizing efforts over the past 15 years, DOD's oversight of prime contractor and major supplier manufacturing processes varies from system to system. DOD quality officials stated that they have had to scale back the amount of oversight they can provide, focusing only on the specific areas that weapon system program managers ask them to review. It is unclear what impact the reductions in quality assurance specialists and oversight have had on the department's ability to influence quality outcomes. However, in the case of the Advanced SEAL Delivery System, a lapse in effective management oversight by both the government and the contractor contributed to very late discovery of costly quality problems. DOD quality organizations such as the Defense Contract Management Agency do capture a significant amount of information electronically about the quality of DOD weapon systems through audits and corrective action reports.
They collect quality data on a program-by-program basis and share information about certain types of deficiencies and nonconforming parts they find. While the organizations are looking for additional opportunities to share information, they do not currently aggregate and consolidate the information in a manner that would allow the department to determine the overall quality of products it receives from prime contractors or to identify quality-related systemic problems or trends among its prime contractors. Commercial companies must develop and deliver high-quality, highly capable products to market on time or suffer financial loss. The companies face competition, and their customers can therefore choose a competitor's products when they are not satisfied. It is this environment that incentivizes manufacturers to implement and use best practices to improve quality and reduce cost while delivering on time. Commercial customers set achievable product requirements that they know will result in a reliable, high-quality, and desirable product their manufacturers can deliver on time. Manufacturers then get their key manufacturing processes in control to reduce inconsistencies in the product. Commercial customers understand the need to monitor and track manufacturer and supplier quality performance over time, both to determine which companies they want to do business with in the future and to identify problem areas that need to be corrected. The commercial customers we visited—American Airlines and Intelsat—expect to operate their products for 30 and 15 years, respectively. The companies focus a great deal of attention on setting performance and reliability goals that manufacturers like Boeing Commercial Airplanes and Space Systems/Loral must meet in order for them to purchase their products. This provides a strong, direct incentive for manufacturers and their customers to ensure that requirements are clear and achievable with available resources, including mature technologies, before the manufacturer will invest in a product's development. For example, Intelsat expects its satellites to be available at least 99.995 percent of the time—equivalent to no more than about 26 minutes of outage per year. To meet this goal, Intelsat expects its manufacturers to use mature technologies and parts whose reliability is already known. Several reasons drive this approach. The most obvious is that there is no way to fix mechanical problems once a satellite has been launched. Another is that the company must credit television networks, telephone companies, or cable companies for any loss of service. The company also insures its satellites for launch plus the first year of in-orbit service; a proven record of in-orbit performance and the use of reliable, flight-proven technology are two important factors that help the company get favorable terms from insurance underwriters. Finally, the company does not want to spend a large sum of money on a replacement satellite before the end of a satellite's design life, since doing so would negatively affect the company's financial performance. In the commercial environment, manufacturers are motivated to develop and provide high-quality products because their profit is tied to customer expectations and satisfaction. For example, American Airlines makes an initial payment to Boeing Commercial Airplanes when it places an order for new aircraft but will not make final payment until it is satisfied that its requirements have been met.
In another example, Cummins officials discussed how they were motivated to adopt more disciplined product development processes following the development effort for one of their highest-selling engines in the late 1990s. According to company officials, the design requirements were unstable from the start of development. They were changed and added to as development progressed, often without the benefit of timely and disciplined requirements analysis to ensure they could be met for the estimated investment cost. There were conflicting requirements (weight, size, performance, and fuel economy) that made development difficult. In addition, Cummins did not pay enough attention to reliability, focusing instead on weight and power considerations. As a result, development costs were higher than expected and, once the engine was sold, customers experienced less reliability than expected. A Cummins official reported that the company found itself in an "intolerable" position with customers who were becoming increasingly dissatisfied. This significant event, in which Cummins lost customer confidence, caused the company to examine its product development processes. The result was an improved product development process that requires a more cross-functional and data-based approach to new development programs. The improvements produced better analysis and understanding of customer requirements, which now informs resource allocations before new programs begin. Cummins had invested in both customer satisfaction and the development and support of its products, and protecting that investment motivated the company to adopt a more disciplined approach to developing high-quality products for its customers. Intelsat officials told us the company makes progress payments to its manufacturers throughout development and production. However, the company withholds about 10 to 20 percent of the contract value until after a satellite is successfully launched. According to company officials, that 10 to 20 percent is paid to the manufacturer over the expected life of the satellite, typically 15 years, provided the satellite performs as expected. The commercial companies also all capture information about their manufacturing processes and key suppliers' quality. Unlike DOD, however, they use the information when making purchasing decisions and determining how best to structure contracts to incentivize good quality outcomes. For example, in some cases Intelsat does not allow manufacturers to use certain suppliers whose parts do not meet specified reliability goals. In addition, Intelsat may include clauses in its contracts that require a manufacturer to conduct periodic inspections of particular suppliers. DOD has long recognized its acquisition problems and has initiated numerous improvement efforts over the years to address them. A recent set of initiatives is highlighted by the Under Secretary of Defense for Acquisition, Technology and Logistics in DOD's Defense Acquisition Transformation and Program Manager Empowerment and Accountability reports to Congress. Our analysis indicates that while none of the initiatives is aimed solely at improving the quality of DOD weapon systems or improving prime contractor quality practices, they could address some of the problems identified in this report, particularly those that would improve the DOD requirements-setting process and limit requirements growth during development. A brief description of each initiative follows.
Concept Decision Reviews: DOD is pilot-testing concept decision reviews to provide a better framework for strategic investment decisions. A Concept Decision Committee composed of senior DOD officials is applying the reviews to four pilot programs—the Joint Lightweight Tactical Mobility program, the Integrated Air and Missile Defense program, the Global Strike Raid Scenario, and the Joint Rapid Scenario Generation program. A key aspect of the pilot programs is the early involvement and participation of systems engineering prior to the concept decision. DOD expects this to give decision makers better insight for setting firm requirements early, assessing technology options, considering alternative acquisition strategies, ensuring that new technology will mature in time to meet development and delivery schedules, and delivering systems with predictable performance to the warfighter.

Time-Defined Acquisition: Under the time-defined acquisition initiative, DOD plans to use such criteria as technology maturity, time to delivery, and requirement certainty to select the appropriate acquisition approach for providing a needed capability. The department envisions using a different acquisition approach depending on whether a capability can be fielded in 2 years or less, in more than 2 but less than 4 years, or in more than 4 years. In September 2006, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that he anticipated the time-defined acquisition approach would facilitate better overall cost control and more effective use of total available resources.

Configuration Steering Boards: In July 2007, the Under Secretary of Defense for Acquisition, Technology and Logistics directed the establishment of Configuration Steering Boards for every current and future acquisition category I program in development. The boards, chaired by the service acquisition executive within each of the military services, are expected to review all requirements changes and significant technical configuration changes that have the potential to adversely affect program cost and schedule. Requirements changes are not to be approved unless funds are identified and schedule impacts are mitigated; indeed, the Under Secretary stated in his announcement of the initiative that such changes would usually be rejected.

Key Performance Parameters/Key System Attributes: DOD has added new guidelines and procedures for establishing weapon system requirements in its Joint Capabilities Integration and Development System manual. The manual now requires that materiel availability be included as a key performance parameter for new weapon system development and that materiel reliability and ownership costs be included as key system attributes. Together, these requirements are aimed at ensuring that weapon system sustainment considerations are fully assessed and addressed as part of the systems engineering process.

Award and Incentive Fees: DOD recently issued policy memorandums that reflect a change in policy on the proper use of award and incentive fees. The memorandums emphasize the need to structure award fee contracts in ways that focus DOD and contractor efforts on meeting or exceeding cost, schedule, and performance requirements. They state that award fees should be linked to desired outcomes and that payments should be commensurate with contractor performance.
The memorandums also provide guidelines for how much contractors should be paid for excellent, satisfactory, and less-than-satisfactory performance. While these initiatives are not directly linked together, they have the potential to help DOD implement some of the leading commercial practices we have highlighted in the past. In particular, they could help the Under Secretary of Defense for Acquisition, Technology and Logistics ensure that DOD has a better match between warfighter needs and funding at the start of weapon system development and that technology, engineering, and production knowledge is properly considered at that time. They can also help control requirements changes and requirements growth, which can adversely affect system quality during development. The initiatives are still new and, in the case of the concept decision reviews, small in scope; therefore, their effectiveness may not be known for some time. DOD has developed policies that address the need for setting achievable requirements, adopting commercial quality standards, using good systems engineering practices, and overseeing supplier quality. However, DOD still has difficulty acquiring high-quality weapon systems in a cost-efficient and timely manner. While many problems are caused by poor prime contractor practices related to systems engineering, manufacturing, and supplier quality, an underlying cause lies in the acquisition environment. DOD typically assumes most of the financial risk associated with the development of complex systems. The risks of this situation are exacerbated because DOD generally enters into development contracts without demonstrated knowledge or firm assurance that requirements are achievable, which too often results in inefficient programs and quality problems. DOD can learn from the way leading commercial companies deal with risk and ensure quality in their products. Because commercial companies invest their own money in product development and recoup that investment only when their customers buy the finished product, they put a new product's requirements to the test with disciplined systems engineering practices before they commit to a large development investment. If a highly valued requirement cannot be demonstrated as achievable through systems engineering, it is deferred to a subsequent product variation or to another program. Moreover, and very importantly, companies do not shortcut essential quality practices that ensure process controls and high supplier quality, including collecting and analyzing quality data. Like commercial companies, DOD must demand appropriate knowledge about requirements and make hard decisions about program risk before it initiates costly investments. Improvements in the way DOD uses existing tools to analyze requirements during development, along with the potential results of some of the initiatives it has under way, can help reduce quality risks and address some of the long-standing acquisition problems it faces. Although the initiatives are new and, in the case of the concept decision reviews, small in scope, they are a good first step toward setting more realistic requirements and time frames for weapon system development. Additional oversight could help ensure that prime contractors can meet requirements with given resources, such as funding and technologies, before DOD enters into a development contract.
In addition, continued leadership from the Under Secretary of Defense for Acquisition, Technology and Logistics and a combination of actions from both DOD and its prime contractors are needed to make these improvements and get the most from DOD's planned $1.5 trillion investment in new weapon programs. To ensure that the department is taking steps to improve the quality of weapon systems, we recommend that the Secretary of Defense take the following actions related to recent initiatives highlighted in DOD's Defense Acquisition Transformation and Program Manager Empowerment and Accountability reports to Congress to improve its focus on setting achievable requirements and on oversight:

As a part of the concept decision review initiative, have contractors perform more detailed systems engineering analysis to develop sound requirements before DOD selects a prime contractor for the systems development contract, which would help ensure that weapon system requirements, including those for reliability, are achievable with given resources.

Establish measures to gauge the success of the concept decision reviews, time-defined acquisition, and configuration steering board initiatives, and properly support and expand these initiatives where appropriate.

To better assess the quality of weapon system programs and prime contractor performance, DOD needs to obtain and analyze more comprehensive data regarding prime contractors and their key suppliers. Therefore, we also recommend that the Secretary of Defense direct the Defense Contract Management Agency and the military services to:

Identify and collect data that provide metrics on the effectiveness of prime contractors' quality management systems and processes, by weapon system and business area, over time; and

Develop evaluation criteria that would allow DOD to score the performance of prime contractors' quality management systems based on actual past performance, which could be used to improve quality and better inform DOD acquisition decision makers.

DOD provided us with written comments on a draft of this report and partially concurred with each of the recommendations. DOD's comments appear in appendix III. In its comments, DOD partially concurred with the draft recommendation that, as part of its concept decision review initiative, prime contractors complete systems engineering analysis prior to entering a development contract. The department stated that the recommendation was vague. DOD noted that it conducts systems engineering planning prior to entering into a development contract and that prime contractors conduct more detailed systems engineering analysis afterwards. Moreover, DOD noted that systems engineering is a continuous, government-performed activity at the heart of any structured development process that proceeds from concept to production. The concept decision review initiative, in particular, considers fundamental systems engineering issues such as technology, integration, and manufacturing risk before the concept decision review. To address DOD's concern that our recommendation was too vague, we modified it to add more detail. Specifically, as part of the concept decision review initiative, we recommend that contractors competing for the systems development contract provide DOD with more detailed systems engineering requirements analysis, to be completed before a systems development contract is awarded. This would help ensure that requirements are clear and reasonable before DOD enters into a development contract.
We understand that currently DOD conducts systems engineering planning prior to entering a development contract with prime contractors and that prime contractors conduct a more thorough systems engineering analysis afterwards. However, because our work has found that many DOD systems development efforts have been hampered by poorly defined or poorly understood requirements, we believe that DOD should test, through the concept decision initiative, paying contractors to complete a more thorough systems engineering analysis prior to entering into a development contract. This would give the department the benefit of more knowledge when finalizing requirements and provide an opportunity for DOD to set requirements that can be met in a well-defined time frame, which could reduce the department's risk exposure in the cost-reimbursement contracts used for development. In addition, it would better position DOD to hold the winning contractor more accountable for meeting the desired requirements within cost and schedule estimates. DOD also partially concurred with the recommendation to establish measures to gauge the success of the concept decision reviews, time-defined acquisition, and configuration steering board initiatives and to properly support and expand these initiatives where appropriate. In its response, DOD stated that changes to the concept decision review and time-defined acquisition initiatives are being considered and that any changes would be reflected in an update to DOD Instruction 5000.2. DOD also stated that the configuration steering board initiative is being implemented consistent with its policy. We are encouraged by the potential changes that could result from successful implementation of the concept decision reviews, time-defined acquisition, and configuration steering board initiatives. We believe that these three initiatives are aimed at addressing several of DOD's systemic problems that affect weapon system quality and that the department should not lose sight of them. While the initiatives are new and untested in practice, acquisition history tells us that these policy changes alone will not be sufficient to change outcomes. We have found that measures to gauge success can help facilitate the senior-level oversight needed to bring about significant change within an organization. We therefore believe this recommendation remains valid. DOD partially concurred with the recommendation for the Defense Contract Management Agency and military services to identify and collect data that provide metrics on the effectiveness of prime contractors' quality management systems and processes by weapon system and business area over time. In its response, DOD stated that the Defense Contract Management Agency is in the process of identifying, and will eventually collect, data that could be used to determine the effectiveness of prime contractors' quality management systems. However, DOD stated that the added expense of capturing data by weapon system and business area does not seem warranted at this time. Further, it commented that there is no need for the military services to engage in an effort similar to the Defense Contract Management Agency's, since the agency is working in cooperation with the military services. We are encouraged by the Defense Contract Management Agency's efforts to identify and collect data on prime contractor quality management activities on a broad scale. As we noted in this report, this is a practice used by the leading commercial companies we visited.
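To illustrate the kind of aggregation we have in mind, the minimal sketch below rolls up corrective-action data by prime contractor and business area as well as by weapon system. The schema, names, and figures are invented for illustration only and do not represent any actual Defense Contract Management Agency system or data.

```python
# Hypothetical roll-up of corrective action reports. Each record names the
# prime contractor, the business area, the weapon system, and a defect count.
from collections import defaultdict

reports = [
    ("Contractor A", "Aircraft", "Fighter X", 12),
    ("Contractor A", "Aircraft", "Transport Y", 7),
    ("Contractor A", "Missiles", "Missile Z", 3),
    ("Contractor B", "Ships", "Ship Q", 21),
]

# Aggregate across programs so trends are visible per contractor and business
# area, not just per weapon system.
by_contractor_area = defaultdict(int)
for prime, area, system, defects in reports:
    by_contractor_area[(prime, area)] += defects

for (prime, area), total in sorted(by_contractor_area.items()):
    print(f"{prime} / {area}: {total} reported defects")
```

The same records could also feed the weapon-system view DOD already has; the point is that one consistently collected data set supports both perspectives.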
During our review, the agency could provide data only on a weapon-system-by-weapon-system basis. We believe that data should be captured by both weapon system and prime contractor and that the added expense of including data by weapon system is likely minimal, given that the data are already collected that way. Considering that DOD plans to invest about $1.5 trillion (in 2007 dollars) in its current portfolio of major weapon systems, we believe it would be valuable for DOD to know how the companies and business units responsible for delivering its weapon systems are performing, as well as the quality associated with individual weapon systems. In addition, we believe the military services, particularly the Navy's Supervisor of Shipbuilding organization, which is responsible for overseeing contractor activities for shipbuilding, should identify and collect similar data so that the information collected is consistent and can be used for comparison purposes. We therefore believe this recommendation remains valid. Finally, DOD partially concurred with the recommendation for the Defense Contract Management Agency and military services to develop evaluation criteria that would allow DOD to score the performance of prime contractors' quality management systems based on actual past performance. DOD stated that it plans to develop evaluation criteria based on data the Defense Contract Management Agency plans to collect in the future. DOD does not think the military services need to develop a parallel effort because Defense Contract Management Agency data will be shared with the military services. It was not our intent for the military services, the Defense Contract Management Agency, and the Navy's Supervisor of Shipbuilding to have parallel efforts; rather, we expected that they would work collaboratively. Moreover, not only do we believe DOD should know how well the prime contractors and their respective programs are performing, as noted above, but we also believe that DOD should know how well the prime contractors' quality management systems are working. Again, this is a practice used by the leading commercial companies we visited. We are encouraged that the Defense Contract Management Agency plans to develop evaluation criteria that would be used to score prime contractor quality management systems, but we believe the department should have a consistent methodology for use across DOD. We therefore believe this recommendation remains valid. We are sending copies of this report to the Secretary of Defense and interested congressional committees. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report compares the quality management policies and practices of the Department of Defense (DOD) and its large prime contractors with those of leading commercial companies, with a focus on improving the quality of DOD weapon systems.
Specifically, we (1) determined the impact of quality problems on selected DOD weapon systems and the defense contractor practices that contributed to the problems, (2) identified practices used by leading commercial companies that can be used to improve the quality of DOD weapon systems, (3) identified problems DOD faces in terms of improving quality, and (4) identified recent DOD initiatives that could improve quality. To determine the impact of quality problems on selected DOD weapon systems and the defense contractor practices that contributed to them, we selected and reviewed 11 DOD weapon systems with known deficiencies from each of the military services and identified the quality problems associated with each deficiency. The 11 were chosen to demonstrate the types of problems DOD weapon systems experience and to help focus our discussions with leading commercial companies on aspects of development that caused DOD major quality problems. The prime contractors in charge of developing these systems include six of DOD's largest contractors; together, they are involved with a little over $1 trillion, or about 76 percent, of the $1.5 trillion (in 2006 dollars) DOD plans to spend on weapon systems in its current portfolio. The systems we reviewed, along with the prime contractors responsible for developing them, are:

Advanced SEAL Delivery System, a battery-powered submarine funded by the Special Operations Command and developed by Northrop Grumman;

Advanced Threat Infrared Countermeasure/Common Missile Warning System, a defense countermeasure system for protection against infrared-guided missiles in flight, funded primarily by the Army and developed by BAE Systems;

Expeditionary Fighting Vehicle, an amphibious and armored tracked vehicle funded by the Navy for the Marine Corps and developed by General Dynamics;

F-22A, an air superiority fighter with an air-to-ground attack capability funded by the Air Force and developed by Lockheed Martin;

Global Hawk, a high-altitude, long-endurance unmanned aircraft funded by the Air Force and developed by Northrop Grumman;

Joint Air-to-Surface Standoff Missile, an air-to-surface missile funded by the Air Force and developed by Lockheed Martin;

LPD 17, an amphibious transport ship funded by the Navy and developed by Northrop Grumman;

MH-60S, a fleet combat support helicopter funded by the Navy and developed by Sikorsky Aircraft;

Patriot Advanced Capability-3, a long-range, high-to-medium altitude missile system funded by the Army and developed by Lockheed Martin;

V-22, a tilt-rotor, vertical/short take-off and landing aircraft funded primarily by the Navy for the Marine Corps and developed jointly by Bell Helicopter Textron and Boeing Integrated Defense Systems; and

Wideband Global SATCOM, a communications satellite funded by the Air Force and developed by Boeing Integrated Defense Systems.

To evaluate each of the 11 DOD weapon systems, we examined program documentation, such as deficiency reports and corrective action reports, and held discussions with quality officials from DOD program offices, the prime contractor program offices, and either the Defense Contract Management Agency or the Supervisor of Shipbuilding office, where appropriate. Based on information gathered through documentation and discussions, we grouped the problems into three general categories: systems engineering, manufacturing, and supplier quality.
When possible, we identified the impact that quality problems had on system cost, schedule, performance, reliability, availability, or safety. After completing our weapon system reviews, we held meetings with senior quality leaders at selected prime contractors included in our review to discuss the quality problems we found and to obtain their views on why the problems occurred. To identify practices used by leading commercial companies that can be used to improve the quality of DOD weapon systems, we selected and visited five companies based on several criteria: companies that make products similar to DOD weapon systems in terms of complexity; companies that have been recognized in quality management literature or by quality-related associations and research centers for their high-quality products; companies that have won quality-related awards; and/or companies that have close relationships with customers when developing and producing products. We met with these companies to discuss their product development and manufacturing processes, supplier quality activities, and the quality of selected products they make. Much of the information we obtained from these companies is anecdotal, because releasing proprietary data could affect their competitive standing. Several of the companies provided data on specific products, which they agreed to let us include in this report. The companies we visited and the products we discussed include:

Boeing Commercial Airplanes, a leading aerospace company and a manufacturer of commercial jetliners. We met with quality officials in Seattle, Washington, and discussed the quality practices associated with the company's short-to-medium range 737 and extended range 777 aircraft, as well as its new 787 aircraft.

Cummins Inc., a manufacturer of diesel and natural gas-powered engines for on-highway and off-highway use. We met with quality officials at the company's headquarters in Columbus, Indiana, and discussed the development and quality of the ISX, a heavy-duty engine.

Kenworth Truck Company, a division of PACCAR Inc. and a leading manufacturer of heavy- and medium-duty trucks. We met with quality officials at its manufacturing plant in Chillicothe, Ohio, which was named Quality Magazine's 2006 Large Plant of the Year, to discuss the development and quality of various large trucks.

Siemens Medical Solutions, a business area within Siemens AG, a global producer of numerous products, including electronics, electrical equipment, and medical devices. We met with quality officials at a company facility in Mountain View, California, and discussed the division's quality practices for developing and manufacturing ultrasound systems such as the Sequoia ultrasound system.

Space Systems/Loral, one of the world's premier designers, manufacturers, and integrators of geostationary satellites and satellite systems. We met with quality officials at the company's headquarters in Palo Alto, California, and discussed the company's quality practices for developing satellites such as the Intelsat IX series and iPSTAR satellites.

To identify problems that DOD must overcome to improve the quality of weapon systems, we reviewed processes and tools DOD can use to influence weapon system quality. These include setting requirements, participating in key decisions during weapon system development and production, using contracts to incentivize good quality, and overseeing weapon system quality and prime contractor performance.
We examined these processes and tools for the 11 weapon programs we reviewed and discussed their use with acquisition and quality officials from the Office of the Secretary of Defense, the military services, prime contractors, the Defense Contract Management Agency, and the Supervisor of Shipbuilding. We also relied on previous GAO best practices and weapon system reports to identify DOD actions that contributed to poor quality outcomes. A comprehensive list of the reports we considered throughout our review can be found in the related products section at the end of this report. We met with officials at two commercial companies that purchase products manufactured by two of the leading commercial manufacturers included in this review:

American Airlines, the largest scheduled passenger airline in the world, which has purchased aircraft from Boeing Commercial Airplanes. We met with quality officials at a major maintenance facility in Tulsa, Oklahoma.

Intelsat, a leading provider of fixed satellite services for telecommunications, Internet, and media broadcast companies, which purchases satellites from all major satellite manufacturers in the United States and Europe. We met with officials in space systems acquisition and planning at the company's headquarters in Washington, D.C.

Our discussions focused on (1) the companies' roles in establishing requirements; (2) the types of contracts they award to manufacturers and the specificity included in the contracts in terms of quality, reliability, and penalties; and (3) the amount of oversight they exercise over their suppliers' development and manufacturing activities. To identify recent DOD initiatives that could improve weapon system quality, we reviewed DOD's formal response to Sections 804 and 853 of the John Warner National Defense Authorization Act for Fiscal Year 2007, which requires DOD to report to the congressional defense committees on acquisition reform and program management initiatives. We also met with senior defense leaders to discuss the implementation status of the acquisition reform initiatives identified in DOD's February 2007 and September 2007 reports to the committees and relied on a previous GAO report for the implementation status of planned program management improvements. We conducted this performance audit from September 2006 to December 2007 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix summarizes the quality problems experienced by the 11 DOD weapon systems we reviewed. The problems are categorized as systems engineering, manufacturing, and/or supplier quality problems. Most of the programs had problems in more than one of these categories. These summaries do not address all quality problems experienced on the programs; rather, they emphasize major problems we discussed with officials from the military services, prime contractors, and the Defense Contract Management Agency.
When possible, we include the direct impact the quality problems had on the program, the corrective actions the prime contractor or DOD took to address the problems, and the change in cost estimates and quantities from the start of program development to the present. The cost estimates were taken from DOD Selected Acquisition Reports or were program office estimates and include DOD's research, development, test and evaluation (RDT&E) and procurement expenditures on a particular program. We did not break out the portion of these funds that was paid to prime contractors versus the amount paid to suppliers. In addition, the change in cost estimates can be the result of a number of factors, including the amount paid to fix quality problems, a decision to procure more weapons, and increased labor rates or material prices. Michael Sullivan (202) 512-4841 or [email protected]. Key contributors to this report were Jim Fuquay, Assistant Director; Cheryl Andrew; Lily Chin; Julie Hadley; Lauren Heft; Laura Jezewski; Andrew Redd; Charlie Shivers; and Alyssa Weir.

Related GAO Products
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.
Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006.
Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005.
Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.
DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005.
Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD's Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.
Best Practices: Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Defense Acquisitions: Factors Affecting Outcomes of Advanced Concept Technology Demonstration. GAO-03-52. Washington, D.C.: December 2, 2002.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.
Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.
Defense Acquisitions: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998.
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD's Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.

A Senate report related to the National Defense Authorization Act for Fiscal Year 2007 asked GAO to compare quality management practices used by the Department of Defense (DOD) and its contractors to those used by leading commercial companies and make suggestions for improvement. To do this, GAO (1) determined the impact of quality problems on selected weapon systems and prime contractor practices that contributed to the problems; (2) identified commercial practices that can be used to improve DOD weapon systems; (3) identified problems that DOD must overcome; and (4) identified recent DOD initiatives that could improve quality. GAO examined 11 DOD weapon systems with known quality problems and met with quality officials from DOD, defense prime contractors, and five leading commercial companies that produce complex products and/or are recognized for quality products. Problems related to quality have resulted in major impacts on the 11 DOD weapon systems GAO reviewed—billions in cost overruns, years-long delays, and decreased capabilities for the warfighter. For example, quality problems with the Expeditionary Fighting Vehicle program were so significant that DOD extended development 4 years at a cost of $750 million. The F-22A fighter aircraft experienced cracks in the plane's canopy that grounded the flight test aircraft, and initial operating capability for the Wideband Global SATCOM satellite was delayed 18 months because a supplier installed some fasteners incorrectly. GAO's analysis of 11 DOD weapon systems illustrates that defense contractors' poor practices for systems engineering activities as well as manufacturing and supplier quality problems contributed to these outcomes. Reliance on immature designs, inadequate testing, defective parts, and inadequate manufacturing controls are some of the quality problems that GAO found. Senior prime contractor officials GAO met with generally agreed with GAO's assessment of the causes of the quality problems. In contrast, leading commercial companies GAO contacted use more disciplined systems engineering, manufacturing, and supplier quality practices. For example, rather than wait to discover defects after the fact, Boeing Commercial Airplanes tries to design parts that can be assembled only one way. Effective use of many systems engineering practices has helped Space Systems/Loral, a satellite producer, improve overall quality, for example, by allowing the company to operate its satellites for more than 80 million consecutive hours in orbit with just one failure. Companies also put significant effort into validating product design and production processes to catch problems early on, when problems are less costly to fix. They conduct regular audits of their suppliers and hold them accountable for quality problems. DOD faces its own set of challenges—setting achievable requirements for systems development and providing effective oversight during the development process. In conducting systems development, DOD generally pays the allowable costs incurred for the contractor's best efforts.
These conditions contribute to an acquisition environment that is not conducive to incentivizing contractors to build high-quality weapon systems. DOD, which typically uses cost-reimbursement contracts to develop weapon systems, assumes most of the risks and pays contractors to fix most of the problems. DOD has taken steps to improve its acquisition practices by experimenting with a new concept decision review practice, selecting different acquisition approaches according to expected fielding times, and establishing panels to review weapon system configuration changes that could adversely affect program cost and schedule. None of these initiatives focus exclusively on quality issues, and none specifically address problems with defense contractors' practices.
To determine what factors contributed to the failure of ASPR's procurement effort with VaxGen, we interviewed officials from HHS's components—ASPR, NIAID, the Food and Drug Administration (FDA), and the Centers for Disease Control and Prevention (CDC). In addition, we reviewed documents these agencies provided. We visited and interviewed the officials of the two companies—Avecia and VaxGen—that NIAID contracted with to develop the new rPA anthrax vaccine. We also talked to officials of several biotech companies that are currently working on biodefense medical countermeasures. We consulted with a small group of experts in the manufacturing of biodefense vaccines to ensure that our assessments were accurate. Finally, we reviewed scientific literature on vaccine development, manufacturing, and safety and efficacy, including regulatory requirements for licensing. To identify issues associated with using the licensed anthrax vaccine (BioThrax) in the stockpile, we interviewed officials from ASPR, CDC, and DOD. In addition, we reviewed documents these agencies provided and analyzed data on stockpile inventory of the licensed anthrax vaccine. We visited and interviewed officials from Emergent BioSolutions, the company that manufactures the licensed anthrax vaccine. We also talked to officials of several biotech companies that are currently working on biodefense medical countermeasures to obtain their views on ways to minimize waste in the stockpile. We conducted our review from June 2007 through August 2007 in accordance with generally accepted government auditing standards. Following the anthrax attacks of 2001, the federal government determined that it would need additional medical countermeasures (for example, pharmaceuticals, vaccines, diagnostics, and other treatments) to respond to an attack involving chemical, biological, radiological, or nuclear (CBRN) agents. The Project BioShield Act of 2004 (Public Law 108-276) was designed to encourage private companies to develop civilian medical countermeasures by guaranteeing a market for successfully developed countermeasures. The Project BioShield Act (1) relaxes some procedures for bioterrorism-related procurement, hiring, and research grant awarding; (2) allows for the emergency use of countermeasures not approved by FDA; and (3) authorizes 10-year funding (available through fiscal year 2013) to encourage the development and production of new countermeasures for CBRN agents. The act also authorizes HHS to procure these countermeasures for the Strategic National Stockpile. Project BioShield is a procurement program that allows the government to enter into contracts to procure countermeasures while they still are in development, up to 8 years before product licensure is expected. Under this program, the government agrees to buy a certain quantity of successfully developed countermeasures for the Strategic National Stockpile at a specified price once the countermeasure meets specific requirements. The government pays the agreed-upon amount only after these requirements are met and the product is delivered to the Strategic National Stockpile. If the product does not meet the requirements within the specified time frame, the contract can be terminated without any payment to the contractor.
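The pay-on-delivery arrangement just described can be made concrete with a short sketch. The following Python fragment is illustrative only: the price, dose counts, and names are hypothetical rather than terms of any actual BioShield contract, and it encodes nothing beyond the rule stated above, under which payment is owed only for product that meets the requirements and is delivered within the specified time frame.

from dataclasses import dataclass

@dataclass
class BioShieldContract:
    """Hypothetical pay-on-delivery contract, per the rule described in the text."""
    price_per_dose: float   # agreed price, fixed when the contract is signed
    required_doses: int     # quantity the government agrees to buy
    deadline_year: int      # time frame within which requirements must be met

    def payment_for_delivery(self, doses: int, meets_requirements: bool, year: int) -> float:
        # The government pays the agreed-upon amount only after the
        # requirements are met and the product is delivered to the stockpile.
        if meets_requirements and year <= self.deadline_year:
            # No payment accrues for doses beyond the agreed quantity.
            return min(doses, self.required_doses) * self.price_per_dose
        # Otherwise nothing is owed; if the requirements are not met within
        # the specified time frame, the contract can be terminated without
        # any payment to the contractor.
        return 0.0

contract = BioShieldContract(price_per_dose=10.0, required_doses=25_000_000, deadline_year=2006)
print(contract.payment_for_delivery(5_000_000, meets_requirements=True, year=2005))   # 50000000.0
print(contract.payment_for_delivery(5_000_000, meets_requirements=False, year=2005))  # 0.0

The two calls show the same delivery paying either in full or not at all, depending solely on whether the acceptance requirements are met.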
Thus, while Project BioShield reduces the producer's market risk—that is, the possibility that no customer will buy the successfully developed product—it does not reduce the development risk to the producer—that is, the possibility that the countermeasure will fail during development. In December 2006, the Pandemic and All-Hazards Preparedness Act (Public Law 109-417) modified the Project BioShield Act to allow for milestone-based payments before countermeasure delivery for up to half of the total award. Within HHS, the Biomedical Advanced Research and Development Authority (BARDA) has the authority to directly fund the advanced development of countermeasures that are not eligible for Project BioShield contracts. Project BioShield procurement involves actions by the Department of Homeland Security (DHS), HHS (including ASPR, NIAID, FDA, and CDC), and an interagency working group. The first step in the Project BioShield acquisition process is to determine whether a particular CBRN agent poses a material threat to national security. DHS performs this analysis, which is generally referred to as a population threat assessment (PTA). On the basis of this assessment, the DHS Secretary determines whether that agent poses a material threat to national security. The Project BioShield Act of 2004 requires such a written PTA for procurements using BioShield funds and authorities. This declaration neither addresses the relative risk posed by an agent nor determines the priority for acquisition, which is solely determined by ASPR. Furthermore, the issuance of a PTA does not guarantee that the government will pursue countermeasures against that agent. DHS has issued PTAs for 13 agents, including the biological agents that cause anthrax; multi-drug-resistant anthrax; botulism; glanders; melioidosis; tularemia; typhus; smallpox; plague; and the hemorrhagic fevers Ebola, Marburg, and Junin. Various offices within HHS (ASPR, NIAID, FDA, and CDC) fund the research and development, procurement, and storage of medical countermeasures, including vaccines, for the Strategic National Stockpile. ASPR's role: ASPR is responsible for the entire Project BioShield contracting process, including issuing requests for information and requests for proposals, awarding contracts, managing awarded contracts, and determining whether contractors have met the minimum requirements for payment. ASPR maintains a Web site detailing all Project BioShield solicitations and awards. ASPR has the primary responsibility for engaging with the industry and awarding contracts for large-scale manufacturing of licensable products, including vaccines, for delivery into the Strategic National Stockpile. With authorities recently granted, BARDA will be able to use a variety of funding mechanisms to support the advanced development of medical countermeasures and to award up to 50 percent of the contract as milestone payments before purchased products are delivered. NIAID's role: NIAID is the lead agency in NIH for early candidate research and development of medical countermeasures for biodefense. NIAID issues grants and awards contracts for research on medical countermeasure exploration and early development, but it has no responsibility for taking research forward into marketable products. FDA's role: Through its Center for Biologics Evaluation and Research (CBER), FDA licenses many biological products, including vaccines, and the facilities that produce them.
Manufacturers are required to comply with current Good Manufacturing Practices regulations, which regulate personnel, buildings, equipment, production controls, records, and other aspects of the vaccine manufacturing process. FDA has also established the Office of Counterterrorism Policy and Planning in the Office of the Commissioner, which issued the draft Guidance on the Emergency Use Authorization of Medical Products in June 2005. This EUA guidance describes in general terms the data that should be submitted to FDA, when available, for unapproved products or unapproved uses of approved products that HHS or another entity wishes FDA to consider for use in the event of a declared emergency. The final EUA guidance was issued in July 2007. CDC's role: Since 1999, CDC has had the major responsibility for managing and deploying the medical countermeasures stored in the Strategic National Stockpile. The Omnibus Consolidated and Emergency Supplemental Appropriations Act (Public Law 105-277) first provided the stockpile with a fund specially appropriated for purchases. Since then, CDC has maintained this civilian repository of medical countermeasures, such as antibiotics and vaccines. DOD is not currently a part of Project BioShield. Beginning in 1998, DOD had a program to vaccinate all military service members with BioThrax. DOD's program prevaccinates personnel with BioThrax for deployment to Iraq, Afghanistan, and the Korean peninsula. For other deployments, this vaccination is voluntary. DOD also has a program to order, stockpile, and use the licensed anthrax vaccine. DOD estimates its needs for BioThrax doses and bases its purchases on that estimate. Multiple agencies, including HHS and DHS, provide input on priority-setting and requirements activities. For BioShield purchases, the Secretaries of HHS and DHS prepare a joint recommendation, which requires presidential approval before HHS enters into a procurement contract. The Secretary of HHS currently coordinates the interagency process; the National Science and Technology Council previously handled the coordination. Anthrax is a rare but serious acute infectious disease that must be treated quickly with antibiotics. Anthrax is caused by the spore-forming bacterium Bacillus anthracis. It occurs most commonly in herbivores in agricultural regions that have less effective veterinary and public health programs. Anthrax can infect humans who have been exposed to infected animals or products from infected animals such as hide, hair, or meat. Human anthrax occurs rarely in the United States from these natural causes. However, the anthrax exposures in September and October 2001 through mail intentionally contaminated with anthrax spores resulted in illness in 22 persons and the deaths of 5. An FDA-licensed anthrax vaccine, BioThrax, has been available since 1970. The vaccine has been recommended for laboratory workers who are involved in the production of cultures of anthrax or who risk repeated exposure to anthrax by, for example, conducting confirmatory or environmental testing for anthrax in the U.S. Laboratory Response Network for Bioterrorism laboratories; persons who may be required to make repeated entries into known Bacillus anthracis-contaminated areas after a terrorist attack, such as remediation workers; and persons who work with imported animal hides, furs, or similar materials, if the industry standards and restrictions that help to control the disease are insufficient to prevent exposure to anthrax spores.
Preventive anthrax vaccine is not recommended for civilians who do not have an occupational risk. However, in 1998, DOD began a mandatory program to administer the vaccine to all military personnel for protection against possible exposure to anthrax-based biological weapons. By late 2001, roughly 2 million doses of the vaccine had been administered, most of them to U.S. military personnel. As the vaccination program proceeded, some military personnel raised concerns about the safety and efficacy of the vaccine. The BioShield program stockpiled BioThrax for the Strategic National Stockpile for postexposure use in the event that a large number of U.S. civilians were exposed to anthrax. ASPR officials characterized the acquisition of the licensed vaccine as a "stopgap" measure, as they have also been engaged in the development and purchase of a new rPA anthrax vaccine. ASPR had already acquired 10 million doses of BioThrax from Emergent BioSolutions by 2006 and recently purchased an additional 10 million doses. Vaccine research and development leading to FDA approval for use is a long and complex process. It may take 15 years and, according to FDA, cost from $500 million to $1.2 billion, and it requires specialized expertise. Vaccines are complex biological products given to a person or animal to stimulate an immune reaction the body can "remember" if it is exposed to the same pathogen later. In contrast to most drugs, they have no simple chemical characterization. As a result, evaluating them involves measuring their effects on living organisms, and their quality can be guaranteed only through a combination of in-process tests, end-product tests, and strict controls of the entire manufacturing process. Vaccines are highly perishable and typically require cold storage to retain potency. Even if they are stored at the recommended temperature, most vaccines have expiration dates beyond which they are considered outdated and should not be used. A great deal of attention is directed to using the vaccine before its expiration date. For example, a recent CDC manual advises users: "Check expiration date on container" and "rotate stock so that the earliest dated material is used first." After the storage vial has been opened, the vaccine begins to deteriorate quickly in many cases, often necessitating that the opened or reconstituted vaccine be used within minutes to hours or be discarded. Since human challenge studies cannot be conducted for CBRN medical countermeasures, FDA requires animal efficacy data instead. The FDA process for approving a biologic for use in the United States begins with an investigational new drug (IND) application. A sponsor that has developed a candidate vaccine applies to start the FDA oversight process of formal studies, regulated by CBER within FDA. Phase 1 trials involve safety and immunogenicity studies in a small number of healthy volunteer subjects. Phase 2 and phase 3 trials gather evidence of the vaccine's effectiveness in ever larger groups of subjects, providing the documentation of effectiveness and important additional safety data required for licensing. If the data raise safety or effectiveness concerns at any stage of clinical or animal studies, FDA may request additional information or halt ongoing clinical studies. In vaccine development, clinical trials typically last up to 6 years. After they have been successfully completed, the sponsor applies for FDA's approval to market the product.
FDA’s review of the license application includes review of the manufacturing facility and process. According to FDA, this process is typically completed within 10 months for a standard review and 6 months for a priority review. According to industry sources, the challenge in scaling up vaccine production from a research laboratory to a large manufacturing environment while still maintaining quality requires much skill, sophisticated facilities, and a great deal of experience. Three major factors contributed to the failure of the first Project BioShield procurement effort. First, ASPR awarded the first BioShield procurement contract to VaxGen when its product was at a very early stage of development and many critical manufacturing issues had not been addressed. Second, VaxGen took unrealistic risks in accepting the contract terms. Third, key parties did not clearly articulate and understand critical requirements at the outset. ASPR’s decision to launch the VaxGen procurement contract for the rPA anthrax vaccine at an early stage of development, combined with the delivery requirement for 25 million doses within 2 years, did not take the complexity of vaccine development into consideration and was overly aggressive. Citing the urgency involved, ASPR awarded the procurement contract to VaxGen several years before the planned completion of earlier and uncompleted NIAID development contracts with VaxGen and thus preempted critical development work. (For a time line of events for the first rPA anthrax vaccine development and procurement effort, see appendix I). In response to the anthrax attacks of 2001, NIAID was assigned responsibility for developing candidate vaccines leading up to licensure, purchase, and storage in the stockpile. NIAID envisioned a strategy of minimizing risk by awarding contracts to multiple companies to help ensure that at least one development effort would be successful. NIAID’s strategy was appropriate since failure is not uncommon in vaccine development. Toward this end, NIAID designed a sequence of two contracts—one to follow the other—to advance pilot lots of rPA anthrax vaccine through early characterization work, phase 1 and phase 2 clinical trials, accelerated and real-time (long-term) stability testing, and tasks to evaluate the contractor’s ability to manufacture the vaccine in large quantities according to current Good Manufacturing Practices (cGMP). Additionally, these contracts were cost reimbursable, an appropriate contracting mechanism when uncertainties involved in contract performance do not permit cost to be estimated with sufficient accuracy to use a fixed-price contract. VaxGen was one of the awardees. The other awardee was Avecia, Ltd., of Manchester, United Kingdom. NIAID’s development effort with Avecia to prepare a candidate rPA anthrax vaccine for potential purchase for the stockpile is ongoing. VaxGen’s first development contract, awarded in September 2002, had three major requirements: characterize the chemical composition of the pilot lot; conduct phase 1 clinical trials to determine the basic safety profile of the vaccine; and produce a feasibility plan to manufacture, formulate, fill and finish, test, and deliver up to 25 million doses of cGMP vaccine. The initial period of performance for this first contract was 15 months, to be completed in September 2003. However, NIAID twice extended the period of performance to accommodate problems, including stability testing. The final completion date of the contract was December 2006. 
The second development contract was awarded to VaxGen in September 2003 to continue development of its vaccine. This contract covered 36 months and was scheduled to end in October 2006. Three of the major requirements were to (1) manufacture, formulate, fill, finish, release, and deliver 3 million to 5 million doses of vaccine from at least three different lots that met cGMP requirements; (2) develop, implement, and execute accelerated and real-time stability testing programs to ensure the safety, sterility, potency, and integrity of the vaccine; and (3) conduct phase 2 clinical trials. This second development contract covered especially critical steps in the development cycle. For example, only during the phase 2 trials is the vaccine given to a large enough number of human subjects to further assess its safety. Under the contract, phase 2 clinical trials, which were to determine the optimum dose and dosing regimen, were expected to take 2 years to complete. The accelerated and real-time stability testing programs covered by this second contract were also critical: vaccines, especially those intended to be stockpiled, need to exhibit the necessary stability to ensure they will remain safe and potent for the required storage period. In early 2004, VaxGen's product entered particularly critical stages of development and scale-up production. According to industry officials we talked to, the challenge in scaling up vaccine production from a research pilot lot to a large manufacturing environment while still maintaining quality is not trivial. It requires a great deal of skill, sophisticated facilities, and experience. The officials also stated that work on the vaccine at this point would have been expected to take multiple years to complete, during which time the contractor would work back and forth with FDA in evaluating, testing, and then reworking both its product and manufacturing capability against criteria for eventual licensure. However, on November 4, 2004, a little more than a year after NIAID awarded VaxGen its second development contract, ASPR awarded the procurement contract to VaxGen for 75 million doses of its rPA anthrax vaccine. At that time, VaxGen was still at least a year away from completing the phase 2 clinical trials under the second NIAID development contract. Moreover, VaxGen was still completing work on the original stability testing required under the first development contract. ASPR officials at the time of the award had no objective criteria, such as Technology Readiness Levels (TRL), to assess product maturity. They were, however, optimistic that the procurement contract would be successful. One official described its chances of success at 80 percent to 90 percent. However, a key official at VaxGen told us that, at the same time, VaxGen estimated the chances of success at 10 percent to 15 percent. ASPR now estimates that prior to award, the rPA vaccine was at a TRL rating of 8. According to industry experts, a candidate vaccine product at such a level is generally expected to be 5-8 years away from completion and to have only a 30 percent chance of development into a successful vaccine. When we asked ASPR officials why they awarded the procurement contract when they did, they pointed to a sense of urgency at that time and the difficulties in deciding when to launch procurement contracts.
However, November 2004 was 3 years after the anthrax attacks in 2001, and while the sense of urgency was still important, it could have been tempered with realistic expectations. According to industry experts, preempting the development contract 2 years before completing work—almost half its scheduled milestones—was questionable, especially for vaccine development work, which is known to be susceptible to technical issues even in late stages of development. NIAID officials also told us that, in their opinion, it was too early for a BioShield purchase. At a minimum, the time extensions for NIAID's first development contract with VaxGen to accommodate stability testing should have indicated to ASPR that development of its candidate vaccine was far from complete. After ASPR awarded VaxGen the procurement contract, NIAID canceled several milestones under its development contract with VaxGen to free up funds for earlier milestones that VaxGen was having trouble meeting. However, this undermined VaxGen's ability to refine product development up to the level needed to ensure delivery within the 2-year time frame required under the procurement contract. VaxGen officials told us that they understood their chances for success were limited and that the contract terms posed significant risks. These risks arose from aggressive time lines, VaxGen's limitations with regard to in-house technical expertise in stability and vaccine formulation—a condition exacerbated by the attrition of key staff from the company as the contract progressed—and its limited options for securing additional funding should the need arise. Industry experts told us that a 2-year time line to deliver 75 million filled and finished doses of a vaccine from a starting point just after phase 1 trials is a near-impossible task for any company. VaxGen officials told us that at the time of the procurement award they knew the probability of success was very low, but they were counting on ASPR's willingness to be flexible with the contract time line and work with them to achieve success. In fact, in May 2006, ASPR did extend the contract deadline for initiating delivery to the stockpile by an additional 2 years. However, on November 3, 2006, FDA imposed a clinical hold on VaxGen's forthcoming phase 2 trial after determining that data submitted by VaxGen were insufficient to ensure that the product would be stable enough to resume clinical testing. By that time, ASPR had lost faith in VaxGen's technical ability to solve its stability problems in any reasonable time frame. When VaxGen failed to meet a critical performance milestone of initiating the next clinical trial, ASPR terminated the contract. According to VaxGen's officials, throughout the two development contracts and the Project BioShield procurement contract, VaxGen's staff peaked at only 120, and the company was consistently unable to marshal sufficient technical expertise. While it is not known how a larger pharmaceutical company might have fared under similar time constraints, we believe more established pharmaceutical companies have staff and resources better able to handle the inevitable problems that arise in vaccine development and licensure efforts. For example, according to industry experts, a large firm might be able to leverage an entire internal department to reformulate a vaccine or pursue solutions to a stability issue, while a smaller biotechnology company like VaxGen would likely be unable to use more than a few full-time scientists.
In such situations, the smaller company might have to contract out for the necessary support, provided it can be found within a suitable time frame. External expertise that might have helped VaxGen better understand its stability issue was never applied. At one point during the development contracts, NIAID—realizing VaxGen had a stability problem with its product—convened a panel of technical experts in Washington, D.C. NIAID officials told us that at the time of the panel meeting, they offered to fund technical experts to work with the company, but VaxGen opted not to accept the offer. Conversely, VaxGen officials reported to us that at the time NIAID convened the panel of experts, NIAID declined to fund the work recommended by the expert panel. The lack of available technical expertise was exacerbated when key staff at the company began leaving. A senior VaxGen official described the attrition problem as "massive." Of special significance, VaxGen's Senior Vice President for Research and Development and Chief Scientific Officer left during critical phase 2 trials. An official at VaxGen described this person's role as key in both development of the assays and reformulation of the vaccine. Finally, VaxGen accepted the procurement contract terms even though the financial constraints imposed by the BioShield Act limited its options for securing any additional funding needed. In accordance with this act, payment was conditional on delivery of a product to the stockpile, and little provision could be made, contractually, to support any unanticipated or additional development needed—for example, to work through issues of stability or reformulation. Both problems are frequently encountered throughout the developmental life of a vaccine. This meant that the contractor would pay for any development work needed on the vaccine. VaxGen, as a small biotechnology company, had limited internal financial resources and was dependent on being able to attract investor capital for any major influx of funds. In such a firm-fixed-price contractual arrangement, the contractor assumes most of the risk because the price is not subject to any adjustment based on the contractor's cost experience. Thus, even if the contractor's costs go up, the delivery price does not. We believe these contracts are appropriate in situations where there are no performance uncertainties or where the uncertainties can be identified and reasonable estimates of their cost impact can be made, but this was not the situation in the VaxGen procurement contract. VaxGen had to be willing to accept the firm-fixed-price contract and assume the risks involved. VaxGen did so even though it understood that development of its rPA vaccine was far from complete when the procurement contract was awarded and that the contract posed significant inherent risks. Important requirements regarding the data and testing required for the rPA anthrax vaccine to be eligible for use in an emergency were not known at the outset of the procurement contract. They were defined in 2005 when FDA introduced new general guidance on EUA. In addition, ASPR's anticipated use of the rPA anthrax vaccine was not articulated to all parties clearly enough and evolved over time. Finally, purchase of BioThrax raised the requirement for the use of the VaxGen rPA vaccine. All of these factors created confusion over the acceptance criteria for VaxGen's product and significantly diminished VaxGen's ability to meet contract time lines.
Criteria for product acceptance need to be clearly articulated and understood by all parties before committing to a major procurement. Terms of art that leave critical requirements unclear are problematic in contract language. After VaxGen received its procurement contract, draft guidance was issued that addressed the eventual use of any unlicensed product in the stockpile. This created confusion over the criteria against which VaxGen's product would be evaluated, strained relations between the company and the government, and caused a considerable amount of turmoil within the company as it scrambled for additional resources to cover unplanned testing. In June 2005, FDA issued draft EUA guidance, which described for the first time the general criteria that FDA would use to determine the suitability of a product for use in an emergency. This was 7 months after the award of the procurement contract to VaxGen and 14 months after the due date for bids on that contract. Since the request for proposal for the procurement contract was issued and the award itself was made before the EUA guidance was issued, neither could take the EUA requirements into consideration. The procurement contract wording stated that in an emergency, the rPA anthrax vaccine was to be "administered under a 'Contingency Use' Investigational New Drug (IND) protocol" and that vaccine acceptance into the stockpile was dependent on the accumulation and submission of the appropriate data to support the "use of the product (under IND) in a postexposure situation." FDA officials told us that they do not use the phrase "contingency use" in connection with IND protocols. When we asked ASPR officials about the requirements for use defined in the contract, they said that the contract specifications were consistent with the statute and the needs of the stockpile. They said their contract used "a term of art" for BioShield products. That is, the contractor had to deliver a "usable product" under FDA guidelines. The product could be delivered to the stockpile only if sufficient data were available to support emergency use. ASPR officials told us that FDA would define "sufficient data" and the testing hurdles a product needed to overcome to be considered a "usable product." While VaxGen and FDA had monthly communication, according to FDA, data requirements for emergency use were not discussed until December 2005, when VaxGen asked FDA what data would be needed for emergency use. In January 2006, FDA informed VaxGen, under its recently issued draft EUA guidance, of the data FDA would require from VaxGen for its product to be eligible for consideration for use in an emergency. The draft guidance described in general FDA's current thinking concerning what FDA considered sufficient data and the testing needed for a product to be considered for authorization in certain emergencies. Because the EUA guidance was intended to create a more feasible protocol for using an unapproved product in a mass emergency than the "contingency use under an IND protocol" cited in ASPR's procurement contract, it may require more stringent safety and efficacy data. Under an IND protocol, written, informed consent must be received before administering the vaccine to any person, and reporting requirements identical to those in a human clinical trial apply. The EUA guidance—as directed by the BioShield law—eased both informed consent and reporting requirements.
This makes sense in terms of the logistics of administering vaccine to millions of people in the large-scale, postexposure scenarios envisioned. Because the EUA guidance defines a less stringent requirement for the government to use the product, it correspondingly may require more testing and clinical trial work than was anticipated under contingency use. Several of the agencies and companies involved in BioShield-related work have told us the EUA guidance appears to require a product to be further along the development path to licensure than the previous contingency use protocols would indicate. VaxGen officials told us that if the draft EUA guidance was the measure of success, then VaxGen estimated that significant additional resources would be needed to complete testing to accommodate the expectations under this new guidance. NIAID told us that the EUA guidance described a product considerably further along the path to licensure (85 percent to 90 percent) than it had assumed for a Project BioShield medical countermeasure (30 percent) when it initially awarded the development contracts. FDA considers a vaccine's concept of use important information to gauge the data and testing needed to ensure the product's safety and efficacy. Under the EUA statute, FDA must determine on the basis of the specific facts presented whether it is necessary and appropriate to authorize use of a specific product in an emergency. According to FDA, data and testing requirements to support a product's use in an emergency context may vary depending on many factors, including the number of people to whom the product is expected to be administered. Currently, use of an unlicensed product typically involves the assessment of potential risks and benefits from use of an unapproved drug in a very small number of people who are in a potentially life-threatening situation. In such situations, because of the very significant potential for benefit, the safety and efficacy data needed to make the risk-benefit assessment might be less extensive than in an emergency situation where an unlicensed vaccine might be offered to millions of healthy people. This distinction is critical for any manufacturer of a product intended for use in such scenarios—it defines the level of data and testing required. Product development plans and schedules rest on these requirements. In late 2005, as VaxGen was preparing for the second phase 2 trial and well into its period of performance under the procurement contract, its officials participated in meetings, primarily with FDA but also with ASPR and NIAID representatives, to receive FDA comments on its product development plans and responses to specific requests for regulatory advice. VaxGen needed to have a clear understanding of FDA's data and testing requirements for the rPA vaccine for the upcoming phase 2 trial to be able to plan for and implement the necessary clinical and nonclinical work to generate those data. Without it, VaxGen did not have adequate means to determine how far along it was toward meeting FDA's requirements. However, in these meetings, it became clear that FDA and the other parties had different expectations for the next phase 2 trial. FDA officials concluded from the discussion that VaxGen, ASPR, and CDC anticipated that the next phase 2 trial would produce meaningful safety and efficacy data to support use of the vaccine in a contingency protocol under IND. However, FDA officials stated that this was a new idea to the agency.
From FDA’s perspective, the purpose of phase 2 trials was to place the product and sponsor (VaxGen) in the best position possible to design and conduct a pivotal phase 3 trial in support of licensure. The lack of a definition of concept of use caused FDA to delay replying to VaxGen until it could confer with ASPR and CDC to clarify this issue. Thus, we conclude that neither VaxGen nor FDA understood the rPA anthrax vaccine concept of use until this meeting. The introduction of BioThrax into the stockpile undermined the criticality of getting an rPA vaccine into the stockpile and, at least in VaxGen’s opinion, forced FDA to hold it to a higher standard that the company had neither the plans nor the resources to achieve. ASPR purchased 10 million doses of BioThrax in 2005 and 2006 as a stopgap measure for post- exposure situations. After discussions between VaxGen and FDA, VaxGen concluded that this raised the bar for its rPA vaccine. Although BioThrax is currently licensed for use in pre-exposure, and not postexposure, scenarios, the draft EUA guidance states that FDA will evaluate each EUA candidate’s safety and efficacy profile. The EUA guidance states that FDA will “authorize” an unapproved or unlicensed product—such as the rPA anthrax vaccine candidate—only if “there is no adequate, approved and available alternative.” According to the minutes of the meeting between FDA and VaxGen, in January 2006, FDA reported that the unlicensed rPA anthrax vaccine would be used in an emergency after the stockpiled BioThrax, that is, “when all of the currently licensed had been deployed.” This diminished the likelihood of a scenario where the rPA vaccine might be expected to be used out of the stockpile. We identified two issues related to using the BioThrax in the Strategic National Stockpile. First, ASPR lacks an effective strategy to minimize waste. As a consequence, based on current inventory, over $100 million is likely to be wasted annually, beginning in 2008. Three lots of BioThrax vaccine in the stockpile have already expired, resulting in losses of over $12 million. According to the data provided by CDC, 28 lots of BioThrax vaccine will expire in calendar year 2008. ASPR paid approximately $123 million for these lots. For calendar year 2009, 25 additional lots—valued at about $106 million—will reach their expiration dates. ASPR could minimize the potential waste of these lots by developing a single inventory system with DOD—which uses large quantities of the BioThrax vaccine— with rotation based on a first-in, first-out principle. Because DOD is a high-volume user of the BioThrax vaccine, ASPR could arrange for DOD to draw vaccine from lots long before their expiration dates. These lots could then be replenished with fresh vaccine from the manufacturer. DOD, ASPR, industry experts, and Emergent BioSolutions (the manufacturer of BioThrax) agree that rotation on a first-in, first-out basis would minimize waste. DOD and ASPR officials told us that they discussed a rotation option in 2004 but identified several obstacles. In July 2007, DOD officials believed they might not be able to transfer funds to ASPR if DOD purchases BioThrax from ASPR. However, in response to our draft report, DOD informed us that funding is not an issue. However, ASPR continues to believe that transfer of funds would be a problem. DOD stated smallpox vaccine (Dryvax) procurement from HHS is executed under such an arrangement. 
Beyond funding, DOD and ASPR officials told us that they use different authorities to indemnify the manufacturer against any losses or problems that may arise from use of the vaccine. According to DOD, this area may require legislative action to ensure that vaccine purchased by ASPR can be used in the DOD immunization program. Finally, since DOD vaccinates its troops at various locations around the world, there may be logistical distribution issues. A DOD official acknowledged that these issues could be resolved. Second, ASPR plans to use expired vaccine from the stockpile, which would violate FDA's current rules. Data provided by CDC indicated that two lots of BioThrax vaccine expired in December 2006 and one in January 2007. CDC officials stated that their policy is to dispose of expired lots since they cannot be used and continuing storage results in administrative costs. FDA rules prohibit the use of expired vaccine. Nevertheless, according to CDC officials, ASPR told CDC not to dispose of the three lots of expired BioThrax vaccine. ASPR officials told us that ASPR's decision was based on the possible need to use these lots in an emergency. ASPR's planned use of expired vaccine would violate FDA's current rules and could undermine public confidence because ASPR would be unable to guarantee the potency of the vaccine. The termination of the first major procurement contract for an rPA anthrax vaccine raised important questions about the government's approach to developing a new anthrax vaccine and to building a robust, sustainable biodefense medical countermeasure industry by bringing pharmaceutical and biotechnology firms into partnership with the government. With the termination of the contract, the government does not have a new, improved anthrax vaccine for the public, and the rest of the biotech industry is now questioning whether the government can clearly define its requirements for future procurement contracts. Because HHS components have not completed a formal lessons-learned exercise since terminating VaxGen's development and procurement contracts, they may repeat the same mistakes in the future in the absence of a corrective plan. Articulating concepts of use and all critical requirements clearly at the outset for all future medical countermeasures would help the HHS components involved in the anthrax procurement process to avoid past mistakes. If this is not done, the government risks losing the future interest and participation of the biotechnology industry. Given that the amount of money appropriated to procure medical countermeasures for the stockpile is limited, it is imperative that ASPR develop effective strategies to minimize waste. Since vaccines are perishable commodities that should not be used after their expiration dates, finding other users for the stockpile products before they expire would minimize waste. Because DOD requires a large amount of the BioThrax vaccine on an annual basis, it could use a significant portion of the BioThrax in the stockpile before it expires. To avoid repeating the mistakes that led to the failure of the first rPA procurement effort, we recommend that the Secretary of HHS direct ASPR, NIAID, FDA, and CDC to ensure that the concept of use and all critical requirements are clearly articulated at the outset for any future medical countermeasure procurement. To ensure public confidence and comply with FDA's current rules, we recommend that the Secretary of HHS direct ASPR to destroy the expired BioThrax vaccine in the stockpile.
To minimize waste of the BioThrax vaccine in the stockpile, we recommend that the Secretaries of HHS and DOD develop a single integrated inventory system for the licensed anthrax vaccine, with rotation based on a first-in, first-out principle. We provided a draft of this report to the Department of Health and Human Services and the Department of Defense for review and comment. HHS and DOD provided written comments on our draft, which are reprinted in appendixes II and III, respectively. Both agencies also provided technical comments, which we have addressed in the report text as appropriate. HHS and DOD generally concurred with our recommendations. However, with regard to our recommendation on an integrated stockpile, they identified funding and legal challenges to developing an integrated inventory system for BioThrax in the stockpile, which may require legislative action. Although HHS and DOD use different authorities to address BioThrax liability and funding issues, both authorities could apply to either DOD or HHS; consequently, indemnity does not appear to be an insurmountable obstacle for future procurements. HHS also disagreed with a number of our specific findings. We have addressed these areas of disagreement in detailed comments in appendix II. We are sending copies of this report to the Secretary of Defense and the Secretary of Health and Human Services. We are also sending a copy of this report to other interested congressional members and committees. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report or would like additional information, please contact me at (202) 512-6412 or [email protected], or Sushil K. Sharma, Ph.D., Dr.PH, at (202) 512-3460 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report included Noah Bleicher, William Carrigg, Barbara Chapman, Crystal Jones, Jeff McDermott, and Linda Sellevaag.

Appendix I: Time line of events for the first rPA anthrax vaccine development and procurement effort
National Institute of Allergy and Infectious Diseases (NIAID) issues first rPA anthrax vaccine request for proposal (RFP).
NIAID awards rPA contracts to Avecia and VaxGen for first RFP.
NIAID issues second rPA anthrax vaccine RFP.
Health and Human Services (HHS) issues request for information (RFI) for large-scale manufacturing capabilities for next generation anthrax vaccines.
NIAID awards Avecia and VaxGen contracts for second rPA RFP.
HHS issues Strategic National Stockpile rPA anthrax vaccine RFP.
President George W. Bush signs Project BioShield into law.
HHS awards Strategic National Stockpile contract to VaxGen for rPA anthrax vaccine procurement.
HHS awards Emergent BioSolutions a Strategic National Stockpile contract for 5 million doses of BioThrax vaccine.
Food and Drug Administration (FDA) issues draft Guidance for Emergency Use Authorization of Medical Products.
NIAID issues RFP for third-generation anthrax vaccine.
HHS issues broad RFI regarding Technology Readiness Levels for medical countermeasures.
HHS issues draft Public Health Emergency Medical Countermeasure Enterprise (PHEMCE) Strategy.
FDA issues clinical hold notice on VaxGen's trial.
HHS issues "cure" notice on VaxGen.
HHS terminates contract with VaxGen for rPA anthrax vaccine.
NIAID cancels RFP for third-generation anthrax vaccine.
HHS issues PHEMCE Strategy.
HHS issues PHEMCE Implementation Plan.
Biomedical Advanced Research and Development Authority (BARDA) releases presolicitation notice for BioThrax.
BARDA releases sources sought notice for rPA vaccine.

The following are GAO's comments on the Department of Health and Human Services' letter dated October 4, 2007.
1. Our draft report acknowledged the Office of the Assistant Secretary for Preparedness and Response's (ASPR) sense of urgency to develop an rPA anthrax vaccine following the 2001 attacks. However, our report also stated that by November 2004, ASPR had had sufficient time and opportunity to thoroughly evaluate contractual risks and issues without being overly influenced by the sense of urgency. By November 2004, it was clear that significant manufacturing issues needed to be overcome and that a 2-year time scale to produce 25 million doses was accordingly unrealistic.
2. We agree that ASPR has taken several steps to develop and communicate its strategy and plans to acquire medical countermeasures to potential manufacturers. In addition, HHS has conducted several workshops to stimulate discussion with potential manufacturers. However, these steps were taken just before or after VaxGen's procurement contract was terminated. While we reviewed the HHS Public Health Emergency Medical Countermeasures Enterprise Strategy and Implementation Plan for Chemical, Biological, Radiological, and Nuclear Threats, we did not find these documents to be relevant to our evaluation of ASPR's performance with regard to VaxGen's procurement contract.
3. ASPR's definition of the concept of use refers, as expressed in its comments, to the anthrax vaccine in combination with antibiotics as postexposure prophylaxis. However, our report discusses the potential use of the unlicensed rPA vaccine in the stockpile when the licensed anthrax vaccine was already available. We cite the Food and Drug Administration's position that it would give preference to the licensed vaccine over the unlicensed vaccine. With regard to critical requirements, HHS acknowledged that critical requirements would change for different products. Therefore, HHS should have known the consequences of changing requirements for a fixed-price contract with a 2-year time limit.
4. We agree with HHS that it is not always possible to know the exact regulatory specifications for a product at the beginning of the procurement process. However, ASPR failed to recognize that changing requirements under a fixed-price procurement contract could significantly affect the finances and the 2-year delivery time line it established.
5. The acting director of ASPR told us that the principal deputy of ASPR had decided not to destroy the expired lots in case they were needed for use in an emergency. However, using the expired vaccine would violate the FDA rule. In response to the draft of this report, HHS now states that it is quarantining the expired lots until a decision can be made regarding disposal. We do not understand HHS's rationale for continuing to hold the vaccine in quarantine for nearly a year and the justification for the administrative expenses involved.
6. Although HHS and the Department of Defense (DOD) use different authorities to address BioThrax liability and funding issues, both authorities could apply to vaccines purchased by either DOD or HHS; consequently, indemnity does not appear to be an insurmountable obstacle for future procurements.
As indicated in our report, DOD and HHS should continue to explore the legal implications of different indemnity authorities and present a legislative proposal to Congress if they determine that a statutory change is required to establish a joint inventory.
7. Since, as ASPR acknowledges, it does not have a strategy to minimize waste, we calculated the potential $100 million annual wastage based on expiration dates of the current vaccine inventory. ASPR stated that the annual savings would be only up to $25 million but did not provide any basis for this estimate. However, according to DOD, in contract year 2006, it purchased BioThrax valued at about $55 million, more than double ASPR's estimate. A strategy to minimize waste in the stockpile should include not only integration of inventory based on a first-in, first-out principle but also reexamination of requirements derived from consequence modeling with regard to the size of the inventory. Such a strategy would result in savings closer to $100 million.
8. We did not mean to suggest that all expired products represent waste or lost investment. We clarified our definition of waste in the report. When there is a large-volume user for the stockpile product, not having an effective strategy to ensure that the stockpile product will be used constitutes waste. Thus, since DOD is a large user of BioThrax, unnecessary waste will result if ASPR does not make an effort to ensure that, to the extent possible, DOD uses the vaccine in the stockpile.
9. We did not question the legality of the contract award to VaxGen but rather the rationale underlying the contract's requirement for 25 million doses in 2 years.
10. ASPR officials told us that they did not have tools to assess product maturity at the time of the contract award and that they were guided by a sense of urgency. On the basis of these statements, we concluded that their assessment was subjective.
11. We disagree that the VaxGen Project BioShield award did not preempt other support for product development that was being provided to VaxGen through its National Institute of Allergy and Infectious Diseases contract. According to our analysis of the contract document and discussions with NIAID officials, funding under the development contract largely ceased once the procurement contract was awarded.
12. We clarified the report text to attribute to VaxGen officials the statement that the purchase of BioThrax for the stockpile as a stopgap measure raised the bar for the VaxGen vaccine.
13. Our draft report did not say that HHS changed the requirements for the VaxGen rPA vaccine. However, we have clarified the text to state that purchase of BioThrax for the stockpile raised the requirement for the use of the rPA anthrax vaccine.
14. We clarified the report text to indicate that neither FDA nor VaxGen understood the concept of use prior to January 2006.
15. We clarified the report text to indicate that ASPR officials told us that FDA would define "sufficient data" and the testing hurdles a product needed to overcome to be considered a "usable product."
16. See our response to comment 8.

The following is GAO's comment on the Department of Defense's letter dated October 3, 2007.
1. Although HHS and DOD use different authorities to address BioThrax liability, both authorities could apply to vaccines purchased by either DOD or HHS; consequently, indemnity does not appear to be an insurmountable obstacle for future procurements.
As indicated in our report, DOD and HHS should continue to explore the legal implications of different indemnity authorities and present a legislative proposal to Congress if they determine that a statutory change is required to establish a joint inventory.

The anthrax attacks in September and October 2001 highlighted the need to develop medical countermeasures. The Project BioShield Act of 2004 authorized the Department of Health and Human Services (HHS) to procure countermeasures for a Strategic National Stockpile. However, in December 2006, HHS terminated the contract for a recombinant protective antigen (rPA) anthrax vaccine because VaxGen failed to meet a critical contractual milestone. Also, supplies of the licensed BioThrax anthrax vaccine already in the stockpile will start expiring in 2008. GAO was asked to identify (1) factors contributing to the failure of the rPA vaccine contract and (2) issues associated with using BioThrax in the stockpile. GAO interviewed agency and industry officials, reviewed documents, and consulted with biodefense experts. Three major factors contributed to the failure of the first Project BioShield procurement effort for an rPA anthrax vaccine. First, HHS's Office of the Assistant Secretary for Preparedness and Response (ASPR) awarded the procurement contract to VaxGen, a small biotechnology firm, while VaxGen was still in the early stages of developing a vaccine and had not addressed many critical manufacturing issues. This award preempted critical development work on the vaccine. Also, the contract required VaxGen to deliver 25 million doses of the vaccine in 2 years, which would have been unrealistic even for a larger manufacturer. Second, VaxGen took unrealistic risks in accepting the contract terms. VaxGen officials told GAO that they accepted the contract despite significant risks due to (1) the aggressive delivery time line for the vaccine, (2) VaxGen's lack of in-house technical expertise--a condition exacerbated by the attrition of key company staff as the contract progressed--and (3) VaxGen's limited options for securing any additional funding needed. Third, important Food and Drug Administration (FDA) requirements regarding the type of data and testing required for the rPA anthrax vaccine to be eligible for use in an emergency were not known at the outset of the procurement contract. In addition, ASPR's anticipated use of the rPA anthrax vaccine was not articulated to all parties clearly enough and evolved over time. Finally, according to VaxGen, the purchase of BioThrax for the stockpile as a stopgap measure raised the bar for the VaxGen vaccine. All these factors created confusion over the acceptance criteria for VaxGen's product and significantly diminished VaxGen's ability to meet contract time lines. ASPR has announced its intention to issue another request for proposal for an rPA anthrax vaccine procurement but, along with other HHS components, has not analyzed lessons learned from the first contract's failure and may repeat earlier mistakes. According to industry experts, the lack of specific requirements is a cause of concern to the biotechnology companies that have invested significant resources in trying to meet government needs and now question whether the government can clearly define future procurement contract requirements. GAO identified two issues related to the use of BioThrax in the Strategic National Stockpile. First, ASPR lacks an effective strategy to minimize the waste of BioThrax.
Starting in 2008, several lots of BioThrax in the Strategic National Stockpile will begin to expire. As a result, over $100 million per year could be lost for the life of the vaccine currently in the stockpile. ASPR could minimize such potential waste by developing a single inventory system with the Department of Defense (DOD)--a high-volume user of BioThrax--with rotation based on a first-in, first-out principle. DOD and ASPR officials identified a number of obstacles to this type of rotation, which may require legislative action to overcome. Second, ASPR planned to use three lots of expired BioThrax vaccine in the stockpile in the event of an emergency. This would violate FDA rules, which prohibit using an expired vaccine, and could also undermine public confidence because the vaccine's potency could not be guaranteed.
The five principal emissions from coal power plants are carbon dioxide, SO2, NOx, particulate matter, and mercury. For the purposes of this report, we are focusing on power plants' emissions of SO2, NOx, and particulate matter since they, along with ozone, are the focus of a rule currently proposed by EPA—the Transport Rule—which seeks to limit the interstate transport of emissions of SO2 and NOx in order to abate violations of particulate matter and ozone NAAQS in downwind states. According to an EPA analysis, as of 2008, power plants emitted over 65 percent of SO2 emissions and almost 20 percent of NOx emissions, nationwide. These emissions impact local air quality, but they can also travel hundreds of miles to impact the air quality of downwind states. In developing the Transport Rule, EPA has found that emissions of SO2 and NOx from 31 eastern states and the District of Columbia prevent downwind states from meeting NAAQS for ozone and particulate matter. SO2 and NOx emissions contribute to the formation of fine particulate matter, and NOx emissions contribute to the formation of ozone, which can cause or aggravate respiratory illnesses. Ozone is formed by chemical reactions involving NOx; compounds in the atmosphere, known as volatile organic compounds; and sunlight. Cars and power plants that burn fossil fuels are contributors of NOx pollution. One dispersion technique that power plants have used to reduce pollutant concentrations in the local area is to release emissions through a tall stack. In 1970, there were only 2 stacks higher than 500 feet in the United States, but this number had increased to more than 180 by 1985. While constructing a tall stack is a dispersion technique that helps to reduce pollution concentrations in the local area, using tall stacks does not reduce total emissions that can potentially be transported to downwind states. The 1977 amendments to the Clean Air Act discouraged the use of dispersion techniques to help attain NAAQS. Specifically, section 123 prohibits states from counting the dispersion effects of stack heights in excess of a stack's GEP height when determining a source's emissions limitation. The Clean Air Act defines GEP as "the height necessary to insure that emissions from the stack do not result in excessive concentrations of any air pollutant in the immediate vicinity of the source as a result of atmospheric downwash, eddies, or wakes which may be created by the source itself, nearby structures, or nearby terrain obstacles." According to federal regulations, a stack's GEP height is the highest of 65 meters, measured from the ground-level elevation at the base of the stack; a formula based on the height and width of nearby structure(s) (height plus 1.5 times the width or height, whichever is lesser); or the height demonstrated by a fluid model or field study that ensures the emissions from a stack do not result in excessive concentrations of any air pollutant as a result of atmospheric downwash created by the source itself, nearby structures, or nearby terrain features. Downwash occurs when large buildings or local terrain distort wind patterns and an area of more turbulent air, known as a wake, forms. Emissions from a stack at a power plant can be drawn into this wake and brought down to the ground near the stack more quickly (see fig. 1). States issue air permits to major stationary sources of air pollution, such as power plants, and determine GEP for stacks when they set emissions limitations for these sources. Emissions limitations may be reset when plants undergo New Source Review.
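The formula method described above lends itself to a simple computation. The sketch below is illustrative only (the function name and example dimensions are ours, not drawn from the report or the regulation), but it captures the rule as stated: GEP height is the greater of 65 meters or the nearby structure's height plus 1.5 times the lesser of its height or projected width.

```python
def gep_height_m(structure_height_m: float, structure_width_m: float) -> float:
    """GEP stack height under the formula method described above.

    Returns the greater of (a) 65 meters, measured from the ground-level
    elevation at the base of the stack, or (b) H + 1.5L, where H is the
    height of the nearby structure and L is the lesser of its height or
    projected width. (A fluid model or field study can justify a different
    height; that case is not modeled here.)
    """
    lesser_dimension = min(structure_height_m, structure_width_m)
    formula_height = structure_height_m + 1.5 * lesser_dimension
    return max(65.0, formula_height)

# Example: a 60-meter (roughly 200-foot) boiler building that is at least
# 60 meters wide gives L = 60, so GEP = 60 + 1.5 * 60 = 150 meters
# (about 490 feet).
print(gep_height_m(60.0, 80.0))  # 150.0
```

Note that when the nearby structure is at least as wide as it is tall, the formula reduces to 2.5 times the structure's height, which is why a typical 200-foot boiler building yields a GEP height of roughly 500 feet. Emissions limitations, and with them the dispersion modeling that uses GEP, may be revisited when a plant undergoes New Source Review, described next.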
New Source Review is a preconstruction permitting program that requires a company that constructs a new facility or makes a major modification to an existing facility to meet new, more stringent emissions limitations based on the current state of pollution control technology. A stack's GEP height is used in air dispersion modeling that takes place when emissions limitations are developed for a source as part of the permitting process. Many sources contribute to levels of pollution that affect the ability of downwind states to attain and maintain compliance with NAAQS, and some of these pollutants may originate hundreds or thousands of miles from the areas where violations are detected. The Clean Air Act's "good neighbor provisions" under section 110 of the Act require states to prohibit emissions that significantly contribute to nonattainment or interfere with maintenance of NAAQS in downwind states or which will interfere with downwind states' ability to prevent significant deterioration of air quality. Section 126 of the Clean Air Act also allows a downwind state to petition EPA to determine that specific sources of air pollution in upwind states interfere with the downwind state's ability to protect air quality and for EPA to impose emissions limitations directly on these sources. As detailed in the timeline below, Congress granted EPA increased authority to address interstate transport of air pollution under the Clean Air Act, and EPA acted on this authority. 1977 amendments to the Clean Air Act. These amendments contained two provisions that focused on interstate transport of air pollution: the predecessor to the current good neighbor provision of section 110 of the Act, and section 126. These amendments also established the New Source Review program. 1990 amendments to the Clean Air Act. These amendments added the Acid Rain Program (Title IV) to the Clean Air Act, which created a cap-and-trade program with the goals of reducing annual SO2 emissions by 10 million tons from 1980 levels and reducing annual NOx emissions by 2 million tons from 1980 levels by the year 2000. 1998 NOx SIP Call. After concluding that NOx emissions from 22 states and the District of Columbia contributed to the nonattainment of NAAQS for ozone in downwind states, EPA required these states to amend their SIPs to reduce their NOx emissions. EPA took this regulatory action based on section 110 of the Clean Air Act. 2005 Clean Air Interstate Rule (CAIR). This regulation required SIP revisions in 28 states and the District of Columbia that were found to contribute significantly to nonattainment of NAAQS for fine particulate matter and ozone in downwind states. CAIR required reductions in SO2 and NOx emissions from 28 eastern states and the District of Columbia and included an option for states to meet these reductions through regional cap-and-trade programs. When the rule was finalized, EPA estimated it would annually reduce SO2 and NOx emissions by 3.8 million and 1.2 million tons, respectively, by 2015. The U.S. Court of Appeals for the District of Columbia Circuit remanded CAIR to EPA in 2008 because it found significant flaws in the approach EPA used to develop CAIR, but allowed the rule to remain in place while EPA develops a replacement rule. 2010 Transport Rule. EPA proposed this rule to replace CAIR; it aims to reduce emissions of SO2 and NOx from power plants. If finalized as written, the rule would require emissions of SO2 to decrease 71 percent over 2005 levels and emissions of NOx to decrease by 52 percent over 2005 levels by 2014.
As described above, EPA's efforts to address the interstate transport of air pollution from power plants have focused on reducing the total emissions of SO2 and NOx from these plants. Unlike tall stacks, pollution controls help to reduce the actual emissions from power plants by either reducing the formation of these emissions or capturing them after they are formed. At coal power plants, these controls are generally installed in either the boiler, where coal is burned, or the duct work that connects a boiler to the stack. A single power plant can use multiple boilers to generate electricity, and the emissions from multiple boilers can sometimes be connected to a single stack. Figure 2 shows some of the pollution controls that may be used at coal power plants: fabric filters or electrostatic precipitators (ESP) to control particulate matter, flue gas desulfurization (FGD) units—known as scrubbers—to control SO2 emissions, and selective catalytic reduction (SCR) or selective non-catalytic reduction (SNCR) units to control NOx emissions. The reduction in emissions from a coal power plant by the use of pollution controls can be substantial, as shown in table 1. The installation of pollution control equipment can also be expensive. According to a Massachusetts Institute of Technology study of coal power plants, it may cost anywhere from $215,000 per megawatt to $330,000 per megawatt to install controls at a coal power plant for particulate matter, SO2, and NOx. For a typical coal power plant with a capacity of 500 megawatts, this means that it could cost roughly $107 million (500 megawatts times $215,000 per megawatt) to install these controls at a newly built facility and up to $165 million (500 megawatts times $330,000 per megawatt) to retrofit these controls at an existing facility. Additionally, pollution controls can require additional energy to operate, known as an energy penalty. Based on our analysis of EIA data, which we updated with our survey results, we found that a total of 284 tall smokestacks were operating at 172 coal power plants in 34 states, as of December 31, 2010. While about half of the tall stacks began operating more than 30 years ago, there has been an increase in the number of tall stacks that have begun operating in the last 4 years, which several stakeholders attributed to the need for new stacks when retrofitting existing plants with pollution control equipment. As of December 31, 2010, we found a total of 284 tall stacks were operating at 172 coal power plants in the United States. These tall stacks account for about 35 percent of the 808 stacks operating at coal power plants in the United States, and they are generally located at larger power plants. Specifically, we found these stacks are associated with 64 percent of the coal generating capacity. We found that 207 tall stacks (73 percent) are between 500 and 699 feet tall and that 63 stacks (22 percent) are between 700 and 999 feet tall. The remaining 14 stacks (5 percent) are 1,000 feet tall or higher; the tallest stack at a coal power plant in the United States, at the Rockport Power Plant in Indiana, is 1,038 feet high. In figure 3, we show how a tall stack compares to the heights of other well-known structures. Thirty-five percent of the 284 tall stacks are concentrated in 5 states along the Ohio River Valley—Kentucky, Ohio, Indiana, Illinois, and Pennsylvania—at 59 coal power plants. Another 32 percent are located in Alabama, Missouri, West Virginia, Michigan, Georgia, Wyoming, Wisconsin, and Texas, while the remaining 33 percent of tall stacks are located across 21 other states.
Figure 4 shows the location of coal power plants with operating tall stacks. For counts of all tall stacks by state, see appendix II. Forty-six percent of the 284 tall stacks operating at coal power plants in the United States as of December 31, 2010, went into service before 1980. Another 28 percent went into service in the 1980s, 7 percent went into service in the 1990s, and 18 percent went into service since 2000. Of the stacks that went into service since 2000, the vast majority went into service in the last 4 years, as shown in figure 5. Stack height is one of several factors that contribute to the interstate transport of air pollution. While the use of pollution controls has increased in recent years at coal power plants, several boilers connected to tall stacks remain uncontrolled for certain pollutants. According to reports and stakeholders with expertise on this topic, tall stacks generally disperse pollutants over longer distances than shorter stacks and provide pollutants with more time to react in the atmosphere to form ozone or particulate matter. However, the interstate transport of air pollution is a complex process that involves several variables—such as total emissions from a stack, the temperature and velocity of the emissions, and weather—in addition to stack height. As a result, stakeholders had difficulty isolating the exact contribution of stack height to the interstate transport of air pollution, and we found limited research on this specific topic. For example, EPA staff involved in the modeling of interstate transport told us that it is difficult to determine the different impacts that stacks of varying heights have on the transport of air pollution. According to one atmospheric scientist we spoke with, the interstate transport of air pollution is a complex process and stack height represents just one variable in this process. Stakeholders struggled to identify the precise impact of tall stacks, due in part to the other factors that influence how high emissions from a stack will rise. The temperature and velocity of a stack's emissions, along with its height, contribute to what is known as an "effective stack height." Effective stack height takes into account not only the height at which emissions are released, but also how high the plume of emissions will rise, which is influenced by the temperature and velocity of these emissions. One atmospheric scientist told us the emissions from a shorter stack could rise higher than those from a taller stack, depending on the temperature and velocity of the emissions. Weather also plays a key role in the transport of air pollution. A study by the Northeast States for Coordinated Air Use Management (NESCAUM)—a group that represents state air agencies in the Northeast—described weather patterns that can contribute to high-ozone days in the Ozone Transport Region, which includes 12 states in the Mid-Atlantic and New England regions and the District of Columbia. These high-ozone days typically occur on hot summer days, when the sun helps transform NOx and volatile organic compounds into ozone. Wind speeds and wind direction also help to determine how emissions will travel. In the Mid-Atlantic United States, the wind generally blows from west to east during the day, and wind speeds are generally faster at higher elevations. The time of day can also influence the transport of air pollution.
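Returning briefly to the effective stack height concept discussed above: a standard textbook approximation can make it concrete. The Briggs plume-rise equation for buoyancy-dominated plumes (offered here purely as an illustration; it is not drawn from this report) expresses effective height as the physical stack height plus a plume rise term:

$h_e = h_s + \Delta h, \qquad \Delta h \approx 1.6\,\frac{F^{1/3}\,x^{2/3}}{u}, \qquad F = \frac{g\,v_s\,d^2}{4}\cdot\frac{T_s - T_a}{T_s},$

where $h_s$ is the physical stack height, $u$ is wind speed, $x$ is downwind distance, $d$ is the stack's exit diameter, $v_s$ and $T_s$ are the exit velocity and temperature of the stack gases, and $T_a$ is the ambient temperature. Because hotter and faster exhaust increases the buoyancy flux $F$, and thus the plume rise $\Delta h$, a shorter stack with hot, fast emissions can achieve a higher effective height than a taller stack with cool, slow emissions, consistent with the atmospheric scientist's observation above. Weather, including the nighttime conditions described next, then acts on the plume from this effective height.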
According to the NESCAUM report and researchers we spoke with, ozone can travel hundreds of miles at night with the help of high-speed winds known as the low-level jet. This phenomenon typically occurs at night, when an atmospheric inversion forms because the ground cools more quickly than the upper atmosphere. A boundary layer can form between these two air masses several hundred feet off the ground, which can allow the low-level jet to form and transport ozone and particulate matter with its high winds. As the atmosphere warms the following day, this boundary layer can break down and allow these transported emissions to mix downward and affect local air quality. Air dispersion models typically take into account stack height along with these other factors when predicting the transport of emissions from power plants. For example, EPA used the Comprehensive Air Quality Model with Extensions (CAMx) to conduct the modeling to support the development of the Transport Rule. CAMx is a type of photochemical grid model, which separates areas into grids and aims to predict the transport of emissions from sources that lie within these grids. Key inputs into this model include stack height, the velocity and temperature of emissions, and weather data. EPA staff involved in conducting this modeling for the Transport Rule said they use the CAMx model to predict the actual impacts of air emissions, and they have not used this model to estimate the specific impact of stack height on interstate transport. They reported that their modeling efforts in recent years have been done in support of CAIR and the Transport Rule and have been focused on modeling the regional impacts of reducing total air emissions. Several stakeholders we spoke with said that total emissions are a key contributor to interstate transport of air pollution, and the use of pollution controls at coal power plants is critical to reducing interstate transport of air pollution. Reducing the total emissions from a power plant influences how much pollution can react in the atmosphere to form ozone and particulate matter that can ultimately be transported. The use of pollution control equipment, particularly for SO2 and NOx emissions, has increased over time, largely in response to various changes in air regulations, according to stakeholders and reports we reviewed. According to EIA data, the generating capacity of power plants that is controlled by FGDs increased from about 87,000 megawatts in 1997 to about 140,000 megawatts in 2008. Since coal power plants had about 337,000 megawatts of generating capacity in 2008, this means that about 42 percent of the generating capacity was controlled by an FGD in 2008. Similarly, SCRs were installed at about 44,000 megawatts of capacity from 2004 through 2009, with about one-third of these installations occurring in 2009 alone, according to an EPA presentation on this topic. EPA and state officials, along with electric utility officials, told us that the increase in the use of these pollution controls is due to various air regulations, such as the Acid Rain Program and CAIR, which focused on reducing SO2 and NOx emissions. However, while we found that the use of pollution controls at coal power plants has increased in recent years, many boilers remain uncontrolled for certain pollutants, including several connected to tall stacks. For example, we found that 56 percent of the boilers attached to tall stacks do not have an FGD to control SO2 emissions.
Collectively, we found that these uncontrolled boilers accounted for 42 percent of the total generating capacity of boilers attached to tall stacks. Our findings on FGDs are similar to EPA data on all coal power plants. In 2009, EPA estimated that 50 percent of the generating capacity of coal power plants did not have FGDs. For NOx controls, we found that while about 90 percent of boilers attached to tall stacks have combustion controls in place to reduce the formation of NOx emissions, a majority of these boilers lack post-combustion controls that can help to reduce NOx emissions to a greater extent. Specifically, 63 percent of boilers connected to tall stacks do not have post-combustion controls for NOx, such as SCRs or SNCRs, which help reduce NOx emissions more than combustion controls alone. Collectively, we found that these boilers without post-combustion controls accounted for 54 percent of the total generating capacity of boilers attached to tall stacks. EPA data on all coal power plants show that 53 percent of the generating capacity for coal power plants did not have post-combustion controls for NOx emissions in place in 2009. Tall stacks that had uncontrolled SO2 emissions were generally attached to older boilers that went into service prior to 1980. We found that approximately 85 percent of boilers without FGDs that were attached to tall stacks went into service before 1980. Similarly, over 70 percent of the boilers without post-combustion controls for NOx went into service before 1980. Overall, we found that about 82 percent of the boilers that lacked both an FGD and post-combustion controls for NOx went into service before 1980. Some stakeholders attributed the lack of pollution controls on older boilers to less stringent standards that were applied at the time the boilers were constructed. As discussed above, companies that construct a new facility or make a major modification to an existing facility must meet new emissions limitations based on the current state of pollution control technology. Because pollution control technology has advanced, the standards have become more stringent over time, meaning that boilers constructed before 1980 would have had higher allowable emissions and less need to install controls than boilers constructed in 2010. Unlike our findings on FGDs and post-combustion controls for NOx emissions, we found that 100 percent of boilers attached to tall stacks were controlled for particulate matter. However, it is important to note that plants with uncontrolled SO2 and NOx emissions contribute to the formation of additional particulate matter in the atmosphere. We identified 48 tall stacks built since 1988 that states reported are subject to the GEP provisions of the Clean Air Act and for which states could provide GEP height information. Of these 48 stacks, we found that 17 exceed their GEP height, 19 are at their GEP height, and 12 are below their GEP height. We found that 15 of the 17 stacks built above GEP were replacement stacks that were built as part of the process of installing pollution control equipment. These stacks vary in the degree to which they exceed GEP height, ranging from less than 1 percent above GEP to about 46 percent above GEP, as shown in table 2. The other 2 stacks built above GEP exceed their GEP height by 2 percent or less. When we followed up with utility officials regarding why these stacks were built above GEP, they reported that a variety of factors can influence stack height decisions.
These factors included helping a plant's emissions clear local geographic features, such as valley walls. According to one state air protection agency, three stacks were built above GEP to provide further protection against downwash. Officials from two utilities said they built stacks above GEP at coal power plants to account for the impact of other structures, such as cooling towers, on the site. Other stakeholders said that utilities may be hesitant to lower stack heights at their facilities when replacing a stack because plant officials have experience with that stack height and its ability to help protect against downwash. An official from one company that builds stacks told us this practice has sometimes occurred because utilities do not want the moisture-rich emissions from the replacement stack to hasten the deterioration of the old stacks, which are usually left in place and must be maintained. In addition, this moisture can create large icicles on the older stacks, which can present a danger to staff working at the power plant. Other stakeholders highlighted factors that may play a role in making stack height decisions. Some federal and state officials reported that generally there is little incentive to build a stack above GEP because a facility will not receive dispersion credit for the stack's height above GEP. Other stakeholders acknowledged that a stack could be built above GEP for site-specific reasons, such as helping emissions clear nearby terrain features. Some of these officials also noted that cost was another factor considered when making stack height decisions, as it is generally more costly to build a higher stack. For example, one utility official told us that two replacement stacks that were recently built below their original heights could meet their emissions limitations with these lower stack heights because the utility was installing pollution control equipment and did not want to incur the additional cost of building a taller stack. We found that stacks built above GEP since 1988 generally were attached to boilers that had controls in place for SO2, NOx, and particulate matter, as shown in table 3. We found similar results for stacks that were built at or below their GEP heights. We were unable to obtain GEP height information for an additional 25 stacks that were built since 1988 for two reasons. First, some of these stacks replaced stacks that were exempt from the GEP regulations, according to state officials. Section 123 of the Clean Air Act exempts stack heights that were in existence on or before December 31, 1970, from the GEP regulations; because the exemption applies to stack heights rather than to stacks themselves, it covers both original and replacement stacks. Second, states did not have GEP information readily available for some stacks. According to state officials, they did not set new emissions limits at the time these replacement stacks were built because the stacks were part of pollution control projects and emissions from these plants did not increase. For example, one state reported that GEP could have been calculated decades earlier for the original stacks when emissions limitations were set, and officials were unable to locate this information in response to our request. According to EPA staff we spoke with about this issue, states are not required to conduct a GEP analysis in these instances.
While we were unable to obtain GEP information for these stacks, our analysis of the pollution controls installed at boilers connected to these stacks yielded results similar to those for stacks for which we did obtain GEP information. Specifically, all of these boilers had controls in place for SO2, NOx, and particulate matter. We provided a draft of this report to EPA and DOE for review and comment. Both EPA and DOE stated they had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Administrator of EPA, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To identify the number and location of smokestacks at coal power plants that were 500 feet or higher as of December 31, 2010, we analyzed data on power plants from the Department of Energy's (DOE) Energy Information Administration (EIA). We also used these data to determine when these stacks began operating. To determine the reliability of these data, we reviewed documentation from EIA, interviewed relevant officials who were involved in collecting and compiling the data, conducted electronic testing of the data, and determined that the data were sufficiently reliable for our purposes. Because the EIA data were collected in 2008, we contacted all 50 states and the District of Columbia to determine if they had tall stacks and developed and administered a survey to those 38 states with tall stacks to update the relevant EIA data and determine if any changes had taken place in the number or operating status of stacks since that time. We received e-mail addresses for each state from the Web site of the National Association of Clean Air Agencies, which represents air pollution control agencies in 53 states and territories, and developed a survey that we sent to respondents as an e-mail attachment. Prior to sending out this survey, we pretested the survey with officials from 2 states and revised some of the survey questions based on their input. We received responses to our survey from all 38 states, and we sent follow-up questions based on their survey responses to clarify certain responses or to ask for additional information. We updated the relevant EIA data with these survey results to include the most recent information available on tall stacks. We did not include in our count tall stacks that were used only as bypass stacks in times of maintenance or emergencies. State officials reported that bypass stacks are rarely used and would not be used at the same time as plants' fully operating stacks. Additionally, we defined multi-flue stacks—those with multiple flues running within a single casing—as one stack, as opposed to counting each flue as a separate stack. A state modeling official told us they consider multi-flue stacks as single stacks when conducting dispersion modeling. For the purposes of this report, we defined tall smokestacks to be those that were 500 feet or higher.
In our interviews with stakeholders, several told us they considered 500 feet to be a "tall" stack. Some stakeholders said that a typical boiler building at a coal power plant is about 200 feet high. Given that the original formula for good engineering practice (GEP) was 2.5 times the height of nearby structures, this would equal about 500 feet (2.5 times 200 feet). Other stakeholders reported that they considered a stack built above GEP to be "tall." To determine what is known about tall stacks' contribution to the interstate transport of air pollution, we reviewed reports from the Environmental Protection Agency (EPA) and academics and spoke with stakeholders with expertise on this topic. We conducted a literature search of engineering and other relevant journals on the topic of stack height and interstate transport of air pollution, and we reviewed the limited amount of literature we identified. The stakeholders we interviewed included EPA officials involved in modeling interstate transport of air pollution from power plants, officials from utilities and construction firms that design and build power plants, atmospheric scientists who conduct research on this topic, and state officials who are involved in permitting power plants and complying with federal regulations governing the interstate transport of air pollution. We also analyzed the EIA data and our survey results to determine the pollution control equipment installed at coal power plants with stacks 500 feet or higher. Specifically, we identified the control equipment that was associated with boilers that were attached to tall stacks. Pollution control equipment is not installed on stacks themselves; rather, it is installed in the boilers or the ductwork that connects the boiler to a stack. We also interviewed stakeholders to learn about trends in installing pollution control equipment and reviewed relevant reports on this topic. To determine the number of tall stacks that have been built above their GEP height since 1988, we used our survey to obtain information from state officials about the GEP height for these stacks. Twenty-two states had stacks 500 feet or higher that were built since 1988, and we received survey responses from all of them. In our survey, we also asked for reasons that a stack was built above GEP, when applicable. In cases where state officials could not provide specific reasons, we contacted the utilities that operate the plants with these stacks to obtain this information. Specifically, we contacted utilities that were involved in operating 15 of the 17 stacks that were built since 1988 and exceed GEP height, and we were able to interview utilities operating 12 of these stacks. We did not contact the utilities that operate the other 2 stacks, because the stacks are each less than 2 feet above GEP. We also interviewed companies that design and build power plants to ask about some of the general factors that are considered when deciding on stack height. We focused on stacks built since 1988, because that was the year that EPA's regulations for determining GEP height were largely affirmed by the U.S. Court of Appeals for the District of Columbia Circuit. EPA began the process of developing these regulations in the late 1970s, but the final regulations were not issued until 1985. The regulations were then challenged in court and were largely affirmed in 1988. Finally, we conducted site visits to two coal power plants in Ohio.
We selected this state because it contained several coal power plants with tall stacks, including some stacks that were built in 1988 or later. During these visits, we interviewed utility officials who operated these plants, along with state and local officials involved in permitting these plants. We conducted this work from July 2010 through May 2011 in accordance with all sections of GAO's quality assurance framework that are relevant to our objectives. This framework requires that we plan and perform the engagement to obtain sufficient, appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. Table 4 provides counts of the number of stacks 500 feet or higher—tall stacks—by state. In addition, the table provides information on the generating capacity of the boilers attached to these stacks. In addition to the individual named above, key contributors to this report include Barbara Patterson (Assistant Director), Scott Heacock, Beth Reed Fritts, and Jerome Sandau. Important assistance was also provided by Antoinette Capaccio, Cindy Gilbert, Alison O'Neill, Madhav Panwar, and Katherine Raheb.

Tall smokestacks--stacks of 500 feet or higher, which are primarily used at coal power plants--release air pollutants such as sulfur dioxide (SO2) and nitrogen oxides (NOx) high into the atmosphere to help limit the impact of these emissions on local air quality. Tall stacks can also increase the distance these pollutants travel in the atmosphere and harm air quality and the environment in downwind communities. The 1977 amendments to the Clean Air Act encourage the use of pollution control equipment over dispersion techniques, such as tall stacks, to meet national air standards. Section 123 of the Act does not limit stack height, but prohibits sources of emissions from using the dispersion effects of stack heights in excess of a stack's good engineering practice (GEP) height to meet emissions limitations. GAO was asked to report on (1) the number and location of tall stacks of 500 feet or higher at coal power plants and when they began operating; (2) what is known about such stacks' contribution to the interstate transport of air pollution and the pollution controls installed at plants with these stacks; and (3) the number of stacks that were built above GEP height since 1988 and the reasons for this. GAO analyzed Energy Information Administration (EIA) data on power plants, surveyed states with tall stacks, and interviewed experts on the transport of air pollution. GAO is not making recommendations in this report. The Environmental Protection Agency and the Department of Energy stated they had no comments on this report. According to analysis of EIA data, which were updated with GAO's survey results, a total of 284 tall smokestacks were operating at 172 coal power plants in 34 states, as of December 31, 2010. Of these stacks, 207 are 500 to 699 feet tall, 63 are 700 to 999 feet tall, and the remaining 14 are 1,000 feet tall or higher. About one-third of these stacks are concentrated in 5 states along the Ohio River Valley. While about half of tall stacks began operating more than 30 years ago, there has been an increase in the number of tall stacks that began operating in the last 4 years, which air and utility officials attributed to the need for new stacks when plants installed pollution control equipment.
Stack height is one of several factors that contribute to the interstate transport of air pollution. According to reports and stakeholders with expertise on this topic, tall stacks generally disperse pollutants over greater distances than shorter stacks and provide pollutants greater time to react in the atmosphere to form ozone and particulate matter. However, stakeholders had difficulty isolating the exact contribution of stack height to the transport of air pollution because this is a complex process that involves several other variables, including total emissions from a stack, the temperature and velocity of the emissions, and weather. The use of pollution controls, which are generally installed in boilers or the duct work that connects a boiler to a stack, has increased in recent years at coal power plants. However, GAO found that many boilers remain uncontrolled for certain pollutants, including several connected to tall stacks. For example, GAO found that 56 percent of boilers attached to tall stacks lacked scrubbers to control SO2 and 63 percent lacked post-combustion controls to capture NOx emissions. In general, GAO found that boilers without these controls tended to be older, with in-service dates prior to 1980. GAO identified 48 tall stacks built since 1988--when GEP regulations were largely affirmed in court--that states reported are subject to the GEP provisions of the Clean Air Act and for which states could provide GEP height information. Of these 48 stacks, 17 exceed their GEP height, 19 are at their GEP height, and 12 are below their GEP height. Section 123 of the Clean Air Act defines GEP as the height needed to prevent excessive downwash, a phenomenon that occurs when nearby structures disrupt airflow and produce high local concentrations of pollutants. Officials reported that a variety of factors can influence stack height decisions. For example, some utility officials reported that stacks were built above GEP to provide greater protection against downwash or to help a plant's emissions clear local geographic features, such as valley walls. GAO was unable to obtain GEP height information for an additional 25 stacks that were built since 1988 for two reasons: (1) some of these stacks were exempt from GEP regulations, and (2) states did not have GEP information readily available for some replacement stacks because the GEP calculation was sometimes made decades earlier and a recalculation was not required at the time the replacement stack was built.
The mission of IRS, a bureau within the Department of the Treasury, is to provide America's taxpayers top quality service by helping them understand and meet their tax responsibilities and by applying the federal tax laws with integrity and fairness to all. In carrying out its mission, IRS annually collects over $2 trillion in taxes from millions of individual taxpayers and numerous other types of taxpayers and manages the distribution of over $300 billion in refunds. To guide its future direction, the agency has two strategic goals: (1) deliver high quality and timely service to reduce taxpayer burden and encourage voluntary compliance; and (2) effectively enforce the law to ensure compliance with tax responsibilities and combat fraud. IRS has established seven overarching priorities to accomplish its mission: facilitate voluntary compliance by empowering taxpayers with secure and innovative services, tools, and support; understand non-compliant taxpayer behavior, and develop approaches to deter and change it; leverage and collaborate with external stakeholders; cultivate a well-equipped, diverse, skilled, and flexible workforce; select highest value work using data analytics and a robust feedback loop; drive more agility, efficiency, and effectiveness in IRS operations; and strengthen cyber defense and prevent identity theft. The mission of IRS's Information Technology organization is to deliver IT services and solutions that drive effective tax administration to ensure public confidence. It is led by the Chief Technology Officer, who reports to the Deputy Commissioner for Operations Support of the IRS. Several subordinate offices report to the Chief Technology Officer. Figure 1 shows the structure of IRS's Information Technology organization. IT plays a critical role in enabling IRS to carry out its mission and responsibilities. For example, the agency relies on information systems to process tax returns, account for tax revenues collected, send bills for taxes owed, issue refunds, assist in the selection of tax returns for audit, and provide telecommunications services for all business activities, including the public's toll-free access to tax information. For fiscal year 2016, IRS is pursuing 23 major and 114 non-major IT investments to carry out its mission. These investments generally support (1) day-to-day operations (which include operations and maintenance, as well as development, modernization, and enhancements to existing systems), and (2) modernization efforts in support of IRS's future goals. The day-to-day operations are primarily funded via the operations support appropriation account, user fees, and other supplemental funding. The modernization efforts are funded via the business systems modernization appropriation account. IRS expects to spend about $2.7 billion for IT, including $2.2 billion in appropriated funds, $391.9 million in user fees, and $108.2 million in other supplemental funding. Approximately $1.4 billion of IRS's IT funding for fiscal year 2016 supports the two operational investments (TSS and MSSS) and four development investments (FATCA, ACA, CADE 2, and RRP) that we selected for review. TSS supports IRS's network infrastructure services such as network equipment, video conference service, enterprise fax service, and voice service for over 85,000 IRS employees at about 1,000 IRS locations. According to IRS, the investment continues delivery of services and products to employees, which translates into service to taxpayers.
IRS allocated approximately $366.6 million to activities supporting the TSS investment. Table 1 identifies the fiscal year 2016 funding allocation for the TSS investment, as well as the types of activities being funded. MSSS provides for the design, development, and deployment of server, middleware, and large systems as well as enterprise storage infrastructures, including systems software products, databases, and operating systems for these platforms. For fiscal year 2016, IRS allocated approximately $454.2 million for activities supporting the MSSS investment. Table 2 identifies the fiscal year 2016 funding allocation for the MSSS investment, as well as the types of activities being funded. FATCA is intended to improve tax compliance by identifying U.S. taxpayers that attempt to shield or divert assets by depositing funds in foreign accounts. A law enacted in 2010 requires foreign financial institutions to report to the IRS information regarding financial accounts held by U.S. taxpayers or foreign entities in which U.S. taxpayers have a substantial ownership interest. IRS allocated $89.1 million to FATCA for fiscal year 2016. ACA encompasses the planning, development, and implementation of IT systems needed to support IRS's tax administration responsibilities associated with parts of the Patient Protection and Affordable Care Act. IRS allocated $311.2 million to ACA for fiscal year 2016. CADE 2 is intended to provide daily processing of taxpayer accounts. A major component of the program is a modernized database for all individual taxpayers that is intended to provide the foundation for more efficient and effective tax administration. In Transition State 2 of the initiative, the modernized database will become IRS's authoritative source for taxpayer account data, as it begins to address core financial material weakness requirements for individual taxpayer accounts. Existing financial reports will be modified to take into account the increased level of detail and accuracy of data in the database. CADE 2 data will also be made available for access by downstream systems such as the Integrated Data Retrieval System for online transaction processing by IRS customer service representatives. IRS allocated $129.9 million to CADE 2 for fiscal year 2016. RRP is intended to deliver an integrated and unified system that enhances IRS's capabilities to detect, resolve, and prevent criminal and civil tax noncompliance. In addition, it is intended to allow analysis and support of complex case processing requirements for compliance and criminal investigation programs during prosecution, revenue protection, accounts management, and taxpayer communications processes. IRS allocated $91.7 million to RRP for fiscal year 2016. Over the past several years, we have issued a series of reports that have identified opportunities for IRS to improve the management of its major IT investments. We reported in June 2012 that while IRS reported on the cost and schedule of its major IT investments and provided chief information officer ratings for them, the agency did not have a quantitative measure of scope—a measure that shows functionality delivered. We noted that having such a measure is a good practice as it provides information about whether an investment has delivered the functionality that was paid for. We recommended that IRS develop a quantitative measure of scope, at a minimum for its major IT investments, to have more complete information on the performance of these investments.
In December 2015, IRS officials told us that they were exploring options to report scope and proposed an option in a December 2015 quarterly report on IT to Congress. We examined the suitability of proposed solutions for a quantitative measure of scope as part of this review. Further, in April 2013 we reported that the majority of IRS's major IT investments were reportedly within 10 percent of cost and schedule estimates, and that eight major IT investments reported significant cost and/or schedule variances. We also reported that weaknesses existed, to varying degrees, in the reliability of reported cost and schedule variances and in the identification of key risks and mitigation strategies. As a result, we made recommendations for IRS to improve the reliability of reported cost and schedule information by addressing the identified weaknesses in future updates of estimates. We also recommended that IRS ensure projects consistently follow guidance for updating performance information 60 days after completion of an activity and develop and implement guidance that specifies best practices to consider when determining projected amounts. IRS agreed with three of our four recommendations and partially disagreed with the fourth recommendation. The agency specifically disagreed with the use of earned value management data as a best practice to determine projected cost and schedule amounts, stating that the technique was not part of IRS's current program management processes and the cost and burden to use it outweighed the value added. While we disagreed with IRS's view of earned value management because best practices have found that the value generally outweighs the cost and burden of implementing it, we provided it as one of several examples of practices that could be used to determine projected amounts. We also noted that implementing our recommendation would help improve the reliability of reported cost and schedule variance information, and that IRS had flexibility in determining which best practices to use to calculate projected amounts. Finally, our February 2015 report found that most of IRS's major IT investments reported meeting cost and schedule goals; however, selected investments experienced variances from initial cost, schedule, and scope plans that were not transparent in reports to Congress because IRS had yet to address our prior recommendations. Specifically, IRS had not addressed our recommendation to report on how delivered scope compares to what was planned, and also had not addressed guidance for determining projected cost and schedule amounts or the reporting of cumulative cost and schedule performance information.
A documented process for both operations support and modernization activities that is consistent with best practices would provide transparency into the process and greater assurance that it is consistently applied. IRS has identified eight priorities—referred to as repeatable priority groupings—for its operations support activities. Officials told us that these priorities evolved from lessons learned in using priorities established the prior year and noted that they will continue to be refined over time. For example, in fiscal year 2015, activities associated with the tax filing season were identified as IRS's top priority; however, in fiscal year 2016, IRS decided that infrastructure (i.e., telephones and computer servers) was essential in supporting tax processing and should thus be classified as IRS's top priority. Each of the priority groupings includes several supporting business activities associated with major and non-major investments that IRS allocates funding to. Examples of such business activities include enterprise video conferencing service and print support for taxpayer notices. These priorities and information related to these priorities are identified in order of importance, as determined by IRS, in table 3. IRS has also identified eight priority projects for modernization. These projects, as well as their descriptions and associated funding allocations, are identified in table 4. According to GAO's Information Technology Investment Management Framework, an organization should document policies and procedures for selecting new and reselecting ongoing IT investments. These policies and procedures should include criteria for making selection and prioritization decisions. A policy-driven, structured method for reselecting ongoing projects provides the organization's investment board with a common understanding of how ongoing projects will be reselected for continued funding. In addition, executives' funding decisions should be aligned with selection decisions. Specifically, the organization's executives have discretion in making the final funding decisions on IT proposals. However, their decisions should be based upon the analysis that has taken place in the previous activities. Further, the Office of Management and Budget's (OMB) Capital Programming Guide requires, among other things, that agencies have a disciplined capital programming process that addresses project prioritization and comparison of assets against one another to create a prioritized portfolio. In 2015, IRS developed and implemented a process known as the Portfolio Investment Planning process to prioritize its operations support activities. This process addresses (1) prioritization and comparison of IT assets against each other and (2) criteria for making selection and prioritization decisions. Further, senior IRS executives stated that the final funding decisions on IT proposals are based on IRS's prioritization process. IRS uses priority groupings it has defined as criteria for making prioritized selections. Specifically, one consideration in determining whether an activity (i.e., a request for funding) will be selected is the extent to which it supports any of the eight priority groupings. If the activity is found to support one of the eight priorities, it is further assigned one of four priority levels: must do, high, medium, or low. IRS has defined the criteria that must be met in order to classify a funding activity at a particular priority level.
Table 5 provides an example of the criteria used to make these decisions for the legislative provisions for the FATCA and ACA priority group. IRS prioritizes and compares IT assets against each other. Specifically, IRS business units identify line item activities for which they are requesting funding. For each activity, business units address, among other things, placement within IRS's established priorities; proposed high-level capabilities and a cost estimate; a 1-year usable segment; and the date funding is needed and the subsequent mitigation strategy if funding is not received by the specified date. Further, several meetings are held to review requested funding activities. According to IRS, the purpose of these meetings is to provide a cross-organization review and evaluation of IT-related demands. Stakeholders include Associate Chief Information Officers, business unit representatives, and staff from IRS's IT Financial Management Service. Finally, IRS senior executives stated that the agency's final funding decisions on IT proposals are based on IRS's prioritization process. According to these officials, when the agency receives its appropriation, it evaluates prioritized activities—starting with the highest priority demands—until the total estimate of appropriated funding is allocated. Officials have discussions relative to the items that will not be funded and then engage the Office of the Chief Financial Officer to determine the extent to which user fees and other sources of funding are available to support priorities that exceed the appropriated amount. Prioritized activities, which have been allocated funding for the upcoming fiscal year, are presented to the Chief Technology Officer for approval. IRS's Senior Executive Team approves the Chief Technology Officer's funding recommendations and submits the recommendations to the Commissioner and Deputy Commissioners for final funding approval. Despite these strengths, IRS has not fully documented its process for prioritizing operations support activities. Specifically, while several documents describe aspects of the operations support prioritization process, including the criteria used and the meetings to review and evaluate IT-related demands, none fully describe the procedures associated with the process. IRS officials stated that this is because the process is relatively new and not yet stabilized. IRS officials who are stakeholders in this process stated that documentation would have reduced the uncertainty they faced during implementation and would have helped them to better prepare the required data for the process. IRS senior executives stated they plan to fully document this process; however, they did not identify a time frame for when this would be done. Fully documenting IRS's portfolio investment process for operational activities would help ensure consistent implementation of the process by all stakeholders and provide transparency regarding how such prioritization decisions are made. In contrast with operations support, IRS does not have a structured process for prioritizing funding among its modernization investments. Specifically, IRS officials stated that discussions are held to determine the modernization efforts that are the highest priority to meet IRS's future state vision and technology roadmap. Officials reported that staffing resources and lifecycle stage are considered, but there are no formal criteria for making final determinations.
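For illustration only, the operations support selection logic described above (fund the highest-priority demands first until the appropriation is exhausted, then consider user fees and other sources for what remains) can be sketched in a few lines of code. The class and function names below are ours, and the four levels follow the must do/high/medium/low scheme in the report; this is a simplified sketch, not IRS's actual tooling.

```python
from typing import NamedTuple

class Activity(NamedTuple):
    name: str
    priority_group: int  # 1 (highest) through 8, per the repeatable priority groupings
    priority_level: int  # 1 = must do, 2 = high, 3 = medium, 4 = low
    cost: float          # estimated cost of the funding request, in dollars

def allocate(activities: list[Activity], appropriation: float):
    """Fund activities in priority order until the appropriation runs out."""
    funded, unfunded = [], []
    remaining = appropriation
    # Evaluate the highest-priority demands first: group, then level within group.
    for activity in sorted(activities, key=lambda a: (a.priority_group, a.priority_level)):
        if activity.cost <= remaining:
            funded.append(activity)
            remaining -= activity.cost
        else:
            # Candidates for user fees or other supplemental funding sources.
            unfunded.append(activity)
    return funded, unfunded
```

A structured process of this kind makes the basis for each funding decision explicit, which is the transparency the best practices call for; as the report notes next, no comparable structured process exists on the modernization side.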
Senior IRS officials stated that they do not have a structured process for selecting and prioritizing business systems modernization activities because the projects are set and there are fewer competing activities than for operations support. While there may be fewer competing activities, a structured, albeit simpler, process that is documented and consistent with best practices would provide transparency into IRS's needs and priorities for appropriated funds. Such a process would better assist Congress and other decision makers in carrying out their oversight responsibilities. Of the six selected investments in our review, two development investments—FATCA and RRP—performed under cost, with varying schedule performance, and delivered most of the scope that was planned; however, performance information for these investments could be improved by implementing best practices for determining actual work performed. For portions of the two other development investments (CADE 2 and ACA) for which performance information was available, IRS reported completing work under planned cost and on time. However, neither investment reported information on planned versus actual delivery of scope, in accordance with best practices. Further, ACA did not report timely information on planned versus actual costs. Finally, one of the two investments in operations and maintenance (MSSS) met all operational performance goals, while the other (TSS) met six of eight goals. Best practices highlight the importance of timely reporting on performance relative to cost, schedule, and scope (both planned and actual). According to these practices, one way to measure the benefits of development work is to approximate them by evaluating earned value—a measure of the amount of planned work actually performed in relation to the funds expended. IRS reported metrics for FATCA and RRP, which allowed us to determine these investments' performance. The agency did not use such metrics or consistently develop planned and actual cost, schedule, and scope information for all CADE 2 and ACA projects and activities that were completed or ongoing during fiscal year 2015 and the first quarter of fiscal year 2016. As a result, we could only determine the performance of portions of these investments. FATCA and RRP: During fiscal year 2015 and the first quarter of fiscal year 2016, IRS reported quarterly cost, schedule, and scope performance information for each of the FATCA and RRP projects it was working on. Specifically, it reported metrics for these investments via its Investment Performance Tool. Table 6 summarizes the performance of the FATCA and RRP investments (see appendix II for detailed analyses). As shown in table 6, FATCA and RRP performed under cost, with varying schedule performance, and delivered most of the scope that was planned. Specifically, IRS was developing 10 projects to support the FATCA investment during fiscal year 2015 and the first quarter of fiscal year 2016. IRS reported completing work at $12.4 million less than budgeted and delivering 91.7 percent of planned scope with an 8 percent schedule overrun for these projects. IRS stated that the reasons for these variances include, among other things, issues with the requirements management process; an overestimation of costs; and a reduction in the amount of work completed versus what was planned.
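To make the earned value arithmetic behind figures like these concrete, the following is a minimal sketch of the generic variance calculations suggested by the best practices cited above. It is illustrative only: the formulas are the standard earned value definitions rather than the Investment Performance Tool's own calculations, and the dollar amounts are hypothetical values chosen to mirror the kind of results reported for FATCA (work completed under budget, with a schedule overrun).

```python
# Illustrative sketch (not the Investment Performance Tool): basic earned value
# arithmetic for cost variance, schedule variance, and scope delivered.

def earned_value_metrics(planned_value: float, earned_value: float, actual_cost: float):
    """Return (cost variance, schedule variance %, scope delivered %).

    planned_value: budgeted cost of work scheduled to date
    earned_value:  budgeted cost of work actually performed to date
    actual_cost:   funds actually expended to date
    """
    cost_variance = earned_value - actual_cost  # > 0 means work done under cost
    schedule_variance_pct = 100 * (earned_value - planned_value) / planned_value
    scope_delivered_pct = 100 * earned_value / planned_value
    return cost_variance, schedule_variance_pct, scope_delivered_pct

# Hypothetical example: $150M of work planned to date, $137.5M of it actually
# performed, at an actual cost of $125.1M.
cv, sv_pct, scope_pct = earned_value_metrics(150.0, 137.5, 125.1)
print(f"Cost variance: ${cv:.1f}M")         # 12.4 -> work done $12.4M under budget
print(f"Schedule variance: {sv_pct:.1f}%")  # -8.3 -> behind schedule
print(f"Scope delivered: {scope_pct:.1f}%") # 91.7 -> 91.7% of planned scope
```

Under these definitions, a positive cost variance means completed work cost less than budgeted, and a negative schedule variance means less work was performed than planned for the period—consistent with how the variance figures reported in appendix II are read.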
IRS was developing three projects to support the RRP investment during fiscal year 2015 and the first quarter of fiscal year 2016. IRS reported completing work at $24.5 million less than budgeted and delivering 99.9 percent of planned scope with a minor schedule overrun for these projects. IRS stated that the reasons for these variances include, among other things, overestimation of costs (including IRS labor) and unplanned work that needed to be completed. While the scope metric used for FATCA and RRP provides an indication of performance, this metric would be more reliable if it incorporated best practices for determining the amount of work completed for all activities. Specifically, IRS uses a level of effort method beyond the amount generally accepted by best practices to determine the amount of work completed by its own staff. Our Cost Estimating and Assessment Guide states that the level of effort method should be used sparingly (15 percent of the budget or less); however, the work performed by IRS staff ranged from 22 to 100 percent of the work completed for the FATCA and RRP projects that were ongoing during the time frame of our review. IRS officials stated that measuring the value of government work is a vague concept to pursue. Nevertheless, revising the method for determining the amount of work completed by IRS staff for these investments would improve the reliability of the performance information. CADE 2 and ACA: For the CADE 2 projects that were completed during fiscal year 2015 and the first quarter of fiscal year 2016, IRS reported that CADE 2 performed on time and $1.7 million under planned cost. According to IRS, the positive cost variance for the CADE 2 investment is the result of overestimation of costs and the ability to reuse existing code. For the ACA activities that reported actual costs during fiscal year 2015 and the first quarter of fiscal year 2016, IRS reported that ACA performed on time and $10.3 million under planned costs. IRS stated that this variance was primarily due to an overestimation of the labor needed to complete the planned work. Table 7 shows the reported cost and schedule performance for CADE 2 and ACA. With respect to CADE 2, IRS does not report timely information on planned versus actual delivery of scope. Specifically, a senior CADE 2 program official stated that, due to the nature of the methodology being used to implement the projects, progress in delivering planned scope cannot be determined until the end—after the testing phase. CADE 2 projects can be 16 to 60 months long. We requested information from IRS regarding delivery of planned scope for those projects that were completed during the time frame of our review; however, IRS was unable to provide this information. Regarding ACA, IRS does not report timely cost or scope information. A senior IRS official stated that the investment is being developed using an iterative approach, the goal of which is to deliver functionality in short increments. However, the agency does not report actual costs for the activities comprising the projects until the activities are completed; this delay in reporting could be as long as 9 months. Instead, ACA calculates a cost projection, which provides an estimate of the cost to complete rather than the cost of work completed—an approach in which we have previously identified weaknesses. In addition, IRS only provided information on delivery of planned scope for one of the ACA projects it was developing during the time frame of our review.
Reporting of performance for the CADE 2 and ACA investments could be improved by incorporating best practices for timely reporting of cost, schedule, and scope performance information. As a result of the lack of timely and complete performance information, Congress and other external parties do not have pertinent information about CADE 2 and ACA with which to make oversight decisions. According to OMB's Fiscal Year 2016 Capital Planning Guidance, ongoing performance of operational investments is monitored to ensure the investments are meeting the needs of the agency, delivering expected value, and/or are modernized and replaced consistent with the agency's enterprise architecture. To this end, OMB requires agencies to report on at least five operational metrics for major IT investments, and agencies are specifically required to report planned and actual operational performance. The two operations and maintenance investments in our review reported on operational performance metrics, as required. MSSS met its five operational performance goals during fiscal year 2015; however, TSS consistently underperformed on two of its eight metrics. Table 9 identifies the MSSS operational performance metrics, their descriptions, and the performance against these metrics during fiscal year 2015. IRS reported planned and actual performance for eight operational performance metrics for TSS during fiscal year 2015. However, as previously mentioned, TSS consistently missed operational performance goals for two of the eight metrics. The two TSS metrics that were not met illustrate persistent challenges in meeting goals for deploying new telecommunications capabilities. Specifically, IRS missed every monthly target in fiscal year 2015 for deploying voice, video, and data technologies. As a result, TSS did not deploy such technologies to approximately 4,300 users who were originally included in the planned deployment. According to IRS officials, the operational performance goals for the two metrics that were not met should have been updated to better reflect the limited funding the agency intended to allocate to these activities. Table 10 identifies the TSS operational performance metrics, their descriptions, and the performance against these metrics during fiscal year 2015. While IRS has developed a process for prioritizing funding for operations support activities that adheres to best practices, it is not fully documented. Further, IRS has not developed a priority-setting process for modernization activities, to which the agency allocated nearly $300 million for fiscal year 2016. Until IRS documents its process for operations support activities and develops a process for modernization activities, the agency will lack the transparency needed by Congress and others to assist in carrying out their oversight responsibilities. IRS has developed performance metrics for two investments—FATCA and RRP—which include a measure of progress in delivering scope, a measure we have been reporting on and recommending IRS address since 2012. While these metrics represent an important step, their reliability could be improved by incorporating best practices for measuring the work performed by IRS staff, including using the level of effort measure sparingly. In addition, only partial performance information was available for CADE 2 and ACA because IRS did not use the metrics it is positioned to develop for these investments or consistently maintain cost, schedule, and scope information for them.
Continued efforts in this area would substantially improve the performance reporting for the CADE 2 and ACA investments, and potentially for all major development efforts. To help IRS improve its process for determining IT funding priorities and to provide timely information on the progress of its investments, we recommend that the Commissioner of IRS direct the Chief Technology Officer to take the following four actions: document IRS's process for selecting and prioritizing operations support activities; establish, document, and implement policies and procedures for selecting new and reselecting ongoing business systems modernization activities, consistent with IRS's process for prioritizing operations support activities, which addresses (1) prioritization and comparison of IT assets against each other, (2) criteria for making selection and prioritization decisions, and (3) ensuring IRS executives' final funding decisions on IT proposals are based on IRS's prioritization process; modify existing processes for FATCA and RRP for measuring work performed by IRS staff to incorporate best practices, including accounting for actual work performed and using the level of effort measure sparingly; and report on actual costs and scope delivery at least quarterly for CADE 2 and ACA. For these investments, IRS should develop metrics similar to those for FATCA and RRP. We provided a draft of this product to IRS for comment. In its written comments, reproduced in appendix III, IRS agreed with two recommendations, neither agreed nor disagreed with one, and disagreed with one. Specifically, IRS agreed with our recommendations to better document its prioritization process for operations support activities and extend that process to its business systems modernization activities. With respect to our recommendation to report on actual costs and scope delivery at least quarterly for CADE 2 and ACA, IRS neither agreed nor disagreed, but noted that it is continuing to try to improve its processes for reporting investment performance. Regarding our recommendation to modify existing processes for FATCA and RRP for measuring work performed by IRS staff to incorporate best practices, including accounting for actual work performed and using the level of effort measure sparingly, IRS disagreed, stating that modifying the use of the level of effort measure would equate to a certified earned value management system, which would impose an immense burden on IRS's programs on various fronts that would outweigh the value provided. However, we did not specify the use of an earned value management system in our report and believe other methods could be used to more reliably measure work performed. As noted in our report, 22 to 100 percent of the work for selected projects was performed by IRS staff. As a result, we believe that it is a reasonable expectation for IRS to reliably determine the actual work completed, as opposed to assuming that work is always completed as planned. Accordingly, we maintain that our recommendation is warranted. We are sending copies of this report to interested congressional committees, the Commissioner of IRS, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) describe the Internal Revenue Service's (IRS) current information technology (IT) investment priorities and assess IRS's process for determining these priorities, and (2) determine IRS's progress in implementing key IT investments. To address our first objective, we reviewed documentation, such as IRS's fiscal year 2016 Business Systems Modernization Operating Plan, as well as financial reports to determine IRS's IT funding priorities and funding allocations. In addition, we reviewed artifacts from IRS's Portfolio Investment Planning process, such as slide decks describing key stages of the process, memorandums distributed to stakeholders, prioritized listings of investment activities, and criteria for establishing priorities, to identify and describe IRS's process for determining its IT investment priorities. Further, we interviewed officials in IRS's Office of Strategy and Planning, as well as stakeholders of the Portfolio Investment Planning process from IRS business units. We then analyzed IRS's processes against best practices in our IT Investment Management Framework and the Office of Management and Budget's Capital Programming Guide to determine the extent to which the processes met best practices and requirements. Lastly, we met with officials at the Department of the Treasury who are responsible for IT capital planning, including the Treasury Chief Information Officer, to determine the department's role in IRS's process for prioritizing IT funding. For our second objective, we analyzed the performance of four key development investments—Customer Account Data Engine 2 (CADE 2), Return Review Program (RRP), Foreign Account Tax Compliance Act (FATCA), and the Affordable Care Act Administration (ACA). Further, we analyzed two key operational investments—Telecommunications Systems and Support (TSS) and Mainframes and Servers Services and Support (MSSS). We chose these investments because they represented IRS's most significant expenditures on development and operations for fiscal year 2015 ($496.5 million and $777.8 million, respectively). A tailored approach was necessary for analyzing the development investments, given variation in the types and availability of performance information for these investments. To determine the progress in implementing FATCA and RRP, we compiled and analyzed quarterly output from IRS's Investment Performance Tool for the period of fiscal year 2015 through the first quarter of fiscal year 2016. IRS does not consider this tool to be a formal Earned Value Management System. As a result, we did not evaluate the extent to which the tool was compliant with the American National Standards Institute's guidelines for an Earned Value Management System. For CADE 2, we analyzed IRS's quarterly reporting of planned and actual costs, as well as requirements reports and schedule reporting. For ACA, we analyzed IRS's financial reporting via the ACA business case submissions, as well as performance reporting to management and schedule reporting. In addition, we held multiple meetings with IRS officials, including officials in the CADE 2, FATCA, ACA, and RRP program offices. To determine the progress in implementing TSS and MSSS, we reviewed operational performance information reported for the selected investments from October 2014 to September 2015; this information included, where reported, the performance target and actual results for each metric.
In addition, we reviewed documentation describing the performance metrics and interviewed IRS officials regarding the process for reporting such metrics. To determine the reliability of data used for our review, we obtained and reviewed IRS's guidance for its Investment Performance Tool, which identifies, among other things, how data are to be entered within this tool, sources of such data, and explanations of the methods used to calculate performance metrics generated from the tool. Further, we held meetings with officials responsible for overseeing the use of IRS's Investment Performance Tool. In addition, we relied on extensive work we previously completed on IRS's financial management system for relevant data used for this review. In determining the reliability of the data supporting this review, we determined that data regarding the delivery of planned scope for the FATCA and RRP investments could be made more reliable by incorporating best practices. While these data were sufficiently reliable for our purposes, we made recommendations to improve their reliability. We conducted this performance audit from September 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix illustrates the potential for reporting complete performance information via IRS's Investment Performance Tool. Specifically, the following tables provide a detailed evaluation of cost, schedule, and scope performance for Return Review Program and Foreign Account Tax Compliance Act projects that were being developed by IRS during fiscal year 2015 and the first quarter of fiscal year 2016. IRS reported working on three projects in support of the Return Review Program during fiscal year 2015 and the first quarter of fiscal year 2016. The following tables identify the performance information reported via IRS's Investment Performance Tool; positive cost variances indicate that the project was performing under planned cost, and positive schedule variances indicate that the project was performing ahead of schedule. IRS reported working on 10 projects in support of the Foreign Account Tax Compliance Act during fiscal year 2015 and the first quarter of fiscal year 2016. The following tables identify the performance information reported via IRS's Investment Performance Tool; positive cost variances indicate that the project was performing under planned cost, and positive schedule variances indicate that the project was performing ahead of schedule. In addition to the individual named above, the following staff made key contributions to this report: Sabine Paul (Assistant Director), Bradley Roach (Analyst in Charge), Rebecca Eyler, Charles Hubbard III, Paul Middleton, Karl Seifert, and Marshall Williams, Jr.

IRS relies extensively on IT systems to annually collect more than $2 trillion in taxes, distribute more than $300 billion in refunds, and carry out its mission of providing service to America's taxpayers in meeting their tax obligations. For fiscal year 2016, IRS planned to spend approximately $2.7 billion for IT investments.
Given the size and significance of these expenditures, it is important that Congress be provided information on agency funding priorities, the process for determining these priorities, and progress in completing key IT investments. Accordingly, GAO's objectives were to (1) describe IRS's current IT investment priorities and assess IRS's process for determining these priorities, and (2) determine IRS's progress in implementing key IT investments. To do so, GAO analyzed IRS's process for determining its fiscal year 2016 funding priorities, interviewed program officials, and analyzed performance information for six selected investments for fiscal year 2015 and the first quarter of 2016. The Internal Revenue Service (IRS) has developed information technology (IT) investment priorities for fiscal year 2016, which support two types of activities—operations and modernization. For example, it has developed priority groups for operations, such as (1) critical business operations, infrastructure operations, and maintenance; and (2) delivery of essential tax administration/taxpayer services. It has identified priorities for modernization, such as web applications, to help reach IRS's future state vision. However, while IRS has developed a structured process for allocating funding to its operations activities consistent with best practices, it has not fully documented this process. IRS officials stated this is because the process is relatively new and not yet stabilized. In addition, IRS does not have a structured process for its modernization activities because, according to officials, there are fewer competing activities than for operations activities. Fully documenting a process for both operations support and modernization activities that is consistent with best practices would provide transparency and greater assurance it is consistently applied. Of the six investments GAO reviewed, two investments—Foreign Account Tax Compliance Act and Return Review Program—provided complete and timely performance information for GAO's analyses. These investments performed under cost, with varying schedule performance, and delivered most planned scope (see table). However, IRS did not always use best practices for determining scope delivered. Specifically, IRS used a method inconsistent with best practices for determining the amount of work completed by its own staff. Two other investments reported completing portions of their work on time and $1.7 million under planned costs (for the Customer Account Data Engine 2), and on time and $10.3 million under planned costs (for Affordable Care Act Administration). However, neither investment reported information on planned versus actual delivery of scope in accordance with best practices. The remaining two investments—Mainframes and Servers Services and Support and Telecommunications Systems and Support—generally met performance goals. GAO is recommending that IRS develop and document its processes for prioritizing IT funding and improve the calculation and reporting of investment performance information. IRS agreed with two recommendations regarding its prioritization processes, disagreed with one related to the calculation of performance information, and did not comment on one recommendation. GAO maintains that all of the recommendations are warranted.
Aquatic invasive species can be found in all U.S. states and territories. They can enter and travel in aquatic habitats by several common pathways, including through the discharge of ships' ballast water; hull fouling, such as barnacle growth, on commercial vessels and recreational boats; and accidental or intentional release of organisms into aquatic habitats through aquaculture, bait, aquaria (fish tanks), or the pet trade. Once established in a particular location, an aquatic invasive species can spread to other locations and ecosystems. Figure 1 is an interactive map of the United States with some examples of aquatic invasive species and their known locations (i.e., reported presence of a species) as well as common pathways of invasion—these examples do not represent all types of aquatic invasive species or pathways, but rather serve as illustrative examples (see app. II for a printable version). Scientists and officials from several federal agencies said that the presence and impacts of aquatic invasive species are growing and are likely to continue to grow, in part because the warming of ocean waters and the opening of shipping channels through the Arctic allow new species to potentially thrive in habitats that were previously too cold or inaccessible. The Task Force, created by the 1990 Act, is co-chaired by the U.S. Fish and Wildlife Service (FWS) and NOAA. FWS provides funding for the administration of the Task Force, including conducting annual meetings, publishing Federal Register notices, and supporting an Executive Secretary and other FWS staff who work as regional coordinators. To implement its aquatic invasive species program, the Task Force relies on its 13 member agencies—each of which has a different set of responsibilities related to aquatic invasive species, based on their overall mission and areas of programmatic responsibility (see table 1). These member agencies conduct aquatic invasive species activities and commit resources to achieve the goals of the aquatic invasive species program. According to the Task Force's 1994 program overview, implementation of the program is a cooperative effort that will build on and fill gaps in existing activities and programs, and individual agencies will implement the program in line with their specific authorities, priorities, expertise, and funding. In addition, the Task Force is advised by six regional panels—consisting of representatives of state, tribal, and nongovernmental organizations, commercial interests, and neighboring countries—that help identify regional priorities and coordinate regional activities. Some funding is provided to each regional panel as well as to state governments and other entities to support implementation of species- or region-specific aquatic invasive species management plans and other activities. Together, these federal, state, and nonfederal agencies and organizations work to prevent and control aquatic invasive species and implement the 1990 Act. Activities to address aquatic invasive species can be categorized using the seven general activity categories developed by the National Invasive Species Council. These categories reflect common activities agencies conduct along the continuum of an invasion of a species, from preventing the arrival or spread of an invading species to controlling or eradicating that species from the ecosystem. Table 2 describes each activity category.
Preventing the introduction of aquatic invasive species into ecosystems is generally the most effective means of avoiding their establishment and spread, according to numerous academic reports, as well as the Task Force and several of its member agencies. According to a 2006 study, the difficulties and expense of reversing biological invasions mean investment in prevention is likely to be the most successful and cost-effective response to biological invasions. Further, eradication (the elimination of an invading species from the ecosystem) and control (limiting an invasive species to a specific ecosystem) become increasingly difficult and costly as a species becomes established and spreads, as shown in figure 2. Task Force member agencies estimated expending an average of about $260 million annually for fiscal years 2012 through 2014 to address aquatic invasive species. Several of the member agencies identified challenges and limitations associated with the expenditure information they provided in response to our questionnaire. As a result, the information reported by Task Force member agencies on annual expenditures through our questionnaire generally reflects the agencies' best estimates, rather than actual expenditures. Table 3 provides the estimated annual expenditures for each Task Force member agency during fiscal years 2012 through 2014. Based on information reported through our questionnaire, estimated expenditures by Task Force member agencies for fiscal year 2014 ranged from a high of about $149 million by the U.S. Army Corps of Engineers (Corps) to a low of $70,000 by the Bureau of Land Management. Specifically, the Corps reported that the majority of its estimated annual expenditures were for controlling and managing existing aquatic invasive species at multiple projects it manages and that these expenditures mostly came from the respective projects' operations and maintenance funding. The Bureau of Land Management's estimates for fiscal year 2014 comprised the annual cost to develop and place aquatic invasive species awareness advertisements in print materials focusing on outdoor activities such as hunting, fishing, and boating. Also, for fiscal year 2014, the Bureau of Land Management reported that it did not have funding to provide to its state offices to coordinate or carry out aquatic invasive species activities in their local areas, as it did in fiscal years 2012 and 2013. Estimates for Task Force member agencies generally reflected a variety of activities undertaken or funded by the respective agency spanning multiple species and regions within their areas of programmatic responsibility. In contrast, estimates for some agencies reflected efforts specific to a particular region or activity. For example, the Environmental Protection Agency (EPA) reported that its estimates mostly reflected expenditures of funding transferred to other agencies to carry out activities in support of the Great Lakes Restoration Initiative—a program launched in 2010 to protect and restore the Great Lakes ecosystem. One of the Initiative's main focus areas includes prioritizing efforts to prevent the introduction of new invasive species into the Great Lakes. In responding to our questionnaire, several of the Task Force member agencies identified challenges and limitations in collecting information on how much they estimated expending to address aquatic invasive species. These included the following: Expenditures on aquatic invasive species activities are not specifically tracked.
Seven of the 13 Task Force member agencies reported that their budget structures and financial accounting systems were not designed to specifically track expenditures on aquatic invasive species activities. For instance, the U.S. Forest Service reported that many aquatic invasive species-related activities are conducted throughout the agency, but the agency's program management and financial accounting systems do not separately track aquatic invasive species expenditures. Specifically, U.S. Forest Service officials said they could not identify the portion of funding expended directly for aquatic invasive species because these activities were often integrated into larger projects—such as inspecting and cleaning equipment used in fighting wildfires. For example, the agency has developed specific protocols to inspect, assess, and decontaminate equipment, such as the inside of a fire pump, to help make sure it is clear of any invasive algae or mussels that may be unintentionally transferred to a new watershed when moving water between areas to fight fires. U.S. Forest Service officials further explained that this is one step of many in cleaning and preparing the equipment for its next use, and its management and financial accounting systems are not set up to capture or break out activities to this level of detail. Similarly, the Bureau of Reclamation reported that expenditures for aquatic invasive species activities at its water projects—such as clearing water control structures to maintain water delivery through pipes and canals—are funded mostly through the operations and maintenance budget for each project and are not tracked as expenditures specific to aquatic invasive species. Decisions on expenditures for aquatic invasive species are made at the local or regional level. Four of the 13 member agencies reported that decisions on expenditures for aquatic invasive species activities are delegated to a regional or local level and are not tracked at the national level. For example, the National Park Service reported that once funding is provided to a national park, headquarters management does not generally direct how the funding is expended at that park. Instead, park management generally determines how the funding will be used to accomplish park objectives, including whether and how to prioritize funding for aquatic invasive species activities. Similarly, the Bureau of Land Management reported that numerous decisions and activities take place at its local or state office level that are not tracked by headquarters, including expenditures on aquatic invasive species, and, therefore, annual expenditures on aquatic invasive species across the agency are unknown. The U.S. Forest Service also reported in its questionnaire that many of its aquatic invasive species activities are conducted through cooperative partnership agreements at the local and regional level and that expenditures for these activities are not reported at the national level. Through our questionnaire and interviews with officials from the Task Force and its member agencies, we found that member agencies conducted a wide range of activities and faced several challenges in addressing aquatic invasive species. Most member agencies reported conducting activities across the seven general activity categories developed by the National Invasive Species Council, including taking actions to prevent introductions of new aquatic invasive species and control the spread of existing ones (see app. III).
Task Force member agencies also identified several challenges in addressing aquatic invasive species. Some of these challenges are overarching, and others relate to how member agencies plan or conduct aquatic invasive species activities specific to the activity categories. Regarding overarching challenges, several Task Force member agencies—including officials from the Departments of the Interior and Agriculture, the Corps, and NOAA—expressed concern that their activities, though numerous, may not be adequate relative to the growing magnitude and impacts of aquatic invasive species amid decreasing or constrained agency resources. Task Force representatives further said that many of the member agencies have faced competing priorities in carrying out aquatic invasive species-related activities, with some member agencies having limited flexibility to conduct work in multiple areas. According to officials from the U.S. Geological Survey (USGS), for example, much of the agency's aquatic invasive species work has been focused on identifying methods to treat and control Asian Carp in accordance with the Great Lakes Restoration Initiative and other funding for this work. USGS officials said that though their work on Asian Carp has been critical, this focus has sometimes meant that they have not been able to prioritize other needs, such as identifying marine invaders from nonballast water sources or new marine and arctic threats given the warming of ocean waters. The following are examples of activities Task Force member agencies conducted to address aquatic invasive species, along with challenges they identified related to specific activity categories, based on the responses we received to our questionnaire and interviews with officials from the Task Force and its member agencies. These examples include activities from each of the seven activity categories—(1) prevention, (2) early detection and rapid response, (3) control and management, (4) restoration, (5) research, (6) education and public awareness, and (7) leadership and international cooperation. These examples do not represent all activities conducted or challenges identified by member agencies, but rather they illustrate the nature and type of activities and challenges discussed. Eleven of the 13 Task Force member agencies reported conducting a range of prevention activities, often related to managing specific pathways to help prevent the introduction of aquatic invasive species into new aquatic habitats. Task Force member agencies repeatedly highlighted the importance of conducting prevention-oriented activities as a cost-effective means of addressing aquatic invasive species. Officials from some member agencies also said that they would like to conduct more prevention-oriented activities, but that they have faced challenges in doing so, in part because of policy or funding decisions within their respective agencies. For example, Corps officials said they believed that it would be most cost-effective to treat certain aquatic invasive plants upstream from project boundaries before the species spreads downstream and potentially threatens project infrastructure; however, it is generally the agency's policy to treat areas within rather than outside project boundaries.
Some Task Force member agencies also told us that prevention activities cannot be conducted at the expense of activities aimed at controlling aquatic invasive species already established, and that a more balanced approach between prevention and control activities may be warranted. Prevention Efforts to Control the Spread of Quagga and Zebra Mussels Several Task Force member agencies are involved in activities to prevent the spread of invasive Quagga and Zebra Mussels throughout the western United States. For example, the U.S. Fish and Wildlife Service (FWS) and the National Park Service support implementation of the Quagga-Zebra Mussel Action Plan, which was developed by several state and federal agencies, as well as nongovernmental organizations in the western United States. This plan serves as a road map for identifying and prioritizing specific actions needed to prevent the further spread of Quagga and Zebra Mussels, respond to new infestations, and manage existing ones. FWS has installed signs at National Wildlife Refuges to alert boaters about the risk of these species and has funded training in 18 states on inspecting boats and other watercraft to identify and remove the mussels. The National Park Service expended approximately $2 million in fiscal year 2014 on mussel prevention and control and monitoring at nine western parks. In addition, the Bureau of Reclamation has conducted a series of public education and outreach efforts, including the dissemination of informational pamphlets at boat shows, designed to educate the public on practices they can follow to help prevent the spread of Quagga and Zebra Mussels. Examples of prevention activities include the following: Regulations. The U.S. Coast Guard and EPA regulate the management of ballast water—a primary pathway for the introduction of new aquatic invasive species into and within the United States—and other vessel discharges into waters of the United States. In 2012, the Coast Guard updated its ballast water regulations to include a standard for the allowable concentrations of living organisms in a vessel's ballast water discharged in waters of the United States. In 2013, EPA issued a general permit that contains numeric technology-based limitations on acceptable concentrations of living organisms in ballast water discharge. Inspections. FWS's Office of Law Enforcement inspects certain wildlife shipments to help ensure that prohibited species, including certain aquatic invasive species, do not enter the country. FWS has about 120 inspectors at 49 ports of entry nationwide who review import documentation and conduct visual inspections of some shipments to help prevent species listed as injurious wildlife under the Lacey Act from being illegally brought into the country or across state lines. Physical barriers. The Corps operates a series of electric barriers in the Chicago Area Waterway System located approximately 25 miles from Lake Michigan to prevent the entry of Asian Carp and other aquatic invasive species from the Mississippi River Basin into the Great Lakes. These barriers send out pulses to form an electric field in the water that discourages fish from crossing. Ten of the 13 Task Force member agencies reported conducting early detection and rapid response activities—activities to detect the presence of aquatic invasive species in an area and remove any newly detected species while they are localized and before they become established and spread to new areas.
Aside from preventing introductions, the most cost-effective way to address an invasive species is to detect and respond to invasions early, according to documents from the U.S. Forest Service and NOAA. However, coordinated rapid response efforts have been challenging to implement due, in part, to constraints in existing funding, according to officials from some agencies. Consequently, 11 Task Force member agencies are part of a federal work group, co-led by the Department of the Interior and the National Invasive Species Council, that in January 2015 started developing a framework for a national early detection and rapid response program and a plan for an emergency rapid response fund. The work group reported in July 2015 that it plans to issue a report of recommendations to implement an early detection and rapid response framework, including mechanisms for funding, to the White House and the Council on Climate Preparedness and Resilience in the fall of 2015. Early Detection Technique Using Environmental DNA Detection methods such as the use of environmental DNA have become widespread among Task Force member agencies, including the U.S. Geological Survey (USGS), the U.S. Army Corps of Engineers (Corps), the National Park Service, the Bureau of Reclamation, and the U.S. Fish and Wildlife Service (FWS). Environmental DNA—genetic material shed into the environment by organisms that can be detected in samples of air, water, or soil—is a relatively new tool being used to detect invasive species, particularly in areas where the species is not abundant or is difficult to detect. For example, because they are well camouflaged in the environment, visual detection of Burmese Pythons in South Florida is difficult, with detection rates of less than 1 percent. Use of environmental DNA methods, however, can increase python detection rates to more than 90 percent, according to USGS officials. Since spring 2015, USGS researchers have been working with FWS to test water from the Loxahatchee National Wildlife Refuge in Florida to determine whether Burmese Pythons may have spread to the refuge. Although environmental DNA helps confirm the presence of an aquatic invasive species in an area, it neither confirms whether the species has become established in the area nor provides information on the number or current location of any species detected. Examples of early detection and rapid response activities include the following: National early detection database. The USGS maintains the Nonindigenous Aquatic Species Database, a publicly accessible database, to track information on the locations of aquatic invasive animals throughout the United States. Federal agencies, as well as state and local agencies and the public, can report aquatic invasive species sightings, and when verified, the sightings are added to the database, which is updated daily by USGS. Rapid response strike teams. FWS has five regional strike teams in place to help eradicate any new invasions as soon as possible after they are detected in the nation's 563 wildlife refuges. These strike teams survey a small portion of the acreage within national wildlife refuges when new invasions are suspected, according to FWS officials, to determine the presence of any invasions and then take actions to eradicate or contain confirmed invasions before populations spread. Eleven of the 13 Task Force member agencies reported conducting activities designed to lessen and mitigate the impact or spread of aquatic invasive species on the facilities or areas they manage.
Such activities may be designed to eradicate an invading species, but where eradication is not deemed feasible, they are designed to manage the invader by controlling the species' impact and spread. Activities aimed at controlling or managing the impact and spread of invasions represent a substantial portion of overall aquatic invasive species-related activities conducted, in terms of both effort and funding, according to Task Force representatives and officials from several member agencies. Some of these officials stressed the importance of sustaining efforts to control and manage aquatic invasive species to avoid reintroductions or spread of the species. For example, Corps officials said that, after eliminating infestations of Melaleuca, an invasive wetland tree, over a prescribed 10-year treatment period, periodic treatments would still be necessary to ensure new populations do not become established. Officials from several member agencies, including the Corps, noted, however, that limited or inconsistent funding has, at times, made it challenging to consistently manage areas as prescribed—potentially leading to the reemergence of aquatic invasive species. Multipronged Method to Control and Manage Melaleuca Melaleuca, an Australian tree that has destroyed many southern Florida wetlands, can be managed through a combination of biological, chemical, and physical and mechanical controls. For instance, Melaleuca can be suppressed through the introduction of weevils, a type of beetle that serves as a biological control. Researchers from the U.S. Department of Agriculture said, however, that the ability of Melaleuca trees to grow in various water depths has prevented the weevils—which require ground to burrow in—from successfully reproducing and eating the Melaleuca in swampy areas. According to National Park Service officials, Melaleuca can also be controlled if it is consistently treated over a 10-year period using the method in which the trees are first cut or hacked down with a machete or mechanical device and then sprayed with herbicides designed to kill them in the first, second, fourth, seventh, and tenth years of treatment. If this process is not followed as prescribed, however, the trees may regrow and spread. The National Park Service Exotic Plant Management Team and Everglades National Park have contributed to control of Melaleuca in South Florida. Examples of control and management activities include the following: Biological controls. To control and manage the spread of Alligatorweed, a leafy aquatic invasive plant found in the southeastern United States and California, officials from the Corps told us they are using a beetle that feeds and reproduces only on Alligatorweed. According to officials from the Corps and the U.S. Department of Agriculture, the beetle has been successful in controlling the weed, and the need for additional treatments, such as herbicide applications, has been nearly eliminated in Florida. Chemical controls. The Department of State, through the Great Lakes Fishery Commission, along with the Corps, FWS, USGS, and other federal and state partners, is primarily using chemicals called lampricides to kill Sea Lamprey, an invasive fish, in their larval stage before they can attach and prey upon native fish. According to Department of State officials, as of 2015, chemical controls have led to a 90 percent reduction in the Sea Lamprey population from its historical high level. Physical and mechanical controls.
The Bureau of Reclamation uses physical and mechanical control methods to remove Water Hyacinth, an aquatic invasive plant, from one of its California facilities. Bureau of Reclamation officials said that, if left untouched, Water Hyacinth clogs canals, pumps, and fish screens, which can kill the fish they are working to protect. Bureau of Reclamation officials told us that, between 2013 and 2015, they removed between 10,000 and 20,000 truckloads of Water Hyacinth from the area surrounding the facility—with a dump truck filled with Water Hyacinth leaving the facility every 5 minutes during the height of its growing season. Ten of the 13 Task Force member agencies reported conducting a variety of activities to restore aquatic habitats adversely affected by aquatic invasive species. Officials from a few Task Force member agencies said that it may be possible to begin restoring habitats or ecosystems while control and management activities are under way, but in some cases aquatic invasive species may first need to be controlled or contained. According to a few member agencies, this creates a challenge: restoration activities must wait until control activities are finished and may therefore be delayed. Examples of restoration activities include the following: Habitat restoration. NOAA reported providing funding and technical expertise for community-based habitat restoration projects, such as about $925,000 provided in 2012 for the Lower Black River Habitat Restoration Project in Ohio. The goal of this project is to restore fish and wildlife habitat in the lower Black River through actions such as the removal of aquatic invasive plants by chemical and manual techniques followed by the planting of native shrubs. Native fish restoration. The National Park Service reported removing nonnative fish from waters in a number of parks to restore native species and enhance natural aquatic biodiversity. Officials told us that they have been expending about $1 million per year since 2013 at Yellowstone National Park on Lake Trout removal efforts in Yellowstone Lake. These efforts include contracting with commercial fishing crews to remove invasive Lake Trout that have caused a significant decline in populations of the native Yellowstone Cutthroat Trout. All 13 Task Force member agencies reported conducting or sponsoring research designed to support activities to help prevent, detect, or control the impacts or spread of aquatic invasive species, as well as determine their impacts on aquatic habitats. Research is critical to identify effective techniques for prevention, detection, control, and management of aquatic invasive species and to help clarify and quantify the effects aquatic invasive species have on native species and habitats, as well as economic costs and impacts to human health, according to Task Force documents. Officials from several member agencies and Task Force representatives noted that significant gaps in knowledge in certain areas related to aquatic invasive species are a challenge and, therefore, said they would like to see additional research, such as a comprehensive study to identify and assess the environmental impacts and economic costs associated with invasive species in the United States. Such information is critical to understanding the magnitude of the impacts from aquatic invasive species and for obtaining funding to address problems they are causing, according to these officials.
In addition, limits in scientific knowledge about newly introduced species and the levels at which they may become established or harmful, especially in ballast water, affect member agencies' ability to manage the ballast water pathway, according to officials from NOAA and the Smithsonian Environmental Research Center. Officials from the U.S. Coast Guard said that it is difficult to set regulations or establish allowable concentrations of organisms that can be safely released in ballast water when the threshold for establishment of a new potentially invasive species may not be well understood. Federal Research on Hydrilla Federal research on Hydrilla, a submerged invasive plant that has clogged navigation channels and other water systems across the United States, involves efforts by several Task Force member agencies. For example, the U.S. Army Corps of Engineers (Corps) conducted research on the biology of Hydrilla during 2015 to provide a better understanding of the invasion ecology of this species in northern rivers and glacial lakes. The Corps has also researched chemical treatments and application strategies to control or alter the reproduction of Hydrilla. Chemical treatments and aquatic herbicides developed through this research have been successful in controlling Hydrilla, according to Corps officials, although some strains have become resistant. In addition, the Animal and Plant Health Inspection Service, in collaboration with the Corps, is researching biological controls for Hydrilla, such as releasing insects that will eat the plant. Examples of research activities include the following: Species research. The Corps is researching various types of invasive aquatic vegetation and options for managing such species through its Aquatic Plant Control Research Program, which is authorized by statute. In 2014, Corps researchers completed field studies in Montana that used selective management strategies to control Eurasian Watermilfoil, a plant that is invasive throughout most states, including Alaska. Impacts research. Officials from USGS and NOAA have conducted research aimed at improving scientific knowledge about how aquatic invasive species may be adversely affecting ecosystems. In 2015, USGS continued research to identify whether newly established nonnative species may warrant being considered "high priority invaders," such as the Burmese Python in the Everglades. Since 2009, NOAA has conducted research to determine how certain aquatic invasive species have affected endangered salmon feeding behavior and habitat in the Pacific Northwest as part of its effort to understand the impacts that aquatic invasive species have on these native species and the ecosystems upon which they depend. Pathways research. The Maritime Administration sponsors the operation of three research facilities—in California, Maryland, and Wisconsin—that are testing the capability of treatment systems for ballast water to determine whether those systems may be approved by the U.S. Coast Guard pursuant to its ballast water regulations. Eleven of the 13 Task Force member agencies reported engaging in education and public awareness activities to increase awareness about aquatic invasive species and their impacts and help minimize or prevent further introductions.
According to Task Force documents, the lack of public awareness about the impacts and threats posed by some invasive species and how they are introduced is a substantial challenge for Task Force member agencies in addressing aquatic invasive species. Lionfish Education and Public Awareness Several Task Force member agencies are involved in raising awareness about Lionfish, a highly invasive fish that has spread throughout coastal waters of the southeastern United States and the Caribbean. To help raise awareness, the National Oceanic and Atmospheric Administration, along with nonprofit partners, has sponsored numerous Lionfish derbies since 2010, including 10 public tournaments in 2014 in which divers could hunt the edible fish with spears. The National Park Service produced a Lionfish Response Plan in 2012 that aims to help inform the public about the Lionfish invasion and prevent and mitigate impacts to parks. Biscayne National Park, in Florida, conducts an education program in which Lionfish removed from the park are sent to classrooms for safe dissection by students. National Park Service officials told us that concentrated education efforts like this have been effective in educating the public about Lionfish. In addition, the Department of State provided funding to work with partners in the Gulf of Mexico and the Caribbean to launch a web portal that provides managers and the public with access to the latest information on Lionfish and impacts in the Atlantic Ocean. Examples of education and public awareness activities include the following: National awareness campaigns. The Task Force, Bureau of Land Management, FWS, U.S. Forest Service, and the U.S. Coast Guard are among the federal agencies that collaborate on the "Stop Aquatic Hitchhikers!" campaign. Since 2002, this multimedia campaign has used television, billboards, and social and print media to encourage users of outdoor recreational areas to help stop the transport and spread of aquatic invasive species by, for example, making sure they clean, drain, and dry their boats and boat trailers before transporting them to different aquatic areas. Local awareness events. The National Park Service, along with state agencies and nongovernmental organizations, hosted the inaugural 5K "Race Against Invasives" run through Everglades National Park in February 2015 to raise awareness about invasive species, especially those in Florida. Ten of the 13 Task Force member agencies have been involved in activities to provide leadership to the aquatic invasive species community—which includes federal, nonfederal, and international agencies working on aquatic invasive species issues—and to enhance cooperation and collaboration, such as by participating in and serving as members of a range of international, national, regional, state, and local task forces, councils, and other entities. Given the often complex and widespread nature of aquatic invasive species, working across jurisdictional boundaries is the most effective approach to combating them, according to Task Force officials and documents. Moreover, working with other federal and nonfederal agencies and organizations helps the Task Force to identify areas where legislation may be needed to fill gaps in statutory authority, suggest priority policy issues, and define roles and responsibilities for managing aquatic invasive species, according to Task Force documents.
Officials from the regional panels told us, however, that one challenge in such work is that constrained agency funding has meant that they have not been able to consistently attend Task Force, regional panel, or other cooperative meetings. Examples of leadership and international cooperation activities include the following: Aquatic Nuisance Species Task Force activities. The Task Force conducts semiannual meetings that provide an open and public forum for members to exchange information and coordinate their aquatic invasive species activities. For example, the Task Force’s May 2015 meeting included presentations on a wide range of topics, from the adoption of species-specific national management plans to recommendations from its regional panels on issues of local significance. International cooperation. Officials from the Corps and the U.S. Department of Agriculture have collaborated with scientists in China, South Korea, and Switzerland to identify and develop insect biological control agents to target invasive aquatic plants such as Hydrilla and Eurasian Watermilfoil. For example, in fiscal year 2014, Corps officials reported expending about $450,000 on developing such control agents, which included collecting 350 plant samples from more than 90 field sites to help match invasive plants located in the United States with their countries of origin to improve the success of identifying insects to control these species. The Task Force has not taken key steps to measure progress in achieving the goals laid out in its 2013-2017 strategic plan. In 2012, the Task Force developed its 2013-2017 strategic plan, which serves to guide Task Force member agencies in conducting aquatic invasive species- related activities to implement the aquatic invasive species program. The strategic plan identifies eight goals for the program—which generally align with the seven activity categories developed by the National Invasive Species Council—as well as a number of targeted action items for Task Force member agencies to achieve these goals (see table 4). The action items identified in the strategic plan were intended to be completed over the 5-year period of the plan, but the strategic plan also stated that accomplishing the items would be dependent upon the budgets of individual agencies. The strategic plan did not identify or describe roles or activities to be conducted by specific member agencies or measures to track progress in achieving its eight strategic goals. Rather, the strategic plan called for the Task Force to develop an operational plan to specify how Task Force member agencies would put the strategic plan into operation. According to the strategic plan, the function of the operational plan was to ensure the strategic goals were measurable and accountable. Specifically, the operational plan was intended to contain the following elements: (1) a description of short-term efforts to support and implement the strategic plan and its goals; (2) the roles of Task Force member agencies; (3) when available, the time frames, lead agencies or groups, and funding; and (4) regular updates with its actions reported annually to measure progress toward accomplishing the goals of the strategic plan. The elements envisioned for the operational plan are also largely required by the 1990 Act. Before the strategic plan went into effect, however, the Task Force decided not to develop an operational plan as envisioned in the strategic plan. 
Instead, the Task Force decided to develop a reporting matrix in the form of a spreadsheet to collect information on member agencies' aquatic invasive species-related activities, according to the Task Force's autumn 2012 meeting minutes. This reporting matrix was designed to collect information on the aquatic invasive species activities that member agencies had planned to conduct related to the goals of the strategic plan. This reporting matrix was also designed to collect funding information associated with each of these activities, which could serve as a starting point for the Task Force to identify funding gaps and priorities and develop recommendations for funding to implement elements of its aquatic invasive species program as required by the 1990 Act. The reporting matrix was disseminated to Task Force member agencies in August 2012, but fewer than half (6 of 13) of the Task Force member agencies provided information to the Task Force. According to Task Force representatives, the Task Force did not disseminate or collect additional information using the reporting matrix after 2012. According to Task Force representatives, the Task Force decided not to develop an operational plan or use the reporting matrix after 2012 because of constrained funding and limited resources. In particular, they said they were limited in their efforts because of the constrained funding environment that emerged from sequestration in 2013 and 2014. According to Task Force representatives, the retirement of its Executive Secretary in 2013 and the continued vacancy of that position have left the Task Force without dedicated staff to support updates to the reporting matrix. Task Force representatives further explained that, given the limited staff devoted directly to the Task Force, they rely on staff from member agencies to contribute to the administration of the program, but member agencies have had competing priorities and have not had the resources to contribute to developing an operational plan in the way that was originally envisioned when the strategic plan was developed. In addition, Task Force representatives said that, since 2014, the Task Force, along with member agency staff, has been focused on drafting a report to Congress, an annual requirement under the 1990 Act. Since its inception, the Task Force has provided one report to Congress, in 2004. Task Force representatives said they expect to finalize and issue their draft report by the end of 2015. In reviewing a draft of the report, we found that the draft provided an overview and examples of aquatic invasive species activities conducted by the Task Force, member agencies, regional panels, and states since the Task Force's 2004 report, as well as some information on the role of Task Force member agencies in aquatic invasive species management. Task Force representatives have not indicated whether, after they finalize the 2015 report, they will begin submitting reports annually to meet this reporting requirement in the future. Task Force representatives also said they have no plans to develop an operational plan, as called for in the strategic plan, but acknowledged the importance of developing a means to regularly track various member agencies' aquatic invasive species activities and measure progress toward meeting the strategic goals.
Specifically, in response to our inquiry into the status of an operational plan, Task Force representatives told us in May 2015 that they planned to discuss the possibility of reviving or modifying the reporting matrix they had used in 2012. Task Force representatives subsequently told us that, during a June 2015 meeting, member agencies agreed that a tracking mechanism was important. However, they also told us that they did not determine what such a mechanism would look like, how it would be implemented and by whom, or how to address concerns expressed by some member agencies that the mechanism not burden agency staff already working at capacity in light of constrained funding. Task Force representatives said they plan to further discuss the idea of reviving or modifying the reporting matrix at their next semiannual Task Force meeting in November 2015. However, representatives could not tell us when they planned to make a decision on the approach they would take or provide specifics on what information they would collect or how they would measure progress in achieving their strategic goals. By developing and regularly using a tracking mechanism—one that would include the elements envisioned for an operational plan and required by the 1990 Act—the Task Force could better position itself to (1) measure progress in achieving its strategic goals and (2) comply with certain requirements in the 1990 Act for the aquatic invasive species program. Addressing aquatic invasive species is a complex, interdisciplinary issue with the potential to affect many sectors and levels of government operations. Strategic planning is a way to respond to this governmentwide problem on a governmentwide scale. Our past work on crosscutting issues has found that governmentwide strategic planning can integrate activities that span a wide array of federal, state, and local entities, as well as provide a comprehensive framework for making resource decisions and holding agencies accountable for achieving strategic goals. With its strategic plan, the Task Force has a framework in place to guide and integrate the numerous and varied aquatic invasive species activities spanning many member agencies. In addition to measuring progress in achieving the Task Force's strategic goals, developing and regularly using a tracking mechanism could also help the Task Force meet the 1990 Act's requirements to describe its members' roles and specific activities and to report annually to Congress on the program's progress. Aquatic invasive species, a serious and growing problem affecting all states and U.S. territories, have been likened to a never-ending oil spill, given that they are notoriously difficult to eradicate once they become established. Though hard to calculate, the economic and ecological harm caused by aquatic invasive species is vast. Capturing how much federal agencies have expended—and will likely need to expend—to effectively address aquatic invasive species is also challenging. Consequently, it is not possible to identify how much may be needed to fully address aquatic invasive species, both in terms of current invasions and measures to prevent future invasions. Capturing how much progress federal agencies have made in combating aquatic invasive species is similarly challenging. The Task Force and its member agencies have taken significant steps—including conducting a wide array of activities and developing a strategic plan to guide their efforts—to address the threats and impacts of aquatic invasive species.
However, the Task Force has not met several of the 1990 Act's requirements, including reporting annually to Congress on the program's progress, or developed a mechanism to ensure its strategic goals are measurable and accountable, such as through an operational plan, as called for in its strategic plan, because of constrained funding and limited resources. Task Force member agencies agreed that a mechanism to track activities and measure progress was important, but the Task Force has not decided what the mechanism would look like, how it would be implemented and by whom, or how to address concerns that it not burden agency staff already working at capacity. Developing and regularly using a tracking mechanism could help the Task Force measure progress in achieving its strategic goals, as well as help the Task Force meet the 1990 Act's requirements to describe its members' roles and specific activities and to report annually to Congress on the program's progress. Moreover, such a mechanism could provide a starting point for identifying funding gaps and priorities, better positioning the Task Force to meet the 1990 Act's requirement to include recommendations for funding to implement elements of its aquatic invasive species program. As the Aquatic Nuisance Species Task Force considers how to measure progress toward accomplishing its strategic goals, we recommend that the Task Force develop and regularly use a tracking mechanism, to include elements envisioned for an operational plan and to largely meet requirements in the 1990 Act, including: specifying the roles of member agencies related to its strategic plan; tracking activities to be conducted by collecting information on those activities and associated funding; measuring progress member agencies have made in achieving its strategic goals; and reporting to Congress annually on the progress of its program. We provided the Secretaries of Agriculture, Commerce, Defense, Homeland Security, Interior, State, and Transportation and the Administrator of the EPA a draft of this report for their review and comment. Only the Department of the Interior and the Department of Commerce's NOAA provided written comments, which are included in appendixes V and VI, respectively. Interior generally agreed with the report's findings and recommendation, and NOAA disagreed, as further discussed below. The Department of Defense's U.S. Army Corps of Engineers, the Department of State, and EPA indicated that they had no comments on our report through e-mail communications provided through departmental audit liaisons on October 19, October 21, and October 23, 2015, respectively. We also received e-mails, provided through audit liaisons, from the following departments stating that they agreed with the report's findings and recommendation and had no other comments: the Department of Agriculture's Animal and Plant Health Inspection Service and U.S. Forest Service (dated October 29 and October 30, 2015, respectively); the Department of Transportation (dated October 26, 2015); and the Department of Homeland Security (dated October 15, 2015). In its written comments, the Department of the Interior stated that it generally agreed with the findings of our report and concurred with our recommendation. Interior stated that it appreciated our review of the challenges faced by the Task Force in addressing and managing risks posed by the introduction and proliferation of aquatic invasive species.
Interior stated that the Task Force, of which its FWS is a co-chair, is currently evaluating the reporting matrix to improve its utility as a tracking mechanism. Additionally, Interior stated that, at its November 2015 meeting, the Task Force agreed to track accomplishments using a modified activity tracking tool while its members continue to evaluate how best to track their activities going forward. Interior also stated that the Task Force's report to Congress is undergoing final agency review, and it is expected to be delivered to Congress in the coming months, which, together with its tracking efforts, will help provide the Task Force with a mechanism to both measure and communicate progress toward its strategic goals, as called for in our report. We agree that using a modified activity tracking tool and completing the report to Congress will be positive first steps in the Task Force's measuring progress toward accomplishing its strategic goals and meeting requirements in the 1990 Act, in accordance with our recommendation. Interior also provided technical comments, which we incorporated, as appropriate. In its written comments, NOAA disagreed with several aspects of our findings, conclusions, and recommendation. In addition, NOAA stated that our report did not sufficiently address certain aspects of the mandate to conduct the review contained in section 1039(a)(2) of the Water Resources Reform and Development Act of 2014. First, NOAA stated that the report did not mention future costs to mitigate the impacts of aquatic invasive species and that, although it may be difficult to give specific numbers, some information could be speculated upon. In the opening paragraph of our report, we state that the impacts of invasive species in the United States are widespread and expected to increase, with profound consequences for the economy and the environment. We cite a 2005 academic study—the most recent comprehensive study of its kind—that estimates the environmental impacts and economic costs associated with invasive species at almost $120 billion per year. Additionally, through our questionnaire, we requested that federal member agencies provide planned activities and estimated expenditures for future years. However, as we describe in the scope and methodology appendix (app. I) of our report, we decided not to report future estimated expenditures given the limited information provided by some member agencies. We believe that reporting partial information could be misleading and could underestimate likely future expenditures. Second, NOAA stated that our analysis could have gone into more detail about current federal spending on prevention activities. We limited our reporting of expenditures for fiscal years 2012 through 2014 to estimates of total annual expenditures for each Task Force member agency because many member agencies reported that they could not provide estimates of their expenditures by activity category, including prevention. Third, NOAA stated that we did not address whether federal spending is adequate for the maintenance and protection of services provided by federal facilities. As we note in our report, capturing how much federal agencies have expended—and will likely need to expend—to effectively address aquatic invasive species is challenging.
Given the limited information available from the Task Force member agencies on current and planned expenditures related to aquatic invasive species, we determined we would not be able to reliably conduct an analysis of the adequacy of federal spending. Lastly, NOAA stated that we chose to focus on the Aquatic Nuisance Species Task Force and its strategic plan rather than documenting other legislative and programmatic efforts that target the prevention, control, and management of aquatic invasive species. The scope of our review includes all federal member agencies of the Task Force, and in discussing activities and challenges those member agencies face in addressing aquatic invasive species, our report highlights many of the legislative and programmatic efforts those agencies are undertaking, such as efforts by the U.S. Coast Guard and EPA to regulate and manage ballast water through updated regulations. NOAA also stated that our report did not mention federal mandates intended to address aquatic invasive species other than the Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990, the National Invasive Species Act of 1996, and Executive Order 13112. NOAA stated that at its exit conference with us on July 14, 2015, it noted that many federal agencies receive additional directions or mandates to address or respond to aquatic invasive species and their impacts and that each agency must balance these mandates. We agree that federal agencies may have multiple responsibilities in addressing aquatic invasive species—we outline many of these responsibilities in table 1 of the background of our report where we describe the key roles and responsibilities of Task Force member agencies under various federal laws. Also, in describing examples of the activities and challenges member agencies face in addressing aquatic invasive species in the second objective of our report, we identify and describe many of the requirements and mandates member agencies must follow. For example, we describe efforts of the FWS’ Office of Law Enforcement to enforce the Lacey Act, which prohibits the importation and interstate transport of wildlife listed as injurious, among other things. NOAA also stated that balancing and responding to various requirements ultimately affects the agencies’ ability to adequately respond to this national issue. We agree with this statement, and in our discussion of challenges faced by member agencies in addressing aquatic invasive species, we report that many of the member agencies have faced competing priorities in carrying out aquatic invasive species-related activities, with some member agencies having limited flexibility to conduct work in multiple areas. In addition, NOAA stated that the interactive map (fig. 1) may be misleading, inaccurate, or confusing. First, NOAA stated that the reported presence of a species in USGS’ Nonindigenous Aquatic Species database (one of two key sources we used to prepare species’ location information for the map) does not mean that the species is established in a particular state’s waters as the map portrays. In our draft report, in a note to the figure, we included a statement to clarify that species distributions in the map represent the reported presence of a species in at least one, but not necessarily all, bodies of water in the state, and do not necessarily indicate establishment of the species in any part of the state. 
To further clarify this point so as not to potentially mislead readers, in response to NOAA’s comment, we have updated the figure title and note and also added a statement to this effect in the body of the report. Second, NOAA stated that Caulerpa, one aquatic invasive species we highlighted in the figure, had been eradicated. Upon receipt of this information from NOAA and in light of obtaining additional supporting data, we removed Caulerpa from the figure. Third, NOAA stated that providing points of pathways of invasion as part of the interactive figure was confusing or inaccurate in some cases. We agree that the manner in which we linked our description of the pathways of invasion to the map in the draft report could be misinterpreted; consequently, in response to NOAA’s comment, we disassociated the description of pathways from the map. We believe that providing a description of various pathways aquatic invasive species may use to enter and spread into new areas is important context for our report. Furthermore, concerning our recommendation that the Task Force develop and regularly use a tracking mechanism, to include elements envisioned for an operational plan and to largely meet requirements in the 1990 Act, NOAA stated that it does not believe the recommendation can address problems faced by the Task Force. NOAA stated that, with respect to measuring progress, the Task Force agreed to use an activity matrix to compile information, but the matrix has not been updated since 2012 for several reasons, including because of uncertainties in funding, shifting priorities, and the loss of the Task Force Executive Secretary position, which has not been filled since the former Executive Secretary retired in 2013. NOAA further stated that the report does not address the underlying causes that have hindered Task Force efforts to track progress, including the limited budget under which the Task Force operates, which has been reduced significantly in recent years. Our recommendation was not intended to comprehensively address the problems faced by the Task Force, but rather was more narrowly focused. Specifically, the intent of our recommendation is to help the Task Force regularly track progress toward achieving its strategic goals in a manner that ensures it also largely meets requirements in the 1990 Act, such as reporting to Congress annually on the progress of its program. In our report, we discuss the constrained funding environment and limited resources the Task Force and its member agencies reported working under, including having limited staff devoted directly to the Task Force and facing the constrained funding environment that emerged from sequestration in 2013 and 2014. We believe that by implementing our recommendation—that is, by developing and regularly using a tracking mechanism to include the roles of member agencies, activities conducted and associated funding, and progress made in achieving strategic goals—the Task Force would be in a better position to identify and communicate its progress, as well as funding or resource needs to address problems faced by the Task Force. As we note in our report, capturing how much federal agencies have expended—and will likely need to expend—to effectively address aquatic invasive species is challenging. 
But by developing and regularly using a tracking mechanism, we believe the Task Force would be better positioned to assess funding gaps and priorities and begin to identify solutions to address the challenges member agencies face in addressing aquatic invasive species. Finally, NOAA identified examples where it stated that information portrayed in our report could have evolved into recommendations. For example, NOAA commented that a recommendation that calls for a more balanced approach in conducting prevention activities would be beneficial. In our report, we state that member agencies repeatedly highlighted the importance of conducting prevention-oriented activities as a cost-effective means of addressing aquatic invasive species. We also note that officials from some member agencies said they would like to conduct more prevention-oriented activities, but that prevention activities cannot be conducted at the expense of activities aimed at controlling aquatic invasive species already established, and that a more balanced approach between prevention and control activities may be warranted. We include this and the other examples NOAA references in our report to provide context on an issue, provide examples of activities being undertaken by member agencies, or describe challenges faced by member agencies in addressing aquatic invasive species—consistent with the objectives and scope of work conducted for this review. Consistent with government auditing standards, we are to have sufficient, appropriate evidence to provide a reasonable basis for findings and conclusions before we can develop recommendations. Based on our work, we did not have sufficient evidence to provide a reasonable basis for making recommendations on the examples NOAA identified. We encourage NOAA to continue to work with Task Force member agencies and others to pursue areas they identify as needing additional work, such as identifying ways to take a more balanced approach across prevention and control activities. We believe that by implementing our recommendation, NOAA, as one of the co-chairs of the Task Force, would be in a better position to identify funding gaps and priorities, and determine recommendations for funding based on emerging needs. NOAA also provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Homeland Security, the Interior, State, and Transportation; the Administrator of the Environmental Protection Agency; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix VII. This report examines (1) how much the Aquatic Nuisance Species Task Force (Task Force) member agencies expended addressing aquatic invasive species from fiscal year 2012 through 2014; (2) activities conducted by Task Force member agencies and challenges in addressing aquatic invasive species; and (3) the extent to which the Task Force has measured progress in achieving the goals of its 2013-2017 strategic plan.
For all three objectives, we reviewed aquatic invasive species-related laws, including the Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990, as amended (the 1990 Act), regulations, and academic studies. We conducted interviews with, and obtained documentation from, the co-chairs of the Task Force and other Task Force representatives; officials from the 13 Task Force federal member departments and agencies (member agencies); and representatives from each of the Task Force's six regional panels to learn about their roles and responsibilities, aquatic invasive species-related activities, and any expenditure information they maintain related to those activities. In addition, we interviewed staff from the National Invasive Species Council to learn about their efforts to collect information on federal expenditures for invasive species activities. To determine how much Task Force member agencies expended addressing aquatic invasive species for fiscal years 2012 through 2014 and obtain information on activities conducted, we developed and disseminated a questionnaire to the 13 Task Force member agencies, requesting information on their estimated expenditures and activities conducted to address aquatic invasive species. Specifically, the questionnaire requested that member agencies provide estimates of their expenditures for the activities they conducted in each of the following seven aquatic invasive species activity categories: (1) prevention, (2) early detection and rapid response, (3) control and management, (4) research, (5) restoration, (6) education and public awareness, and (7) leadership and international cooperation. These were the same activity categories used by the National Invasive Species Council to collect and report information for its annual invasive species interagency "crosscut" budget summary. The council's annual budget summary includes estimates of federal agency expenditures and planned funding on activities to address all types of invasive species, but it does not include a breakdown of expenditures by type, including expenditures specific to aquatic invasive species. Therefore, the council's annual budget summary provided a framework for us to follow in developing our questionnaire, but we could not use information from the budget summary to obtain or report information on federal expenditures specific to aquatic invasive species. Several Task Force member agency officials recommended that we follow the council's framework for our questionnaire since many of the member agencies provide information to the council, and they suggested that following a similar framework would facilitate their ability to respond to our request. In developing our questionnaire, we worked with staff from the National Invasive Species Council and conducted pretests with three member agencies to obtain their comments, which were incorporated as appropriate. In our questionnaire, we requested that each member agency provide (1) its estimated expenditures for fiscal years 2012 through 2014 (the most recent years for which member agencies reported reliable data were available), (2) examples of aquatic invasive species activities conducted during this time period, and (3) its planned activities and estimated expenditures for future years, which we defined as fiscal years 2015 and 2016.
We also included questions about how the Task Force member agencies prepared their estimates, their sources of information, any challenges or limitations in preparing the estimates, and whether the estimates were reviewed by their budget or financial offices. Appendix IV provides a blank copy of our questionnaire. We received completed responses from all 13 of the Task Force member agencies. The member agencies provided information on their activities conducted to address aquatic invasive species, but member agencies varied in the level of detail they provided about their estimated expenditures. Twelve of the 13 member agencies included at least some information on their estimated expenditures for fiscal years 2012 through 2014, but the U.S. Forest Service reported that it was unable to provide estimates. The other 12 agencies varied in their ability to provide consistent and complete information on their estimated expenditures at the level of detail we requested in our questionnaire. With respect to the expenditure information for fiscal years 2012 to 2014, some agencies were able to provide estimates of their expenditures by activity category, but many reported that they could not provide estimates at this level of detail. For example, the Environmental Protection Agency reported that its expenditures supported activities for five of the seven activity categories, but because it could not provide separate estimates for each of these categories, it reported all of its expenditures under the prevention category. Similarly, the National Park Service reported conducting activities in all seven activity categories in fiscal years 2012 and 2013, but provided estimates for two activity categories (research and restoration) and reported that it was unable to determine how much of its estimated expenditures went toward the other five activity categories in these years. Based on inconsistencies and incomplete responses across the 13 member agencies, we decided to limit our reporting for fiscal years 2012 through 2014 to estimates of total annual expenditures for each Task Force member agency. With respect to future expenditures for fiscal years 2015 to 2016, a few member agencies indicated they did not have estimates of expenditures for future years, though others had partial estimates. To avoid reporting potentially misleading information that could underestimate likely future expenditures compared to amounts reported for fiscal years 2012 through 2014, we decided not to report the future expenditure estimates provided to us. Similarly, 9 of the 13 member agencies reported that they were not able to provide estimates for how much they expended addressing specific aquatic invasive species, citing reasons such as expenditures being tracked at a project level rather than by a specific species. Therefore, we do not include species-specific expenditure information in our report. After receiving completed questionnaires, we followed up with Task Force member agency officials to obtain clarification or additional information, as needed. We did not independently verify the accuracy of the estimated expenditures reported by the member agencies, which likely include some overestimates and some underestimates. For example, in its response, the U.S. Fish and Wildlife Service (FWS) described various activities that were implemented through projects supported with grant funding from the Wildlife Sport Fish Restoration Program.
However, FWS did not include expenditure estimates for these project activities because it could not reliably estimate how much of the grant funding should be attributed to the aquatic invasive species component of the grant-funded projects. We asked each of the Task Force member agencies for their assessment of whether their estimated expenditures for fiscal years 2012 to 2014 were an underestimate, an overestimate, or about right. Ten of the member agencies responded that their estimates were "about right," and two indicated they were underestimates (one member agency did not provide estimates). Accordingly, the expenditures reflect the agencies' best estimates of how much they expended on aquatic invasive species activities during these years. Based on our assessment of these responses, along with the responses provided through the questionnaire, we determined that the expenditure estimates for fiscal years 2012 through 2014 were sufficiently reliable for purposes of this report—to provide general estimates of total annual expenditures by Task Force member agencies on activities to address aquatic invasive species. To describe the activities conducted by Task Force member agencies and any challenges in addressing aquatic invasive species, we built on the information gathered through our questionnaire and conducted a series of interviews with officials from the 13 member agencies, the federal ex officio member of the Task Force (the Smithsonian Environmental Research Center), and each of the Task Force's six regional panels. Through these interviews, we collected information and documentation on aquatic invasive species activities conducted and any challenges agencies identified in addressing aquatic invasive species. Many of the activities and challenges relate to ongoing efforts that span multiple fiscal years, and thus the information we collected often highlights, but is not limited to, fiscal years 2012 through 2014. We also conducted site visits in Southern Florida, Northern California, and Western Washington to interview local federal officials and observe activities at the sites, such as inspections of shipments of live fish to search for aquatic invasive species and research being conducted at research facilities. We selected these locations based on the number and variety of aquatic invasive species and federal agencies, as well as the types of activities conducted in those locations. The information we obtained from our interviews and site visits on activities conducted and challenges identified is not generalizable, but we believe the examples we obtained provide important insights into the wide array of aquatic invasive species activities being undertaken across the 13 Task Force member agencies and the challenges agencies face in conducting those activities. To determine the extent to which the Task Force has measured progress in achieving the goals of its 2013-2017 strategic plan, we conducted interviews with and obtained documentation from Task Force representatives, officials from the 13 Task Force member agencies, and officials representing the six regional panels. We reviewed the Task Force's 2013-2017 strategic plan, its 2012 reporting matrix, and other documentation related to the Task Force's efforts to collect information related to its strategic plan.
We then analyzed and compared this information to program requirements identified in the 1990 Act, our previous reports on leading practices provided by the GPRA Modernization Act of 2010, and our executive guide on strategic planning, as appropriate. We conducted this performance audit from November 2014 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Map of the United States with Examples of Aquatic Invasive Species and Their Reported Presence by State, and Common Pathways (Corresponds to fig. 1) Figure 3 shows examples of aquatic invasive species and their known locations (i.e., reported presence of a species) as well as common pathways of invasion (see interactive fig. 1) and includes the figure's rollover information. Table 5 provides descriptions of the aquatic invasive species used as examples, and table 6 provides descriptions of common pathways of invasion. Through our questionnaire to the 13 federal member agencies of the Aquatic Nuisance Species Task Force (Task Force), we requested that member agencies identify the types of aquatic invasive species activities they conducted during fiscal years 2012 through 2014, including how those activities fell within the seven general activity categories developed by the National Invasive Species Council. The Task Force member agency responses are summarized in table 7. Anne-Marie Fennell, (202) 512-3841 or [email protected]. In addition to the individual named above, Alyssa M. Hundrup (Assistant Director), Natalie Block, Mark Braza, Greg Campbell, Virginia Chanley, Armetha Liles, Michael Meleady, Kelly Rubin, Jeanette Soares, Anne Stevens, Sara Sullivan, Kiki Theodoropoulos, and Tama Weinberg made key contributions to this report. Aquatic invasive species—harmful, nonnative plants, animals, and microorganisms living in aquatic habitats—damage ecosystems or threaten commercial, agricultural, and recreational activities. The Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990 created the Task Force and required it to develop an aquatic nuisance (which GAO refers to as invasive) species program. The Water Resources Reform and Development Act of 2014 includes a provision that GAO assess federal costs of, and spending on, aquatic invasive species. This report examines (1) how much Task Force member agencies expended addressing aquatic invasive species for fiscal years 2012-2014; (2) activities conducted by Task Force member agencies and challenges in addressing aquatic invasive species; and (3) the extent to which the Task Force has measured progress in achieving the goals of its 2013-2017 strategic plan. GAO sent a questionnaire to member agencies to obtain expenditures for fiscal years 2012-2014; interviewed member agency officials; and analyzed laws and strategic planning documents. The 13 federal member agencies of the Aquatic Nuisance Species Task Force (Task Force) estimated expending an average of about $260 million annually for fiscal years 2012 through 2014 to address aquatic invasive species. However, several member agencies identified in their questionnaire responses challenges in developing their estimates.
For example, some member agencies reported that their activities to address aquatic invasive species were often integrated into larger projects, making it difficult to isolate the portion of expenditures specific to aquatic invasive species out of total expenditures for the projects. As a result, expenditure information reported by GAO generally reflects member agencies' best estimates of total expenditures, rather than actual expenditures. Task Force member agencies conducted a wide range of activities and identified several challenges in addressing aquatic invasive species. Member agencies reported conducting activities across several activity categories, including taking actions to prevent introductions, control the spread of existing invaders, and research ecological impacts of aquatic invasive species. For instance, most conducted prevention activities—such as constructing a series of electric barriers to prevent the entry of Asian Carp from the Mississippi River Basin into the Great Lakes—recognizing that prevention activities may be the most cost-effective method of addressing aquatic invasive species. Additionally, officials from several member agencies expressed concern that their activities, though numerous, may not be adequate relative to the growing magnitude and impacts of aquatic invasive species amid decreasing or constrained agency resources. The Task Force—which is co-chaired by the U.S. Fish and Wildlife Service and National Oceanic and Atmospheric Administration (NOAA)—developed a 2013-2017 strategic plan to guide its member agencies but has not taken key steps to measure progress in achieving the goals laid out in its strategic plan. As called for in its strategic plan, the Task Force in 2012 planned to develop an operational plan to track and measure aquatic invasive species activities and progress. However, the Task Force did not develop an operational plan because of constrained funding and limited resources, according to Task Force representatives. The Task Force also did not meet several of the 1990 Act's requirements including describing its members' roles and activities and reporting annually to Congress on the program's progress. The representatives agreed that a mechanism to track activities and measure progress is important and said they plan to discuss the possibility of doing so at their November 2015 meeting. Task Force representatives, however, had not established a time frame or specifics for their approach. Developing and regularly using a tracking mechanism could help the Task Force measure progress in achieving its strategic goals, as well as help the Task Force meet the 1990 Act's requirements to describe its members' roles and specific activities and to report annually to Congress on the program's progress. GAO recommends that the Task Force develop a mechanism to measure progress toward its strategic goals and help meet certain statutory requirements. Most member agencies generally concurred or had no comments, but NOAA disagreed. GAO believes its recommendation is valid as discussed further in this report. |
The Administration for Children and Families' (ACF) Children's Bureau administers and oversees federal funding to states for child welfare services under Titles IV-B and IV-E of the Social Security Act, and states and counties provide these child welfare services, either directly or indirectly through contracts with private agencies. Among other activities, ACF staff are responsible for developing appropriate policies and procedures for states to follow to obtain and use federal child welfare funds, reviewing states' planning documents required by Title IV-B, conducting states' data system reviews, assessing states' use of Title IV-E funds, and providing technical assistance to states through all phases of the Child and Family Services Review (CFSR) process. In addition, ACF staff coordinate the work of the 10 resource centers to provide additional support and assistance to the states. Spurred by the passage of the 1997 Adoption and Safe Families Act (ASFA), ACF launched the CFSR in 2001 to improve its existing monitoring efforts, which had once been criticized for focusing exclusively on states' compliance with regulations rather than on their performance over a full range of child welfare services. The CFSR process combines a statewide self-assessment, an on-site case file review that is coupled with stakeholder interviews, and the development and implementation of a 2-year program improvement plan (PIP) with performance benchmarks to measure progress in improving noted deficiencies. In assessing performance through the CFSR, ACF relies, in part, on its own data systems, the National Child Abuse and Neglect Data System (NCANDS) and the Adoption and Foster Care Analysis and Reporting System (AFCARS), which were designed prior to CFSR implementation to capture, report, and analyze the child welfare information collected by the states. Today, these systems provide the national data necessary for ACF to calculate national standards for key performance items against which all states are measured and to determine, in part, whether or not states are in substantial conformity on CFSR outcomes and systemic factors. Once ACF approves the PIP, states are required to submit quarterly progress reports. Pursuant to CFSR regulations, federal child welfare funds can be withheld if states do not show adequate PIP progress, but these penalties are suspended during the 2-year PIP implementation term. In preparation for the next round of CFSRs, ACF officials have formed a Consultation Work Group of ACF staff, child welfare administrators, data experts, and researchers who will propose recommendations on the CFSR measures and processes. The group's resulting proposals for change, if any, are not yet available.
While ACF officials in the central office contend that stakeholder interviews and case reviews complement the data profiles, many state officials and experts reported that additional data from the statewide assessment could bolster the evaluation of state performance. ACF and state officials support the objectives of the review, especially in focusing on children's outcomes and strengthening relationships with stakeholders, and told us they perceive the process as valuable. For example, ACF officials from 8 regional offices noted that the CFSRs were more intensive and more comprehensive than the other types of reviews they had conducted in the past, creating a valuable tool for regional officials to monitor states' performance. In addition, state officials from every state we visited told us that the CFSR process helped to improve collaboration with community stakeholders. Furthermore, state staff from 4 of the 5 states we visited told us the CFSR led to increased public and legislative attention to critical issues in child welfare. For example, caseworkers in Wyoming told us that without the CFSR they doubted whether their state agency's administration would have focused on needed reforms. They added that the agency used the CFSR findings to request legislative support for the hiring of additional caseworkers. Along with the value associated with improved stakeholder relations, the ACF officials we talked to and many state officials reported that the process has been helpful in highlighting the outcomes and systemic factors, as well as other key performance items that need improvement. According to our survey, 26 of the 36 states that commented on the findings of the final CFSR report indicated that they generally or completely agreed with the findings, even though performance across the states was low in certain key outcomes and performance items. For example, not one of the 41 states with final reports released through 2003 was found to be in substantial conformity with either the outcome measure that assesses the permanency and stability of children's living situations or the outcome measure that assesses whether states had enhanced families' capacity to provide for their children's needs. Moreover, across all 14 outcomes and systemic factors, state performance ranged from achieving substantial conformity on as few as 2 outcomes and systemic factors to as many as 9. As figure 1 illustrates, the majority of states were determined to be in substantial conformity with half or fewer of the 14 outcomes and systemic factors assessed. States' performance on the outcomes related to safety, permanency, and well-being—as well as the systemic factors—is determined by their performance on an array of items, such as establishing permanency goals, ensuring worker visits with parents and children, and providing accessible services to families. The CFSR showed that many states need improvement in the same areas. For example, across all 41 states reviewed through 2003, the 10 items most frequently rated as needing improvement included assessing the needs and services of children, parents, and foster parents (40 states); assessing the mental health of children (37 states); and establishing the most appropriate permanency goal for the child (36 states). Given the value that ACF and the states have assigned to the CFSR process, both have spent substantial financial resources and staff time to prepare for and implement the reviews.
In fiscal years 2001-03, when most reviews were scheduled, ACF budgeted an additional $300,000 annually for CFSR-related travel. In fiscal year 2004, when fewer reviews were scheduled, ACF budgeted about $225,000. To further enhance its capacity to conduct the reviews, and to obtain additional logistical and technical assistance, ACF spent approximately $6.6 million annually to hire contractors. Specifically, ACF has let three contracts to assist with CFSR-related activities, including training reviewers to conduct the on-site reviews, tracking final reports and PIP documents, and, as of 2002, writing the CFSR final reports. Additionally, ACF hired 22 new staff to build central and regional office capacity and dedicated 4 full-time staff and 2 state government staff temporarily on assignment with ACF to assist with the CFSR process. To build a core group of staff with CFSR expertise, ACF created the National Review Team, composed of central and regional office staff with additional training in and experience with the review process. In addition, to provide more technical assistance to the states, ACF reordered the priorities of the national resource centers to focus their efforts primarily on helping states with the review process. Like ACF, states also spent financial resources on the review. While some states did not track CFSR expenses—such as staff salaries, training, or administrative costs—of the 25 states that reported such information in our survey, the median expense to date was $60,550, although states reported spending as little as $1,092 and as much as $1,000,000 on the CFSR process. Although ACF officials told us that states can use Title IV-E funds to pay for some of their CFSR expenses, only one state official addressed the use of these funds in our survey, commenting that it was not until after the on-site review occurred that the state learned these funds could have been used to offset states' expenses. States also reported that they dedicated staff time to prepare for the statewide assessment and to conduct the on-site review, which sometimes had a negative impact on some staff members' regular duties. According to our survey, 45 states reported dedicating up to 200 full-time equivalent (FTE) staff, with an average of 47 FTEs, to the statewide assessment process. Similarly, 42 states responded that they dedicated between 3 and 130 FTEs, with an average of 45 FTEs, to the on-site review process. For some caseworkers, dedicating time to the CFSR meant that they were unable to manage, or limited in their ability to manage, their typical workload. For example, Wyoming caseworkers whose case files were selected for the on-site review told us that they needed to be available to answer reviewers' questions all day every day during the on-site review, which they said prevented them from conducting necessary child abuse investigations or home visits. Child welfare-related stakeholders—such as judges, lawyers, and foster parents—also contributed time to the CFSR. State officials in the 5 states we visited, as well as child welfare experts, reported on several data improvements that could enhance the reliability of CFSR findings. In particular, they highlighted inaccuracies with the AFCARS and NCANDS data that are used for establishing the national standards and creating the statewide data profiles, which are then used to determine if states are in substantial conformity.
These concerns echoed the findings of a prior GAO study on the reliability of these data sources, which found that states are concerned that the national standards used in the CFSR are based on unreliable information and should not be used as a basis for comparison and potential financial penalty. Furthermore, many states needed to resubmit their statewide data after finding errors in the data profiles ACF would have used to measure compliance with the national standards. According to our national survey, of the 37 states that reported on resubmitting data for the statewide data profile, 23 needed to resubmit their statewide data at least once, with one state needing to resubmit as many as five times to accurately reflect revised data. Four states reported in our survey that they did not resubmit their data profiles because they did not know they had this option or they did not have enough time to resubmit before the review. In addition to expressing these data concerns, child welfare experts as well as officials in all of the states we visited commented that existing practices that benefit children might conflict with actions needed to attain the national standards. For example, officials in New York said that they recently implemented an initiative to facilitate adoptions. Because these efforts focus on the backlog of children who have been in foster care for several years, New York officials predict that their performance on the national standard for adoption will be lower since many of the children in the initiative have already been in care for more than 2 years. Experts and officials from multiple states also commented that they believe the on-site review case sample of 50 cases is too small to provide an accurate picture of statewide performance, although ACF officials stated that the case sampling is supplemented with additional information. For example, Oklahoma officials we visited commented that they felt the case sample size was too small, especially since they annually assess more than 800 of their own cases—using a procedure that models the federal CFSR—and obtain higher performance results than the state received on its CFSR. Furthermore, because not every case in the states’ sample is applicable to each item measured in the on-site review, we found that sometimes as few as 1 or 2 cases were being used to evaluate states’ performance on an item. For example, Wyoming had only 2 on-site review cases applicable for the item measuring the length of time to achieve a permanency goal of adoption, but for 1 of these cases, reviewers determined that appropriate and timely efforts had not been taken to achieve finalized adoptions within 24 months, resulting in the item being assigned a rating of area needing improvement. While ACF officials acknowledged the insufficiency of the sample size, they contend that the case sampling is augmented by stakeholder interviews for all items and applicable statewide data for the five CFSR items with corresponding national standards, therefore providing sufficient evidence for determining states’ conformity. All of the states we visited experienced discrepant findings between the aggregate data from the statewide assessment and the information obtained from the on-site review. We also found that in these 5 states, ACF had assigned an overall rating of area needing improvement for 10 of the 11 instances in which discrepancies occurred. 
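To illustrate the statistical concern underlying these sample-size criticisms, the sketch below computes Wilson score confidence intervals for an item's "strength" rating under two hypothetical scenarios: a full 50-case sample and a 2-applicable-case sample like the Wyoming adoption example. This is a minimal sketch of the general statistical point only; the case counts and ratings are invented for illustration and are not drawn from any actual CFSR, and it is not a reconstruction of ACF's methodology.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Return the 95 percent Wilson score confidence interval for a
    binomial proportion (successes out of n applicable cases)."""
    if n == 0:
        raise ValueError("no applicable cases")
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half_width = (z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))) / denom
    return (center - half_width, center + half_width)

# Hypothetical full sample: 45 of 50 applicable cases rated a strength (90 percent).
print(wilson_interval(45, 50))  # about (0.79, 0.96)

# Hypothetical thin sample: 1 of 2 applicable cases rated a strength (50 percent);
# the interval spans nearly the whole possible range.
print(wilson_interval(1, 2))    # about (0.09, 0.91)
```

With only 1 or 2 applicable cases, the interval covers most of the possible range, which is consistent with the concern that item ratings based on so few cases may say little about statewide performance.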
ACF officials acknowledged the challenge of resolving data discrepancies, noting that such complications can delay the release of the final report and increase or decrease the number of items that states must address in their PIPs. While states have the opportunity to resolve discrepancies by submitting additional information explaining the discrepancy or by requesting an additional case review, only 1 state to date has decided to pursue the latter option. Further, several state officials and experts told us that additional data from the statewide assessments—or other data sources compiled by the states—could bolster the evaluation of states' performance, but they found this information to be missing or insufficiently used in the final reports. For example, child welfare experts and state officials from California and New York—who are using alternative data sources to AFCARS and NCANDS, such as longitudinal data that track children's placements over time—told us that the inclusion of this more detailed information would provide a more accurate picture of states' performance nationwide. An HHS official told us that alternative data are used to assess state performance only in situations in which a state does not have NCANDS data, since states are not mandated to have these systems. Given their concerns with the data used in the review process, state officials in 4 of the 5 states believed that the threshold for substantial conformity was difficult to attain. While an ACF official told us that different thresholds for the national standards had been considered, ACF policy makers ultimately concluded that a threshold at the 75th percentile of the nationwide data would be used. ACF officials recognize that they have set a high standard. However, they believe it is attainable and supportive of their overall approach to move states to the standard through continuous improvement. Forty-one states are engaged in program improvement planning, but many uncertainties, such as those related to federal guidance and monitoring and the availability of state resources, have affected the development, implementation, and funding of the PIPs. State PIPs include strategies such as revising or developing policies, training caseworkers, and engaging stakeholders, and ACF has issued regulations and guidance to help states develop and implement their plans. Nevertheless, states reported uncertainty about how to develop their PIPs and commented on the challenges they faced during implementation. For example, officials from 2 of the states we visited told us that ACF had rejected their PIPs before final approval, even though these officials said that the plans were based on examples of approved PIPs that regional officials had provided. Further, at least 9 of the 25 states responding to a question in our survey on PIP implementation indicated that insufficient time, funding, and staff, as well as high caseloads, were the greatest challenges they faced. As states progress in PIP implementation, some ACF officials expressed a need for more guidance on how to monitor state accomplishments, and both ACF and state officials were uncertain about how the estimated financial penalties would be applied if states fail to achieve the goals described in their plans. State plans include a variety of strategies to address weaknesses identified in the CFSR review process.
However, because most states had not completed PIP implementation by the time of our analysis, the extent to which states have improved outcomes for children has not been determined. While state PIPs varied in their detail, design, and scope, according to our analysis of 31 available PIPs, these state plans have focused to some extent on revising or developing policies; reviewing and reporting on agency performance; improving information systems; and engaging stakeholders such as courts, advocates, foster parents, private providers, or sister agencies in the public sector. Table 1 shows the number of states that included each of the six categories and subcategories of strategies we developed for the purposes of this study. Our analysis also showed that many states approached PIP development by building on state initiatives in place prior to the on-site review. Of the 42 states reporting on this topic in our survey, 30 said that their state identified strategies for the PIP by examining ongoing state initiatives. For example, local officials in New York City and state officials in California told us that state reform efforts—born in part of legal settlements—have become the foundation for the PIP. State officials in California informed us that reform efforts initiated prior to the CFSR, such as implementing a new system for receiving and investigating reports of abuse and neglect and developing more early intervention programs, became integral elements in the PIP. ACF has provided states with regulations and guidance to facilitate PIP development, but some states believe the requirements have been unclear. For example, several states commented in our survey that multiple aspects of the PIP approval process were unclear, such as how much detail and specificity the agency expects the plan to include; what type of feedback states could expect to receive; when states could expect to receive such feedback; and whether a specific format was required. Officials in the states we visited echoed survey respondents' concerns, with officials from 3 of the 5 states informing us that ACF had given states different instructions regarding acceptable PIP format and content. For example, California and Florida officials told us that their program improvement plans had been rejected prior to final approval, even though they were based on examples of approved plans that regional officials had provided. In addition, California officials told us that they did not originally know how much detail the regional office expected in the PIP and believed that the level of detail the regional office staff ultimately required was too high. Specifically, officials in California said that the version of their plan that the region accepted included 2,932 action steps—a number these officials believe is too high given their state's limited resources and the 2-year time frame to implement the PIP. ACF officials have undertaken several steps to clarify their expectations for states and to improve technical assistance. For example, in 2002, 2 years after ACF released the CFSR regulations and a procedures manual, ACF offered states additional guidance and provided a matrix format to help state officials prepare their plans. ACF officials told us the agency sends a team of staff from ACF and resource centers to the state to provide intensive on-site technical assistance when it determines that a state is slow in developing its PIP.
Further, ACF has sent resource center staff to states to provide training almost immediately after the completion of the on-site review to encourage state officials to begin PIP development before the final report is released. Our survey results indicate that increasing numbers of states are developing their PIPs early in the CFSR process, which may reflect ACF's emphasis on PIP development. According to our analysis, of the 18 states reviewed in 2001, only 2 started developing their PIPs before or during the statewide assessment phase. Among states reviewed in 2003, this share increased to 5 of 9. Evidence suggests that lengthy time frames for PIP approval have not necessarily delayed PIP implementation, and ACF has made efforts to reduce the time the agency takes to approve states' PIPs. For example, officials in 3 of the 5 states we visited told us they began implementing new action steps before ACF officially approved their plans because many of the actions in their PIPs were already under way. In addition, according to our survey, of the 28 states reporting on this topic, 24 reported that they had started implementing their PIP before ACF approved it. Further, our analysis shows that the length of time between the PIP due date, which statute sets at 90 days after the release of the final CFSR report, and final ACF PIP approval has ranged considerably—from 45 to 349 business days. For almost half of the plans, ACF's approval occurred 91 to 179 business days after the PIP was due. Our analysis indicated that ACF has recently reduced the time lapse by 46 business days. This shorter time lapse for PIP approval may be due, in part, to ACF's emphasis on PIP development. According to one official, ACF has directed states to concentrate on submitting a plan that can be quickly approved. Another ACF official added that because of ACF's assistance with PIP development, states are now submitting higher-quality PIPs that require fewer revisions. Program improvement planning has been ongoing, but uncertainties have made it difficult for states to implement their plans and ACF to monitor state performance. Such uncertainties include not knowing whether state resources are adequate to implement the plans and how best to monitor state reforms. In answering a survey question about PIP implementation challenges, a number of states identified insufficient funding, staff, and time—as well as high caseloads—as their greatest obstacles. Figure 2 depicts these results. One official from Pennsylvania commented that because of the state's budget shortfall, no additional funds were available for the state to implement its improvement plan, so most counties must improve outcomes with little or no additional resources. A Massachusetts official reported that fiscal problems in his state likely would lead the state to lay off attorneys and caseworkers and to cut funding for family support programs. While state officials acknowledged that they do not have specific estimates of PIP implementation expenses because they have not tracked this information in their state financial systems, many states indicated that to cope with financial difficulties, they had to be creative and use resources more efficiently to fund PIP strategies. Of the 26 states responding to a question in our survey on PIP financing, 12 said that they were financing the PIP strategies by redistributing current funding, and 7 said that they were using no-cost methods.
In an example of the latter, Oklahoma officials reported pursuing in-kind donations from a greeting card company so that they could send thank-you notes to foster parents, believing this could increase foster parent retention and engagement. Aside from funding challenges, states also reported that PIP implementation has been affected by staff workloads, but these comments were mixed. In Wyoming, for example, caseworkers told us that their high caseloads would prevent them from implementing many of the positive action steps included in their improvement plan. In contrast, Oklahoma caseworkers told us that the improvement plan priorities in their state—such as finding permanent homes for children—have helped them become more motivated, more organized, and more effective with time management. ACF officials expressed uncertainty about how best to monitor states' progress and apply estimated financial penalties when progress was slow or absent, and 3 of the 5 states we visited reported frustration with the limited guidance ACF had provided on the PIP quarterly reporting process. For example, 4 regional offices told us that they did not have enough guidance on or experience with evaluating state quarterly reports. Some regional offices told us they require states to submit evidence of each PIP action step's completion, such as training curricula or revised policies, but one ACF official acknowledged that this is not yet standard procedure, although the agency is considering efforts to make the quarterly report submission procedures more uniform. Moreover, ACF staff from 1 region told us that because PIP monitoring varies by region, they were concerned about enforcing penalties. Shortly before California's quarterly report was due, state officials told us they still did not know how much detail to provide, how to demonstrate whether they had completed certain activities, or what would happen if they did not reach the level of improvement specified in the plan. Based on data from the states that have been reviewed to date, the estimated financial penalties range from a total of $91,492 for North Dakota to $18,244,430 for California, but the impact of these potential penalties remains unclear. While ACF staff from most regional offices told us that potential financial penalties are not the driving force behind state reform efforts, some contend that the estimated penalties affect how aggressively states pursue reform in their PIPs. For example, regional office staff noted that 1 state's separate strategic plan included more aggressive action steps than those in its PIP because the state did not want to be liable for penalties if it did not meet its benchmarks for improvement. State officials also had mixed responses as to how the financial penalties would affect PIP implementation. An official in Wyoming said that incurring the penalties was equivalent to shutting down social service operations in 1 local office for a month, while other officials in the same state thought it would cost more to implement PIP strategies than it would to incur financial penalties if benchmarks were unmet. Nevertheless, these officials also said that while penalties are a consideration, they have used the CFSR as an opportunity to provide better services. One official in another state agreed that it would cost more to implement the PIP than to face financial penalties, but this official was emphatic in the state's commitment to program improvement.
To implement the CFSRs, ACF has focused its activities almost entirely on the CFSR review process, and regional staff report limitations in providing assistance to states in helping them to meet key federal goals. ACF officials told us the CFSR has become the agency's primary mechanism for monitoring states and facilitating program improvement, but they acknowledged that regional office staff might not have realized the full utility of the CFSR as a tool to integrate all existing training and technical assistance efforts. Further, according to ACF officials, meetings to discuss a new system of training and technical assistance are ongoing, though recommendations were not available at the time of publication of our April 2004 report. Levels of resource center funding, the scope and objectives of the resource centers' work, and the contractors who operate the resource centers are all subject to change before the current cooperative agreements expire at the close of fiscal year 2004. ACF officials told us that the learning opportunities in the Children's Bureau are intentionally targeted at the CFSR, but staff in 3 regions told us that this training should cover a wider range of subjects—including topics outside of the CFSR process—so that regional officials could better meet states' needs. All 18 of the courses that ACF has provided to its staff since 2001 have focused on such topics as writing final CFSR reports and using data for program improvement, and while ACF officials in the central office said that the course selection reflects both the agency's prioritization of the CFSR process and staff needs, our interviews with regional staff suggest that some of them wish to obtain additional non-CFSR training. In addition, although ACF organizes biennial conferences for state and federal child welfare officials, staff from 5 regions told us that they wanted more substantive interaction with their ACF colleagues, such as networking at conferences, to increase their overall child welfare expertise. Further, staff from 6 of the 10 regions told us that their participation in conferences is limited because of funding constraints. ACF staff in all 10 regions provide ongoing assistance or ad hoc counseling to states through phone, e-mail, or on-site support, but staff from 6 regions told us they would like to conduct site visits with states more regularly to improve their relationships with state officials and provide more targeted assistance. Further, staff in 4 regions felt their travel funds were constrained and explained that they try to stretch their travel dollars by addressing states' non-CFSR needs, such as court improvements, during CFSR-related visits. While an ACF senior official from the central office confirmed that CFSR-related travel constituted 60 percent of its 2002 child welfare-monitoring budget, this official added that CFSR spending represents an infusion of funding rather than a reprioritization of existing dollars, and stated that regional administrators have discretion over how the funds are allocated within their regions. In addition, the same official stated that he knew of no instance in which a region requested more money for travel than it received. Concerns from state officials in all 5 of the states we visited echoed those of regional office staff and confirmed the need for improvements to the overall training and technical assistance structure.
For example, state officials in New York and Wyoming commented that ACF staff from their respective regional offices did not have sufficient time to spend with them on CFSR matters because regional staff were simultaneously occupied conducting reviews in other states. However, our survey results revealed that states reviewed in 2003 had much higher levels of satisfaction with regional office assistance than those states reviewed in 2001, suggesting that regional office training and technical assistance improved as the process evolved. ACF and the states have devoted considerable resources to the CFSR process, but to date, no state has passed the threshold for substantial conformity on all CFSR measures, and concerns remain regarding the validity of some data sources and the limited use of all available information to determine substantial conformity. The majority of states surveyed agreed that CFSR results are similar to their own evaluation of areas needing improvement. However, without using more reliable data—and in some cases, additional data from state self-assessments—to determine substantial conformity, ACF may be over- or under-estimating the extent to which states are actually meeting the needs of the children and families in their care. These over- or under-estimates can, in turn, affect the scope and content of the PIPs that states must develop in response. In addition, the PIP development, approval, and monitoring processes remain unclear to some, potentially reducing states' credibility with their stakeholders and straining the federal/state partnership. Similarly, regional officials are unclear as to how they can accomplish their various training and technical assistance responsibilities, including the CFSR. Without clear guidance on how to systematically prepare and monitor PIP-related documents, and on how regional officials can integrate their many oversight responsibilities, ACF has left state officials unsure of how their progress over time will be judged and has potentially complicated its own monitoring efforts. To ensure that ACF uses the best available data in measuring state performance, we recommended in our April 2004 report that the Secretary of HHS expand the use of additional data states may provide in their statewide assessments and consider alternative data sources when available, such as longitudinal data that track children's placements over time, before making final CFSR determinations. In addition, to ensure that ACF regional offices and states fully understand the PIP development, approval, and monitoring processes, and that regional offices fully understand ACF's prioritization of the CFSR as the primary mechanism for child welfare oversight, we recommended that the Secretary of HHS (1) issue clarifying guidance on the PIP process and evaluate states' and regional offices' adherence to this guidance, and (2) provide guidance to regional offices explaining how to better integrate the many training and technical assistance activities for which they are responsible, such as participation in state planning meetings and the provision of counsel to states on various topics, with their new CFSR responsibilities. In response to the first recommendation, HHS acknowledged that the CFSR is a new process that continues to evolve, and also noted several steps it has taken to address the data quality concerns we raised in our report.
We believe that our findings from the April 2004 report, as well as a previous report on child welfare data and states' information systems, fully acknowledge HHS's initial actions, as well as the substantial resources the agency has already dedicated to the review process. However, our recommendation was meant to encourage HHS to take additional actions to improve its use of data in conducting these reviews and, in turn, its oversight of state performance. In response to the second recommendation, HHS said that it has continued to provide technical assistance and training to states and regional offices, when appropriate. HHS noted that it is committed to continually assessing and addressing training and technical assistance needs. In this context, our recommendation was intended to encourage HHS to enhance existing training efforts and focus both on state and on regional officials' understanding of how to incorporate the CFSR process into their overall improvement and oversight efforts. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further contacts regarding this testimony, please call Cornelia M. Ashby at (202) 512-8403. Individuals making key contributions to this testimony include Diana Pietrowiak and Joy Gambino. D.C. Family Court: Operations and Case Management Have Improved, but Critical Issues Remain. GAO-04-685T. Washington, D.C.: April 23, 2004. Child and Family Services Reviews: Better Use of Data and Improved Guidance Could Enhance HHS's Oversight of State Performance. GAO-04-333. Washington, D.C.: April 20, 2004. Child Welfare: Improved Federal Oversight Could Assist States in Overcoming Key Challenges. GAO-04-418T. Washington, D.C.: January 28, 2004. D.C. Family Court: Progress Has Been Made in Implementing Its Transition. GAO-04-234. Washington, D.C.: January 6, 2004. Child Welfare: States Face Challenges in Developing Information Systems and Reporting Reliable Child Welfare Data. GAO-04-267T. Washington, D.C.: November 19, 2003. Child Welfare: Enhanced Federal Oversight of Title IV-B Could Provide States Additional Information to Improve Services. GAO-03-956. Washington, D.C.: September 12, 2003. Child Welfare: Most States Are Developing Statewide Information Systems, but the Reliability of Child Welfare Data Could Be Improved. GAO-03-809. Washington, D.C.: July 31, 2003. D.C. Child and Family Services: Key Issues Affecting the Management of Its Foster Care Cases. GAO-03-758T. Washington, D.C.: May 16, 2003. Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 21, 2003. Foster Care: States Focusing on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-03-626T. Washington, D.C.: April 8, 2003. Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357. Washington, D.C.: March 31, 2003. Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002. District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children's Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000. Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000.
Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999. Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999. Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 10, 1999. Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS- 99-32. Washington, D.C.: May 6, 1999. Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 1999. Child Welfare: Early Experiences Implementing a Managed Care Approach. GAO/HEHS-99-8. Washington, D.C.: October 21, 1998. Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In 2001, the Department of Health and Human Services' (HHS) Administration for Children and Families (ACF) implemented the Child and Family Services Reviews (CFSR) to increase states' accountability. The CFSR uses states' data profiles and statewide assessments, as well as interviews and an on-site case review, to measure state performance on 14 outcomes and systemic factors, including child well-being and the provision of caseworker training. The CFSR also requires progress on a program improvement plan (PIP); otherwise ACF may apply financial penalties. This testimony is based on our April 2004 report and addresses (1) ACF's and the states' experiences preparing for and conducting the statewide assessments and on-site reviews; (2) ACF's and the states' experiences developing, funding, and implementing items in PIPs; and (3) any additional efforts that ACF has taken beyond the CFSR to improve state performance. For the April 2004 report, we surveyed all 50 states, the District of Columbia, and Puerto Rico regarding their experiences throughout the CFSR process, visited 5 states to obtain first-hand information, and conducted a content analysis of all 31 available PIPs as of January 1, 2004. We also interviewed HHS officials--including those in all 10 regional offices--and key child welfare experts. ACF and many state officials perceive the CFSR as a valuable process and a substantial undertaking, but some data enhancements could improve its reliability. ACF staff in 8 of the 10 regions considered the CFSR a helpful tool to improve outcomes for children. Further, 26 of 36 states responding to a relevant question in our survey commented that they generally or completely agreed with the results of the final CFSR report, even though none of the 41 states with final CFSR reports released through 2003 has achieved substantial conformity on all 14 outcomes and systemic factors. Additionally, both ACF and the states have dedicated substantial financial and staff resources to the process. Nevertheless, several state officials and child welfare experts we interviewed questioned the accuracy of the data used in the review process. 
While ACF officials contend that stakeholder interviews and case reviews complement the data profiles, many state officials and experts reported that additional data from the statewide assessment could bolster the evaluation of state performance. Program improvement planning is under way, but uncertainties have affected the development, funding, and implementation of state PIPs. Officials from 3 of the 5 states we visited said ACF's PIP-related instructions were unclear, and at least 9 states reported in our survey that challenges to implementing their plans include insufficient funding, staff, and time. While ACF has provided some guidance, ACF and state officials remain uncertain about PIP monitoring efforts and how ACF will apply financial penalties if states fail to achieve their stated PIP objectives. Since 2001, ACF's focus has been almost exclusively on the CFSRs, and regional staff report limitations in providing assistance to states in helping them to meet key federal goals. While staff from half of ACF's regions told us they would like to provide more targeted assistance to states, and state officials in all 5 of the states we visited said that ACF's existing technical assistance efforts could be improved, ACF officials acknowledged that regional staff might still be adjusting to the new way ACF oversees child welfare programs. In the April 2004 report, we recommended that the Secretary of HHS ensure that ACF uses the best available data to measure state performance. We also recommended that the Secretary clarify PIP guidance and provide guidance to regional officials on how to better integrate their many oversight responsibilities. In commenting on a draft of the April 2004 report, HHS acknowledged that the CFSR is a new process that continues to evolve, and noted several steps it has taken to address the data quality concerns we raised in that report. |
The BMDS is designed to counter ballistic missiles of all ranges—short, medium, intermediate, and intercontinental. Short-range ballistic missiles have a range of less than 621 miles; medium-range ballistic missiles have a range from 621 to 1,864 miles; intermediate-range ballistic missiles have a range from 1,864 to 3,418 miles; and intercontinental ballistic missiles have a range greater than 3,418 miles. Since ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is developing a variety of systems that, when integrated, provide multiple opportunities to destroy ballistic missiles in flight. The BMDS includes space-based sensors, ground- and sea-based radars, ground- and sea-based interceptor missiles, and a command and control system that provides communication links to the sensors and interceptor missiles. Once a ballistic missile has been launched, these sensors and interceptors are coordinated to track or engage the threat missile during its flight. DOD develops its major defense acquisition systems through an acquisition process in which programs move through significant phases in their life-cycle. DOD programs have a materiel solution analysis phase, during which DOD analyzes and recommends materiel solutions for the identified need; a technology development phase, during which DOD reduces technology risk and determines the appropriate set of technologies to be integrated into the full system; a product development phase, formally known as engineering and manufacturing development, which represents program initiation, and during which the program focuses on integrating the system design, developing system capability, and demonstrating the manufacturing processes; a production and deployment phase for the purpose of achieving an operational capability that satisfies the mission need; and an operations and support phase, where DOD works to sustain the system in the most cost-effective manner. When MDA was established in 2002, the Secretary of Defense granted it exceptional flexibility to set requirements and manage the acquisition of the BMDS in order to meet a presidential directive to deliver an initial defensive capability against ballistic missiles in 2004. This decision postponed application of DOD acquisition policy for BMDS elements until they were mature enough to begin production and deployment. Because BMDS's entrance into DOD's acquisition cycle is deferred, MDA is exempt from certain laws and policies triggered by the phases of the acquisition life-cycle that generally require major defense acquisition programs to take steps such as the following: Prior to beginning the technology development phase and product development phase, conduct an analysis of alternatives to compare potential solutions and determine the most cost-effective weapon system to acquire. Before the program begins the product development phase, document key program performance, cost, and schedule goals in a baseline that has been approved by a higher-level DOD official. The baseline is considered the program's initial business case—evidence that the concept of the program can be developed and produced within existing resources. The baseline provides decision makers with the program's total cost for an increment of work, average unit costs for systems to be delivered, key dates associated with a capability, and the weapon's intended performance parameters.
Once a baseline has been approved, measure the program against the approved baseline or obtain the approval of a higher-level acquisition executive before making changes. Once a baseline has been approved, report certain increases in unit cost measured from the original and the current program baseline. Unit cost is the cost divided by the quantity produced. Prior to beginning the product development and/or production and deployment phases of the DOD acquisition cycle, obtain an independent life-cycle cost estimate. While these flexibilities give MDA latitude to manage the BMDS and enable it to rapidly develop and field new systems, we have previously reported that the agency has used these flexibilities to employ acquisition strategies with high levels of concurrency (that is, overlapping activities such as testing and production) and that the flexibilities have also hampered oversight and accountability. Congress and DOD have taken steps to address concerns over MDA's acquisition management strategy, accountability, and oversight. Although MDA is not yet required to establish an acquisition program baseline pursuant to 10 U.S.C. § 2435 and related DOD policy because of the acquisition flexibilities it has been granted, Congress has enacted legislation requiring MDA to establish some baselines. MDA reported baselines for several BMDS programs to Congress for the first time in its June 2010 BMDS Accountability Report to respond to statutory requirements in the National Defense Authorization Act for Fiscal Year 2008. Most recently, the National Defense Authorization Act for Fiscal Year 2012 required MDA to establish and maintain baselines for program elements or major portions of such program elements. The act specified information to be included in the baselines, such as total quantities and quantities by fiscal year, and required an annual report of these baselines to Congress. In 2010, MDA created a new review process in which the agency identified five phases of acquisition, as seen in table 1. The agency has documented the key knowledge that is needed prior to the technology development, product development, initial production, and production phases. For example, as part of the process, MDA requires a program to identify alternatives to meet the mission's needs before it can proceed to MDA's technology development phase. MDA officials have stated in the past that they expect that aligning the development efforts with the phases will help to ensure that they obtain the appropriate level of knowledge before allowing the acquisitions to move from one phase to the next. One of the most significant new thrusts in BMDS acquisitions is the development and deployment of systems to aid in the defense of Europe and to augment the current protection of the United States. In September 2009, the President announced a new approach called the European Phased Adaptive Approach, which is structured around Aegis ship and Aegis Ashore systems in addition to other various BMDS sensors. The BMDS in Europe is planned to be deployed over time as the systems become more mature. The final phase of U.S. missile defense in Europe is planned to enhance the limited defense of the United States against intercontinental ballistic missiles currently provided by the U.S.-based Ground-based Midcourse Defense (GMD) system. Toward the end of our audit work, in March 2013, the Secretary of Defense made an announcement that canceled the final phase of U.S.
missile defense in Europe that had planned to use Aegis BMD SM-3 Block IIB interceptors, and announced several other plans, including deploying additional ground-based interceptors at Fort Greely, Alaska, and deploying a second AN/TPY-2 radar in Japan. Because this announcement occurred late in our audit, we were not able to assess its effects and incorporate this information into our report. The DOD 2010 Ballistic Missile Defense Review stated that other regional missile defenses are to be developed, each tailored to a specific region of the world and its particular threats and circumstances. The BMDS in Europe is the first such approach to missile defense to be developed. We reported in January 2011 that DOD was planning for additional regional defenses in East Asia and the Middle East. Table 2 describes BMDS elements discussed in this report, the defensive capabilities each currently provides or plans to provide for a particular mission, and their current MDA acquisition phase. Figure 1 depicts the BMDS elements that could be used to engage a threat missile during the course of its flight. An engagement scenario using the Aegis BMD element, for example, could occur as follows: After the launch of a threat missile, the Space Based Infrared System, an Air Force system of satellites that detect ballistic missile launches, detects the launch and sends a cue to the Command, Control, Battle Management, and Communications system. The Command, Control, Battle Management, and Communications system directs one or more Army Navy/Transportable Radar Surveillance and Control Model 2 radars to track the threat missile. The radars provide track information to the Command, Control, Battle Management, and Communications system, which develops system track data to support Aegis BMD engagements. Relying on data provided by the Army Navy/Transportable Radar Surveillance and Control Model 2 radars and its own SPY-1 radar, the Aegis BMD ship uses SM-3 missiles to intercept and attempt to destroy the threat. A key challenge DOD and MDA's new Director face is ensuring that the Department is getting the best value for its missile defense investments, particularly as MDA confronts growing fiscal pressure while developing new programs and supporting and upgrading its existing systems. We have frequently reported on the importance of establishing a sound basis before committing resources to developing a new product. We have also reported that part of a sound basis is a full analysis of alternatives (AOA). An AOA also helps ensure that key DOD and congressional decision makers understand why the chosen system was selected, which helps them prioritize limited investment dollars to achieve a balanced BMDS portfolio. Because of MDA's acquisition flexibilities, its programs are not required to complete an AOA. While MDA has performed some limited analyses that consider alternatives, it has not conducted a robust AOA for its new programs. We have reported that without AOAs, programs may not select the best solution for the warfighter, are at risk for cost increases, and can face schedule delays. However, some progress was made in January 2013 when Congress directed DOD to conduct a comprehensive assessment of PTSS alternatives. An AOA can help establish a sound basis for an acquisition by comparing potential solutions and determining the most promising and cost-effective weapon system to acquire.
As such, major defense acquisition programs are generally required by law and DOD's acquisition policy to conduct an AOA before they are approved to enter the technology development phase. A robust AOA can provide decision makers with the information they need by helping establish a sound basis that is used to assess whether a concept can be developed and produced within existing resources and whether it is the best solution to meet the warfighter's needs. It accomplishes this by providing a foundation for developing and refining the program's requirements, and giving insight into the technical feasibility and costs of alternatives. Specifically, an AOA should address key questions, such as the following: Did an AOA occur at the appropriate time? What alternatives meet the warfighter's needs? Are the alternatives operationally suitable and effective? Can the alternatives be supported? What are the programmatic (e.g., cost or schedule), technical, and operational risks for each alternative? What are the development, production, deployment, and support costs for each alternative? How do the alternatives compare to one another? In addition, as we reported in September 2009 and again in September 2012, AOAs should be completed early enough in the acquisition cycle, prior to the start of technology development, to provide time for adjustments to requirements before those requirements are finalized. Because of the flexibilities that have been granted to MDA, its programs are not required to complete an AOA before starting technology development. Nevertheless, MDA's acquisition directive requires programs to show they have identified competitive alternative materiel solutions before they can proceed to MDA's technology development phase. However, this directive provides no specific guidance on how this alternatives analysis should be conducted or what criteria should be used to identify and assess alternatives, such as risks and costs. According to DOD, the office of the Director for Cost Assessment and Program Evaluation develops and approves study guidance for AOAs for other major defense acquisition programs. MDA could look to that office for support should it decide to undertake more robust analyses of alternatives. While MDA has conducted some analyses that consider alternatives, it has not conducted robust AOAs for its new programs—the Aegis BMD SM-3 Block IIB and PTSS programs. We recently reported that the SM-3 Block IIB program did not conduct an AOA prior to beginning technology development. While the program assessed some alternatives that could potentially achieve early intercept, it did not include other key aspects of an AOA, such as considering a broad range of alternatives and performing a cost-effectiveness assessment of the concepts considered. Recent MDA technical analysis has led to changes in the initial program assumptions about how to use the SM-3 Block IIB and suggests that additional development and investment by the program will be needed to defend the United States. Further, potential missile configurations that are under consideration may provide increased capability for the SM-3 Block IIB but also pose significant cost and safety risks. To some extent, these program issues may have been driven by the early decision to narrow solutions without the benefit of an AOA.
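The cost-effectiveness comparison that a robust AOA provides can be sketched in a few lines of code. The example below is hypothetical: the alternative names, cost figures, and effectiveness scores are invented for illustration and do not reflect MDA data or any actual analysis, and a real AOA would weigh schedule, technical, and operational risks alongside cost rather than rely on a single ratio.

```python
# Hypothetical sketch of the side-by-side comparison an AOA supports.
# All names and numbers below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    lifecycle_cost_billions: float  # development + production + support
    effectiveness: float            # modeled mission effectiveness, 0 to 1

alternatives = [
    Alternative("Upgrade existing interceptor", 2.1, 0.70),
    Alternative("New-design interceptor", 3.8, 0.85),
    Alternative("Additional sensors only", 1.2, 0.45),
]

# Rank alternatives by cost per unit of effectiveness (lower is better).
for alt in sorted(alternatives,
                  key=lambda a: a.lifecycle_cost_billions / a.effectiveness):
    ratio = alt.lifecycle_cost_billions / alt.effectiveness
    print(f"{alt.name}: ${alt.lifecycle_cost_billions:.1f}B, "
          f"effectiveness {alt.effectiveness:.2f}, ratio {ratio:.2f}")
```

Ranked this way, the notional sensor-only option scores best on cost per unit of effectiveness (about 2.67) even though it is the least effective alternative, which is precisely why an AOA pairs such ratios with risk and suitability assessments before a concept is selected.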
Although the PTSS program has conducted a number of studies in the past, none can be considered a robust AOA because they either assessed too narrow a range of alternatives or did not fully assess program and technical risks. Congress included a requirement in the National Defense Authorization Act for Fiscal Year 2013 for DOD to evaluate PTSS alternatives, partially in response to concerns raised last year by the National Academy of Sciences about the costs and benefits of the PTSS program. DOD's Cost Assessment and Program Evaluation office is currently in the process of conducting a comprehensive review of PTSS that may include many aspects of an AOA, but it is unclear at this point whether it will be thorough enough to determine the best concept. By not conducting robust AOAs, these programs risk developing weapon systems that may not be the best solution to meet the warfighter's needs and encountering cost, schedule, and technical problems. It also means that key DOD and congressional decision makers may have a limited understanding of the reason these systems were selected. In the past few years, MDA has had declining budgets, some program cancellations, and curtailment of other programs, partially because of affordability concerns. Looking forward, MDA faces important decisions about how it will balance and prioritize its portfolio of BMDS investments as it increasingly develops new programs while supporting and upgrading existing deployed systems. We have previously reported that successful organizations follow a disciplined process to assess alternatives to help them achieve a balanced portfolio that spreads risk across products, aligns with strategic goals and objectives, and maximizes return on investment. To this end, AOAs help decision makers prioritize limited investment dollars by assessing operational benefits against technical and affordability challenges of individual systems before committing resources in order to achieve a balanced portfolio that meets strategic goals within available resources. AOAs are therefore a key first step in establishing a sound basis for acquisitions. MDA's annual budget peaked in fiscal year 2007 at $9.4 billion but has since trended downward to a requested $7.8 billion in fiscal year 2013. Since fiscal year 2009, DOD has canceled three programs because of technical issues, schedule delays, and concerns about the cost-effectiveness or operational role of the programs. In fiscal year 2009, DOD terminated the Kinetic Energy Interceptors program, which was developing a high-velocity booster rocket designed to intercept missiles in the boost and middle phases of flight, and the Multiple Kill Vehicle program, which was developing a way to place multiple kill vehicles on an interceptor. DOD terminated these programs after spending approximately $2.5 billion on their development. In addition, in fiscal year 2012, DOD canceled the Airborne Laser program, which placed a high-energy chemical laser onboard an airplane designed to intercept missiles, after spending over $5 billion on its development. To improve acquisition outcomes and achieve strategic goals for the United States and regional missile defense, MDA faces continuing portfolio challenges during this period of ongoing fiscal pressure. DOD already curtailed several existing BMDS programs in fiscal year 2012 because of affordability concerns.
For example, after approximately $2 billion had been spent in several years of development, the SBX sea-based radar was downgraded from operational status to a limited test status because of funding limitations. Despite demand for THAAD batteries from military commands, MDA reduced the number of batteries to be purchased from nine to six to meet budget constraints. Partially as a result, procurement of the AN/TPY-2, a ground-based radar component of the THAAD battery as well as a stand-alone forward-based sensor, was also reduced from 18 to 11. Balancing its portfolio of investments going forward will be a challenge as MDA plans to develop a number of new systems, such as PTSS and multiple versions of advanced interceptors for the Aegis BMD program, during the next few years while at the same time beginning full production of several new weapon systems, such as Aegis Ashore and the Aegis BMD SM-3 Block IB missile. In addition, it will continue to fund full operation and support costs for the GMD element. MDA also plans to share some of those costs with the services for other elements that are already being produced, such as the AN/TPY-2 radar and THAAD. AOAs could play a constructive role as MDA manages its portfolio of acquisitions. MDA gained important knowledge through its test program and took some positive steps to reduce acquisition risks for two of its programs. MDA increased its understanding of BMDS performance after successfully conducting its most complex integrated air and missile defense flight test to date as well as other important tests for the THAAD and Aegis BMD SM-3 Block IB programs. MDA also reduced the acquisition risk for two programs by delaying commitments to development until after the programs could demonstrate that the technologies and resources available are aligned with requirements. However, the Director of MDA faces continuing challenges addressing issues that stem from previous premature production commitments and minimizing further use of high-risk acquisition strategies. We reported in March 2009 that MDA was pursuing a concurrent development, manufacturing, and fielding strategy in which assets are produced and fielded before they are fully demonstrated through testing and modeling. We have previously reported that committing to production and fielding before development is complete is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. Moreover, best practices of successful organizations include a knowledge-based process in which each successive knowledge point builds on the preceding one, giving decision makers the knowledge they need, when they need it, to make decisions about whether to invest significant funds to move forward. High levels of acquisition risk continue to be present in many of the elements' acquisition strategies. For example, as we reported last year, MDA's production problems were magnified by high levels of overlap—or concurrency—between product development and production. Although the stated rationale for this overlap is to introduce systems in a timelier manner and to maintain an efficient industrial development and production workforce, MDA's Aegis BMD, GMD, and THAAD interceptor production has been significantly disrupted during the past few years due to this concurrency, delaying planned deliveries to the warfighter, raising costs, and disrupting the industrial base.
Program plans for Aegis Ashore and PTSS also include high acquisition risks due to planned premature commitments to production. In addition, we reported in April 2012 that risk reduction flight tests are conducted the first time a system is tested in order to confirm that it works before adding other test objectives, and that MDA's flight test program had been disrupted by the lack of such risk reduction flight tests. Looking forward, the risks for an upcoming complex test involving multiple MDA systems are elevated because MDA is planning to use a new type of target for the first time in this critical operational test. MDA conducted the largest integrated air and missile defense flight test to date, achieving near simultaneous intercepts of multiple targets by various BMDS interceptors. Flight Test Integrated-01, conducted in October 2012, was a combined developmental and operational flight test that for the first time utilized warfighters from multiple combatant commands and employed multiple missile defense systems, including THAAD, Aegis BMD, and the Patriot Advanced Capability-3. All five targets, three of which were ballistic missiles and two of which were cruise missiles, were launched and performed as expected during this test. This is a significant achievement because, as we have reported in the past, troubles with target performance in prior years have hindered MDA's ability to conduct flight testing and achieve planned objectives. In addition, during this test, THAAD achieved its objectives by intercepting a medium-range target for the first time, and an Aegis ship conducted another successful Standard Missile-2 engagement against a cruise missile. However, the SM-3 Block IA failed to intercept its target during the BMD portion of the event. This test also provided valuable data to evaluate interoperability and integration between THAAD, Aegis BMD, Patriot Advanced Capability-3, C2BMC, and various sensors during a live engagement. In May and June 2012, the Aegis BMD program successfully completed intercepts using the new SM-3 Block IB missile, which demonstrated increased capability for some of the system's components. In May 2012, the program intercepted a short-range target with its Block IB missile for the first time. The test demonstrated, among other things, the missile's improved capability to track and identify objects in space. In June 2012, the Aegis BMD SM-3 Block IB program completed another successful intercept test. During this test, the missile intercepted a separating target and provided more insight into the missile's enhanced ability to discriminate the target from other objects during an engagement. THAAD successfully conducted its first operational flight test in October 2011 before entering full-rate production. This was also the first time Army and DOD test and evaluation organizations were involved to confirm that the test and the test results were representative of the fielded system. During the test, the THAAD system fired two interceptors and successfully—and nearly simultaneously—intercepted two short-range targets. The test demonstrated THAAD's ability to perform under operationally realistic conditions (within the constraints of test range safety), from initial stages of mission planning through the completion of the engagement. Additionally, this test incorporated fixes to a required safety device and supported the resumption of interceptor manufacturing.
The Army also used this test as support for accepting the first two THAAD batteries for use by the warfighter. MDA has taken steps to reduce acquisition risk by decreasing the overlap between technology development and product development for two of its programs—the Aegis BMD SM-3 Block IIA and the SM-3 Block IIB programs. Reconciling gaps between requirements and available resources before product development begins makes it more likely that a program will meet cost, schedule, and performance targets. The Aegis BMD SM-3 Block IIA program added time and money to extend development. Following significant technology development problems with four components, MDA delayed the system preliminary design review—during which a program demonstrates that the technologies and resources available are aligned with requirements—for more than 1 year, thereby reducing its acquisition risk. As a result, in March 2012, following additional development of the four components, the program was able to successfully complete the review. The Aegis BMD SM-3 Block IIB program responded to our April 2012 recommendation to reduce acquisition concurrency by delaying the start of product development until after its preliminary design review was complete. By delaying the start of product development, the program increased the amount of technical knowledge it plans to achieve prior to committing to development. Additionally, the program is leveraging competition among contractors during the technology development phase, which, as we reported in April 2012, increases technical innovation. Program management officials stated they have already seen benefits from this competition. For example, they stated they have a better understanding of the program's progress, performance possibilities for the missile, and risks associated with those possibilities. Despite significant cost and schedule disruptions resulting from elevated acquisition risks in the Aegis BMD SM-3 Block IB, GMD, and THAAD programs, MDA continues to follow high-risk acquisition strategies for its Aegis Ashore, PTSS, and Targets and Countermeasures programs. We reported in April 2012 that the Aegis BMD SM-3 Block IB, GMD, and THAAD programs discovered problems during developmental testing—and after production had begun—which delayed planned deliveries to the warfighter, increased costs, and affected MDA's supplier base. In addition, for the Aegis BMD SM-3 Block IB and GMD programs, these issues also affected the performance of delivered missiles and created pressure to keep producing to avoid work stoppages even when problems were discovered in testing. In fiscal year 2012, the SM-3 Block IB and GMD programs continued to work on the issues that disrupted their production, but the THAAD program was able to overcome most of its issues. The Aegis Ashore and PTSS programs are also undertaking high-risk acquisition strategies that include premature commitments to production that could result in schedule delays, cost increases, and performance shortfalls. Additionally, the Targets and Countermeasures acquisition strategy is adding risk to an upcoming major operational flight test because it plans to use undemonstrated targets in this complex and costly test involving multiple MDA systems. In 2012, the Aegis BMD SM-3 Block IB was able to partially overcome the production and testing issues exacerbated by its concurrent development and production strategy.
MDA prematurely began purchasing SM-3 Block IB missiles beyond the number needed for developmental testing in 2010. In 2011, developmental issues arose when the program experienced a failure in its first developmental flight test and an anomaly in a separate SM-3 Block IA flight test, in a component common with the SM-3 Block IB. As a result, production was disrupted when MDA slowed production of the SM-3 Block IB interceptors and reduced planned quantities from 46 to 14. In 2012, the program successfully conducted two flight tests, which allowed it to address some of the production issues by demonstrating a fix made to address one of the 2011 flight test issues.

However, development issues continue to delay the program's fiscal year 2012 schedule and production. For example, MDA experienced further difficulties completing testing of a new maneuvering component—contributing to delays for a third flight test needed to validate the SM-3 Block IB capability and subsequently delaying a production decision for certain components from December 2012 to February 2013. In order to avoid further disruptions to the production line, the program plans to award the next production contract for some missile components needed for the next order of 29 SM-3 Block IB missiles in February 2013—before the third flight test can verify the most recent software modifications. The program then plans to award the contract to complete this order upon conducting a successful flight test planned for the third quarter of fiscal year 2013. The program is at risk of costly retrofits, additional delays, and further production disruptions if issues are discovered during this flight test.

The GMD program continues to have production delays and cost increases intensified by its concurrent development and production strategy. In order to meet a presidential directive to field a limited capability to defend the United States, MDA simultaneously developed, produced, and fielded the GMD system. In 2004, the agency fielded five GMD interceptors configured with the program's initial kill vehicle design, referred to as the Capability Enhancement-I (CE-I), prior to completing development and testing. Although MDA had not yet fully completed development or demonstrated the full capability of these initial interceptors, in 2004 it committed to another highly concurrent acquisition strategy to develop, produce, and field additional interceptors with an upgraded kill vehicle known as the Capability Enhancement-II (CE-II). MDA proceeded to concurrently develop, manufacture, and deliver 12 of these interceptors before halting manufacturing and delivery due to a second flight test failure in December 2010. To address the causes of the failure, the program redesigned a component in the kill vehicle's guidance system and is also planning to implement some changes to the firmware associated with it. MDA planned to conduct two flight tests in 2012 to demonstrate the new design and resume manufacturing the interceptors. While the program was unable to conduct either test as planned, MDA conducted the first resolution test, a non-intercept test known as Control Test Vehicle-01, in January 2013. While initial indications are that all components worked as intended, at the time of this review, analysis was ongoing.
We reported in April 2012 that the discovery of the design problem while production was already under way increased MDA's costs to demonstrate and fix the CE-II capability from approximately $236 million to over $1.2 billion. This cost increase was due to the added costs of additional flight tests (including the costs of the target and test range), investigating the failure, developing failure resolutions, and fixing the already delivered CE-II missiles. Costs continue to grow because MDA has had to further delay the next CE-II intercept test, originally planned for fiscal year 2012. Moreover, at the time of this review, the next CE-II intercept test date had yet to be determined as MDA was considering various options, including adding another flight test.

As we reported in April 2012, problems encountered while THAAD was concurrently designing and producing interceptors led to slower delivery rates of interceptors for the first and second THAAD batteries. During fiscal year 2011, after several years of delay, 11 of the expected 50 operational interceptors were delivered. In fiscal year 2012, after a 15-month delay and increased costs, the program was able to deliver the remainder of the interceptors needed for the first two batteries after completing necessary testing of a safety device.

The Aegis Ashore program, as we reported in April 2012, initiated product development and established cost, schedule, and performance baselines prior to completing the preliminary design review. Further, we reported that this sequencing increased technical risks and the possibility of cost growth by committing to product development with less technical knowledge than recommended by acquisition best practices and without ensuring that requirements were defined, feasible, and achievable within cost and schedule constraints. In addition, the program committed to buy components necessary for manufacturing prior to conducting flight tests to confirm the system worked as intended. As a result, any design modifications identified through testing would need to be retrofitted to produced items at additional cost. The MDA Director stated in March 2012 that Aegis Ashore development is low risk because of its similarity to the sea-based Aegis BMD. Nonetheless, this concurrent acquisition plan means that knowledge gained from flight tests cannot be used to guide the construction of Aegis Ashore installations or the procurement of components for operational use.

The PTSS program approved its third acquisition strategy in October 2012, and it continues to include several important aspects of sound acquisition practices, such as competition and short development time frames. However, it also contains overlap between development and production. The PTSS program plans to finalize the satellite design, select a manufacturer, and commit to producing components for the next two operational satellites—all while a laboratory team develops and manufactures the first two satellites. This approach will not enable decision makers to fully benefit from the knowledge about the design to be gained from on-orbit testing of the laboratory-built satellites before committing to the next industry-built satellites. Also, these first four satellites will be operational satellites, forming part of the operational nine-satellite constellation until they are replaced between 2025 and 2027.
As a result, if on-orbit testing reveals the need for hardware changes, the program may face cost increases to implement changes, and the operational constellation may face performance shortfalls because it will not fully benefit from those changes until the initial four satellites are replaced.

MDA's first use of a new target in its upcoming major operational flight test is adding risk to that test. This flight test, called Flight Test Operational-01, is planned to be one of the most complex tests MDA has attempted. The test will demonstrate the ability of multiple BMDS elements to defeat a raid of up to five near-simultaneous regional threats, including two new air-launched extended medium-range ballistic missile targets, a short-range ballistic missile target, and two cruise missiles. The risk of this test is higher than it would otherwise be because MDA is using newly designed medium-range targets for the first time instead of first demonstrating them in a less complex and expensive scenario. Using these new targets puts this major test at risk of not being able to obtain key information should the targets not perform as expected. Developmental issues with this new medium-range target, as well as the identification of new software requirements, have already contributed to delaying the test, which was originally planned for the fourth quarter of fiscal year 2012 and is now planned for the fourth quarter of fiscal year 2013.

While MDA made substantial improvements to the clarity of its reported cost and schedule baselines in fiscal year 2012, the information underlying these baselines is not yet sufficiently reliable. In addition, MDA's estimates are not comprehensive because they do not include costs from the military services in reported life cycle costs for its programs. Instability in the form of MDA's frequent adjustments to its acquisition baselines makes assessing progress over time extremely difficult and, in many cases, impossible. Since we began annual reporting on missile defense in 2004, we have made a number of recommendations—and Congress has passed a number of laws—directing MDA to establish baselines for the expected cost, schedule, and performance of the BMDS and to report deviations from the baseline as the programs progress. These recommendations and laws have offered a number of approaches to provide necessary information while preserving the MDA Director's acquisition flexibility. However, despite some positive steps forward since 2004, issues remain that limit the ability to meaningfully assess BMDS cost and schedule progress.

Most major defense acquisition programs are required to establish baselines prior to beginning product development. These baselines, as implemented by DOD, include key performance, cost, and schedule goals. Decision makers can compare the current estimates for performance, cost, and schedule against a baseline in order to measure and monitor progress. Identifying and reporting deviations from the baseline in cost, schedule, or performance as a program proceeds provides valuable information for oversight by identifying for decision makers areas of program risk and their causes. Baselines also help ensure that the full financial commitment is considered before embarking on major development efforts. MDA, in response to statutory requirements, reported detailed baselines for several BMDS program elements, or portions of those program elements, for the first time in its June 2010 BMDS Accountability Report (BAR).
These baselines are not like the baselines reported for other major defense acquisition programs. MDA established resource, schedule, test, operational capacity, technical, and contract baselines. They were established for BMDS elements that, according to MDA, have entered product development but are not yet mature enough to enter the formal DOD acquisition cycle for full-rate production and deployment. MDA's baselines reported in the BAR are updated annually. Only the resource and schedule baselines have measurable goals and separately report and explain when the current program cost and schedule estimates have deviated to a certain extent from the baseline set in the prior year's BAR. For that reason, we focus our assessment on these two baselines.

The baselines reported in the 2012 BAR are for BMDS elements or major portions of those elements. For example, a major portion of an element may include an individual software version of the C2BMC element or an initial capability for GMD homeland defense. The 2012 BAR resource and schedule baselines we reviewed are the Aegis BMD SM-3 Block IB with the second generation weapon system; the Aegis BMD modernized weapon system software; Aegis Ashore; AN/TPY-2 increment 1, which enables multiple radars to be managed and provides improved track accuracy, among other improvements; GMD initial homeland defense, for a fundamental capability against intermediate- and long-range ballistic missile threats; THAAD 1.0, for a fundamental capability against short- and medium-range ballistic missiles; and Targets and Countermeasures intermediate-, medium-, and short-range ballistic missiles.

MDA's 2012 resource baselines report costs for all the categories of the life cycle—research and development, procurement, military construction, operations and support, and disposal costs. The 2012 BAR also reports unit costs, which are usually reported in two ways: (1) average procurement unit cost, the average cost to produce one unit; and (2) program acquisition unit cost, the average cost to develop and produce one unit. According to the 2012 BAR, MDA separately reported and explained unit costs that increased by more than 5 percent from the prior year's baseline. The schedule baseline includes key milestones and tasks, such as important decision points, significant increases in performance knowledge, modeling and simulation events, and development efforts. Some schedule baselines also show time frames for flight and ground tests, as well as for fielding and events to support fielding. According to the 2012 BAR, MDA also separately reported and explained events delayed three months or more from the prior year's baseline.
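To make the unit-cost measures and reporting threshold described above concrete, the following minimal Python sketch computes both measures and applies the 5 percent flag. All dollar figures and quantities in the sketch are hypothetical, included only to illustrate the arithmetic; they do not represent any actual BMDS element or BAR data.

# Illustrative sketch of the two unit-cost measures reported in the BAR and
# the 5 percent year-over-year deviation flag MDA applies to them.
# All figures below are hypothetical and for demonstration only.

def average_procurement_unit_cost(procurement_cost: float, units: int) -> float:
    """Average procurement unit cost: procurement dollars per unit produced."""
    return procurement_cost / units

def program_acquisition_unit_cost(development_cost: float,
                                  procurement_cost: float,
                                  units: int) -> float:
    """Program acquisition unit cost: development plus procurement dollars per unit."""
    return (development_cost + procurement_cost) / units

def exceeds_threshold(prior: float, current: float, threshold: float = 0.05) -> bool:
    """True if the unit cost grew more than the reporting threshold (5 percent)."""
    return (current - prior) / prior > threshold

# Hypothetical element: $2.0 billion development, 300 units.
prior_pauc = program_acquisition_unit_cost(2.0e9, 4.5e9, 300)    # prior year's BAR
current_pauc = program_acquisition_unit_cost(2.0e9, 5.1e9, 300)  # current estimate
print(f"Prior PAUC:   ${prior_pauc:,.0f}")
print(f"Current PAUC: ${current_pauc:,.0f}")
print("Separately reported and explained:", exceeds_threshold(prior_pauc, current_pauc))

In this hypothetical case, procurement cost growth of about 9 percent of the total acquisition cost drives the unit-cost deviation above the 5 percent threshold, so it would be separately reported and explained in the BAR.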
In its 2012 BAR, MDA made several useful changes to its reported resource and schedule baselines in response to our concerns and congressional direction. We reported in March 2011 that MDA's schedule and resource baselines had several clarity issues that limited their usefulness for oversight, such as reporting only portions of life cycle costs. In that report, we recommended that MDA provide more detailed explanations and definitions of information included in the resource baselines, label cost estimates to reflect the content reported and explain any exclusions, and include all sunk costs in all of its cost estimates and baselines. MDA concurred with two of these recommendations but stated that it did not intend to include sunk costs in its unit costs for Targets and Countermeasures because, based on the extensive reuse of previous missile components in the targets program, including all sunk costs would not accurately reflect MDA program costs. Congress, in the National Defense Authorization Act for Fiscal Year 2012, added more detailed requirements for the contents of MDA's acquisition baselines. MDA addressed many issues affecting the clarity, consistency, and completeness of information reported in its BAR baselines by reporting the full range of life cycle costs borne by MDA in the 2012 BAR resource baselines; defining more clearly what costs are presented in the resource baselines and also noting and explaining when costs were excluded from the estimates; and including costs already incurred in the unit cost for Targets and Countermeasures so they were more complete.

In its 2012 BAR, MDA also addressed issues with its schedule baseline identified in our March 2011 report. For example, we found the BAR lacked a comprehensive list of planned deliveries and did not report major changes in planned dates for deliveries. Further, we recommended that the Secretary of Defense ensure that MDA, as part of its acquisition baseline, include (1) a comprehensive list of actual versus planned quantities of assets that are or were to be delivered each fiscal year and (2) a report on variances of these quantities by fiscal year and the reasons for these differences. As a new addition to its 2012 BAR, MDA addressed this first recommendation by adding a separate delivery table that provides more detailed information on deliveries and inventories. However, we are not yet able to assess significant changes to all of the planned delivery dates reported in the 2012 BAR because this was the first year that the information was reported in this format.

To provide further insight into its reported baselines, MDA also added a list of significant decisions made or events that occurred in the past year—either internal or external to the program—that affected program progress or baseline reporting. The agency also explained how these decisions or events affected each program. For example, DOD reduced AN/TPY-2 radar quantities, which shortened the time to complete radar deliveries; these changes are reflected in the schedule baseline and in the increase in unit costs. Understanding the effect of these decisions and events provides a valuable source of information for understanding why current estimates for unit costs or scheduled activities may differ from those reported in either the original or the prior year's baseline.

While MDA has made some progress improving the clarity of its baseline reports, the agency has not yet addressed the underlying reliability issues with the cost estimates and schedules used to develop these baselines. One of the issues with the reliability of these estimates is that they are not comprehensive because they do not include costs from the military services in reported life cycle costs. Until MDA's baselines are based on reliable information and are comprehensive, they will not be useful for decision makers to understand progress. Although MDA has plans in place, it has made little progress improving the quality of the cost estimates that support its resource baseline since we made a recommendation to improve these estimates in our March 2011 report.
In that report, we assessed MDA's life cycle cost estimates using the GAO Cost Estimating and Assessment Guide. This guide is based on best practices in cost estimating and identifies key criteria for establishing high-quality cost estimates. Our review found that the estimates we assessed were not comprehensive, lacked documentation, were not completely accurate, or were not sufficiently credible. For example, the MDA documentation lacked sufficient evidence to be considered a high-quality cost estimate. In June 2012, MDA completed an internal Cost Estimating Handbook, largely based on our guide, which, if implemented, could help address nearly all the shortfalls we identified in 2011. According to MDA's Director of Operations, the agency is also assembling an independent cost group to carry out the processes outlined in its handbook. Because the handbook was only recently completed, it is too early to assess whether the quality of MDA's cost estimates has improved.

According to our guide, completing and documenting an independent cost assessment is a key criterion for establishing reliable cost estimates. While DOD major defense acquisition programs must obtain an independent cost estimate before advancing through certain major milestones, MDA has been exempted from these requirements. Nevertheless, DOD has conducted independent cost estimates for early versions of the Aegis BMD program, and for portions of the Space Tracking and Surveillance System, GMD, and THAAD programs. In addition, the Office of the Director for Cost Assessment and Program Evaluation is currently completing an independent cost estimate for PTSS that is planned to be released in the spring of 2013. According to officials from that office, assessments have also been completed for the Aegis BMD program elements as part of a cost estimate for U.S. missile defense in Europe that has not yet been released. Once these estimates are released, we will review the office's findings related to them. Independent cost estimates for additional MDA elements will further improve the credibility of MDA's estimates.

In addition, according to our guide, a cost estimate should be comprehensive. Comprehensive estimates include both government and contractor costs of the program over its full life cycle, from the inception of the program through design, development, deployment, and operation and support to retirement. The agency made improvements to its resource baselines to include all of the life cycle costs funded by MDA, from development through retirement of the program. However, the baselines do not include the operation and support costs funded by the individual military services. MDA officials told us in 2011 that they do not consider military service operation and sustainment funds to be part of a baseline because the services—not MDA—execute the funds. We recognize that the services execute these funds; however, they are part of the program's life cycle costs. It is unclear what percentage of MDA element costs these represent because the estimates have not been reported; however, for other programs outside of MDA, they can be significant. By not including military service costs, the life cycle costs for some MDA programs could be significantly understated.

Similarly, in our July 2012 report, we used our Schedule Assessment Guide to assess five MDA program element schedules that support the baselines.
We reported that none fully met the best practices identified in the guide, and some schedules had major deficiencies. While our analysis of these five programs cannot be generalized to apply to all MDA programs, these results are nevertheless significant because a reliable schedule is one key factor that indicates a program is likely to achieve its planned outcomes. DOD concurred with our recommendations, and MDA programs have taken some actions to improve their schedules. However, MDA has not yet had time to fully address our recommendations. We plan to continue to monitor their progress because establishing sound and reliable schedules is fundamental to creating realistic schedule and cost baselines.

In order for baselines to be useful for managing and overseeing a program, they need to be stable over time so that progress can be measured and so that decision makers can determine how best to allocate limited resources. However, MDA only reports annual progress by comparing its current estimates for unit cost and scheduled activities against the prior year's estimates. As a result, MDA's baseline reports are not useful for tracking longer-term progress. In contrast, DOD reports longer-term progress for its other major defense acquisition programs. When we sought to make a longer-term comparison of the latest 2012 unit cost and schedule estimates against the original baselines set in 2010, we found that such a calculation could not be made in many instances because the content of the baselines had been adjusted from year to year in such a way that the baselines were no longer comparable. For example, a substantial amount of new program activities and costs were added to the reported baseline, or work activities and costs were moved out of the cost or schedule baseline and placed into other baselines. In addition, there were instances where calculating a one-year change provided no insight into program progress because of these baseline adjustments. Specifics follow on Aegis Ashore, GMD, and Targets and Countermeasures.

As we reported in April 2012, the instability of content in the Aegis Ashore program's resource baseline obscures our assessment of the program's progress. MDA prematurely set the baseline before program requirements were understood and before the acquisition strategy was firm. The program established its baseline for product development for the Romania and Hawaii facilities in June 2010 with a total cost estimate of $813 million. However, 3 days later, when the program submitted this baseline to Congress in the 2010 BAR, it increased the total cost estimate by 19 percent, to $966 million. Since that time, the program has added a significant amount of content to the resource baseline to respond to acquisition strategy changes and requirements that were added after the baseline was set. Because of these adjustments, the total estimated cost for Aegis Ashore in Romania and Hawaii has nearly doubled from the $813 million first approved in June 2010 to the $1.6 billion reported in the February 2012 BAR. These major adjustments in program content made it impossible to understand annual or longer-term program progress. The adjustments also affected the schedule baseline for Aegis Ashore. For example, many new activities were added to the baseline in 2012.
In addition, comparing the estimated dates for scheduled activities listed in the 2012 BAR to the dates baselined in the 2010 BAR is impossible in some cases because activities from the 2010 BAR were split into multiple events, renamed, or eliminated altogether in the 2012 BAR. MDA also redistributed planned activities from the Aegis Ashore schedule baseline into several other Aegis BMD schedule baselines. For example, activities related to software for Aegis Ashore were removed from the Aegis Ashore baseline and were split up and added to two other baselines, for the second generation and modernized Aegis weapon system software. Rearranging content made tracking the progress of these activities against the prior year and original baselines very difficult and in some cases impossible. As a result, appendix III contains a limited schedule assessment of near-term and long-term progress based on activities we were able to track in the BAR.

GMD is moving activities and costs from a currently reported baseline to one that will be reported in the future, thereby obscuring cost growth. The GMD program's current baseline represents activities and associated costs needed to achieve an initial defense of the United States. Although the program planned to report a new baseline in the 2013 BAR for its next set of capabilities, it has delayed reporting this baseline by at least one year. Despite significant technical problems, production disruptions, and the addition of previously unplanned and costly work in its current efforts, the GMD total cost estimate as reported in the resource baseline has decreased from 2010 to 2012. We reported last year that GMD had a flight test failure in 2010 that revealed design problems, halted production, and increased the costs to demonstrate the CE-II from $236 million to about $1.2 billion. This cost increase includes retrofit costs for already-delivered CE-II interceptors. Instead of increasing, the total costs reported in the BAR resource baseline have decreased because the program moved activities out of its reported baseline and used the funds that were freed up for failure resolution efforts instead. In addition, because the baseline for the next set of capabilities will be defined after these activities have already been added to it, the additional cost for these activities will not be identifiable. Because of this adjustment, the full extent of actual cost growth may never be determined or visible to decision makers for either baseline.

MDA removed activities and costs from its Targets and Countermeasures resource baselines, making it impossible to assess longer-term progress. For example, costs for common target components, such as re-entry vehicles and associated objects, which were previously included in the baselines for medium-range and intermediate-range targets, were removed and redirected into a separate, newly created baseline for common components. In addition, the agency changed the way it calculated its targets baselines by removing support costs and adding costs incurred in previous years. While the agency adjusted the accounting rules retroactively for the 2011 BAR to enable direct cost comparisons with the 2012 BAR, it is not possible to compare the 2012 BAR baselines with the original baselines set in the 2010 BAR for any of the targets.

Developing and deploying new missile defense systems in Europe to aid in the defense of Europe and the United States is a highly complex effort.
We reported last year that several of the individual systems that comprise the current U.S. approach to missile defense in Europe—called the European Phased Adaptive Approach—have schedules that are highly concurrent. Concurrency entails proceeding into product development before technologies are mature or into production before a significant amount of independent testing has confirmed that the product works as intended. Such schedules can lead to premature purchases of systems that impair operational readiness and may result in problems that require extensive retrofits, redesigns, and cost increases. A key challenge facing DOD, therefore, is managing individual system acquisitions to keep them synchronized with the planned time frames of the overall U.S. missile defense capability planned in Europe. MDA still needs to deliver some of the capability planned for the first phase of U.S. missile defense in Europe and is grappling with delays to some systems and capabilities planned in each of the next three major deployments. MDA is also challenged by the need to develop the tools (models and simulations) to understand the capabilities and limitations of the individual systems before they are deployed. Because of technical limitations in the current approach to modeling missile defense performance, MDA recently chose to undertake a major new effort that it expects will overcome these limitations. However, MDA and the warfighters will not benefit from this new approach until at least half of the four planned phases have been deployed.

Toward the end of our audit work, in March 2013, the Secretary of Defense altered the plans for developing and deploying missile defense systems, in Europe and in the United States, for the protection of the United States. Specifically, the announcement canceled Phase 4, which planned to use Aegis BMD SM-3 Block IIB interceptors, and announced several other plans, including deploying additional ground-based interceptors at Fort Greely, Alaska, and deploying a second AN/TPY-2 radar in Japan. In April 2013, DOD proposed canceling the PTSS and Aegis BMD SM-3 Block IIB programs in the Fiscal Year 2014 President's Budget Submission. Because the proposed cancellations occurred in the last few weeks of our audit, we were not able to assess the effects and incorporate this information into our report.

U.S. missile defense in Europe is a four-phase effort that relies on increasingly capable missiles, sensors, and command and control systems to defend Europe and the United States. The presidential announcement in September 2009 associated each phase with a specific time frame, as shown in figure 2. The first phase became operational in December 2011 and provides defense of Europe against short- and some medium-range ballistic missiles. MDA identified both the systems and the capabilities that the systems should have to enable defense of Europe against these threats. For example, C2BMC is needed and should be able to transmit data at a certain rate to an Aegis BMD ship during an engagement. The second phase plans a more robust defense against short- and medium-range ballistic missiles with the development of SM-3 Block IB missiles and upgraded Aegis Weapon System software, both at sea on Aegis BMD ships and on land at an Aegis Ashore site in Romania. The third phase is intended to add defense against intermediate-range ballistic missiles using the SM-3 Block IIA and an Aegis Ashore site in Poland.
The fourth phase is expected to add another layer of defense of the United States against some intercontinental ballistic missiles using the SM-3 Block IIB, as well as to expand regional defense. As we reported in December 2010, the U.S. missile defense approach in Europe commits MDA to delivering systems and associated capabilities on a schedule that requires concurrency among technology, design, testing, and other development activities. We reported in April 2012 that deployment dates were a key factor in the elevated levels of schedule concurrency for several programs. We also reported at that time that concurrent acquisition strategies can affect the operational readiness of our forces and risk delays and cost increases.

DOD declared Phase 1 operational in December 2011, but the systems delivered do not yet provide the full capability planned for the phase. MDA deployed, and the warfighter accepted, Phase 1 with the delivery of an AN/TPY-2 radar, an Aegis BMD ship with SM-3 Block IA missiles, an upgrade to C2BMC, and the existing space-based sensors. Given the limited time between the September 2009 announcement of the U.S. missile defense in Europe and the planned deployment of the first phase in 2011, that first phase was largely defined by existing systems that could be quickly deployed. MDA planned to deploy the first phase in two stages—the systems described above by December 2011 and upgrades to those systems in 2014. Although the agency originally planned to deliver the remaining capabilities of the first phase in 2014, an MDA official told us that MDA now considers these capabilities to be part of the second phase and that they may not be available until 2015. In addition, independent organizations determined that some of the capabilities that were delivered did not work as intended. For example, the Director, Operational Test and Evaluation, reported that there were some interoperability and command and control deficiencies. That office also reported that MDA is currently investigating these deficiencies.

According to MDA documentation, systems and associated capabilities for the next phases are facing delays, either in development or in integration and testing. For Phase 2, some capabilities, such as an Aegis weapon system software upgrade, may not be available; MDA officials stated they are working to resolve this issue. For Phase 3, some battle management and Aegis capabilities are currently projected to be delayed, and the initial launch of a planned satellite sensor system, PTSS, is delayed. For Phase 4, deployment of the SM-3 Block IIB missile is delayed from 2020 to 2022, and full operational capability of PTSS is delayed to no sooner than 2023.

A key challenge for both the Director of MDA and the warfighter is understanding, before deployment, the capabilities and limitations of the systems MDA is fielding, particularly given the rapid pace of development. A critical step in this effort is to have the tools (models and simulations) to perform these integrated and complex assessments. According to MDA's Fiscal Year 2012 President's Budget Submission, models and simulations are critical to understanding BMDS operational performance because assessing performance through flight tests alone is prohibitively expensive and can be affected by safety and test range constraints. Models and simulations, on the other hand, can be much less costly and are inherently not subject to the same safety and test range constraints.
However, we have previously reported that MDA has struggled to develop these tools. In August 2009, U.S. Strategic Command and the BMDS Operational Test Agency jointly informed MDA of a number of system-level limitations in MDA's modeling and simulation program that adversely affected their ability to assess BMDS performance. Since that time, we have reported that MDA has had difficulty developing its models and simulations to the point where it can assess operational performance. Over the past few years, the agency adopted different approaches to try to resolve issues with its modeling and simulation, but MDA continues to have difficulty credibly assessing operational performance using models and simulations.

MDA declared the first phase of U.S. missile defense in Europe operational in December 2011, but did so without the benefit of all planned supporting data because of problems with a key modeling and simulation event. MDA officials and officials from the Operational Test Agency determined that there were too many issues with the models and simulations in the event for it to be useful for determining operational effectiveness for the planned configuration. More broadly, in their independent 2012 assessments, both the Director, Operational Test and Evaluation, and the BMDS Operational Test Agency reported a lack of confidence in MDA's ability to completely and credibly model BMDS performance using existing models. Once a model or simulation is deemed credible, it can be used to explore various operational conditions and reveal both the capabilities and the limitations of the actual system. Without a full understanding of the capabilities and limitations of the first phase of U.S. missile defense in Europe, it is difficult for the warfighter and MDA to understand how the system will work in a real event or to develop solutions to problems that may arise with the systems and capabilities that have been delivered.

MDA recently committed to a new approach in its modeling and simulation program that officials stated could put the agency on a path to credibly model individual programs and system-level BMDS performance by 2017. To accomplish this, MDA is replacing the two existing simulation frameworks used for ground testing and performance assessments with a single framework. By using one framework, the agency anticipates data quality improvements through consistent representations of the threat, the environment, and communications at the system level. MDA officials told us that without these changes, they would be unable to credibly model BMDS performance by the 2017 time frame, in time to assess the third phase of U.S. missile defense in Europe.

MDA program officials told us that the next major assessment of U.S. missile defense in Europe, for the 2015 deployment, will continue to have many of the existing shortfalls. As a result, MDA is pursuing initiatives to improve confidence in the realism of its models in the near term. One of the agency's new initiatives involves identifying more areas in the models where credibility can be certified by the BMDS Operational Test Agency. A second initiative is focused on resolving the limitations identified jointly by the Operational Test Agency and U.S. Strategic Command. Lastly, MDA officials told us they are refining the process used to digitally recreate system-level flight tests in order to increase confidence in the models.
The new MDA Director faces long-standing acquisition management challenges that hamper the agency's ability to make wise investment choices, to develop and deliver cutting edge, integrated technologies within budget and time constraints, and to meet the President's goals for U.S. missile defense in Europe. At the same time, for over a decade, MDA has provided Congress with very limited insight into cost and schedule growth for individual elements. While baseline reporting is more complete and comprehensive, the fact remains that there is no way to track cost and schedule growth over time using those baselines. This makes it difficult for Congress to hold MDA accountable and to consider the wisdom of continuing high-risk efforts. Since its inception, MDA has been operating under tight time frames for delivering capabilities—first with a presidential directive in 2002 and then with a presidential announcement in 2009 on U.S. missile defense in Europe. Although pressure remains to develop and field systems to meet set time frames and increased threats, the nation has also reached a critical juncture in its ability to afford spending limited funds to fix problems created by high-risk acquisition strategies.

GAO has made recommendations over the years aimed at addressing many of these challenges, and we have noted several in this report that have not yet been acted on. As the new MDA Director works to address the challenges we have identified, fully implementing two prior recommendations in particular could prove beneficial. First, implementing our 2009 recommendation to reconsider the testing and validation schedules of ballistic missile defense systems and to synchronize them with development, manufacturing, and fielding schedules, so that items are not manufactured for fielding before their performance has been validated through testing, could help reduce the risk of production disruptions. Second, implementing our 2012 recommendation to adjust acquisition schedules to reduce concurrency could help reduce the acquisition risks in U.S. missile defense in Europe.

Going forward, as Congress and DOD decide in which new missile defense programs to invest, they may lack a full understanding of the cost, technical feasibility, and operational requirements of those proposed new programs. Performing a robust analysis of alternatives, while not required of MDA, could be a proactive and beneficial step toward laying a sound basis for determining which systems to pursue. Similarly, as MDA delivers increasingly complex missile defense systems, it is critical that it successfully conduct upcoming complex operational flight tests and gather necessary performance data. Reducing the risks tied to the first use of new types of targets, by demonstrating them in less critical tests before they are used in a major test, could help put these programs on a better path to success. Finally, until MDA's baselines have comprehensive cost information and are stabilized, the progress of MDA's individual acquisitions cannot be assessed.

In order to strengthen investment decisions, place the chosen investments on a sound acquisition footing, provide a better means of tracking investment progress, and improve the management and transparency of the U.S. missile defense approach in Europe, we recommend that the Secretary of Defense direct MDA's new Director to take the following four actions:
1. Undertake robust alternatives analyses for new major missile defense efforts currently underway, including the SM-3 Block IIB, and before embarking on any other major new missile defense programs. In particular, such analyses should consider a broad range of alternatives.

2. Add risk reduction non-intercept flight tests for each new type of target missile developed.

3. Include in its resource baseline cost estimates all life cycle costs, specifically the operations and support costs from the military services, in order to provide decision makers with the full costs of ballistic missile defense systems.

4. Stabilize the acquisition baselines so that meaningful comparisons can be made over time that support oversight of those acquisitions.

DOD provided written comments on a draft of this report. These comments are reprinted in appendix XI. DOD also provided technical comments, which were incorporated as appropriate. DOD concurred with two of our four recommendations and partially concurred with the remaining two.

DOD concurred with our first recommendation to undertake robust alternatives analyses for new major missile defense efforts currently underway and before embarking on any other major new missile defense programs. However, in its response, DOD stated that MDA currently performs studies and reviews that provide outcomes similar to the analyses of alternatives formally conducted by other agencies. While we recognize in our report that MDA performed some limited analyses that considered alternatives for its newer programs, we also found that these reviews cannot be considered robust analyses of alternatives, in part because the range of alternatives considered was too narrow. Without a sufficient comparison of alternatives and focus on technical and other risks, alternatives analyses may identify solutions that are not feasible, and decision makers may approve programs based on limited knowledge. While many factors can affect cost and schedule outcomes, we reported in September 2009 that programs that had a limited assessment of alternatives tended to have poorer outcomes than those that had more robust analyses of alternatives. A robust analysis of alternatives can also help ensure that key DOD and congressional decision makers understand why the chosen system was selected, in order to prioritize limited investment dollars to achieve a balanced BMDS portfolio. As MDA conducts additional alternatives analyses for new programs, it is important that they be robust, comparing the costs, performance, effectiveness, and risks of a broad range of alternatives.

DOD partially concurred with our second recommendation to conduct risk reduction non-intercept flight tests for each new type of target missile developed. In its response, DOD agreed that non-intercept flight tests may be conducted for each new type of target, but not necessarily for each individual target developed. DOD stated that the decision to perform a non-intercept target test must be balanced against cost, schedule, and programmatic impacts. In addition, DOD stated that MDA's qualification tests for key target components and proven quality control processes gave the agency the confidence necessary to plan for and launch targets for the first time as part of a system-level flight test. However, while there may be exceptions when there is a critical warfighter need, in general, we remain concerned about the use of undemonstrated targets during complex, expensive tests.
These tests remain critical to both MDA's development efforts and to independent assessors of missile defense performance because they are needed to demonstrate critical BMDS functions. Whenever possible, we believe MDA should avoid using undemonstrated targets, particularly for costly and complex major operational tests, because they add significant risks to those tests.

DOD partially concurred with our third recommendation for MDA to include in its BMDS Accountability Report baselines the full program life cycle costs, including operations and support costs from the military services. While DOD agreed that decision makers should have insight into the full life cycle costs of DOD programs, it did not identify how those full costs could be reported to decision makers. DOD further stated that the BMDS Accountability Report should only include content for which MDA is responsible and that it did not consider the BMDS Accountability Report an appropriate forum for including military service operation and support costs for BMDS elements. However, good budgeting requires that the full costs of a project be considered when making decisions to provide resources, and, therefore, both DOD and Congress would benefit from a comprehensive understanding of the full costs of MDA's acquisition programs. DOD has reported full operation and support costs to Congress for major defense acquisition programs where one military service is leading the development of an acquisition planned to be operated by many military services. Limiting the baseline reporting for MDA acquisition programs to only MDA-reported costs therefore precludes a full understanding of DOD's acquisition commitments, particularly the resource demands on the military services that will operate and maintain the planned missile defense weapon systems. Because MDA already reports the estimated acquisition costs and some of the operation and support costs for these acquisitions in the annual BMDS Accountability Report, we believe that annual document to be the most appropriate way to report the full costs to Congress. We also continue to believe that including these costs in that report will aid both departmental and congressional decision makers as they make difficult choices of where to invest limited resources.

DOD also concurred with our fourth recommendation to stabilize MDA's acquisition baselines so that meaningful comparisons can be made over time. DOD stated in its response that MDA's 2013 BMDS Accountability Report would contain both a one-year comparison between the current program baselines and the previously reported baselines as well as a longer-term comparison to the initial program baselines, when appropriate. DOD further stated that it is necessary to recognize that BMDS baselines change to respond to evolving requirements provided by other organizations and leaders, from the warfighters to the President, to counter changing threats. Finally, DOD stated that the MDA Director has the authority to make these adjustments, within departmental guidelines. Our recommendation is not designed to limit the authority of the MDA Director to adjust baselines or to prevent adjusting baselines when appropriate. As we reported in March 2005, a new baseline serves an important management control purpose when program goals are no longer achievable because it presents an important perspective on the program's current status and acquisition strategy.
Our recommendation to stabilize acquisition baselines is designed to address the issues we found that are within MDA's control, such as prematurely setting baselines and decisions to move reported content between various program baselines. In order for MDA to effectively report longer-term progress of its acquisitions and provide necessary transparency to Congress, it will be critical for MDA to address these issues.

We are sending copies of this report to the Secretary of Defense and to the Director of MDA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XII.

To assess any progress and any remaining challenges of selecting new Ballistic Missile Defense System (BMDS) programs in which to invest, we identified two Missile Defense Agency (MDA) programs that were in the initial acquisition stages: the Precision Tracking Space System (PTSS) and the SM-3 Block IIB. For these programs, we reviewed documentation of MDA and Department of Defense (DOD) reviews that program management officials considered similar to an analysis of alternatives, and we compared this documentation to acquisition best practices for analyses of alternatives and to DOD acquisition guidance. In addition, we examined recent legislation about a statutorily directed assessment of the PTSS and compared criteria written in the legislation to acquisition best practices for an analysis of alternatives. Finally, we interviewed MDA and DOD officials about any reviews conducted that were relevant to an analysis of alternatives.

To assess any progress and any remaining challenges MDA faces in putting missile defense acquisitions on a sound development path, we reviewed MDA element acquisition strategies and compared them to our best practice criteria. To assess the extent to which MDA achieved stated acquisition goals and objectives, we reviewed the accomplishments for several BMDS elements and supporting efforts that MDA is currently developing and fielding: the Aegis Ballistic Missile Defense (Aegis BMD) with Standard Missile-3 (SM-3) Block IB; Aegis Ashore; Aegis BMD SM-3 Block IIA; Aegis BMD SM-3 Block IIB; BMDS Sensors; Ground-based Midcourse Defense (GMD); PTSS; Targets and Countermeasures; and Terminal High Altitude Area Defense (THAAD). We reviewed data collection instruments that we submitted to several elements' program offices. These instruments collected detailed information on schedule, cost, contracts, testing and performance, and noteworthy progress during the fiscal year. In addition, we examined Baseline and Program Execution Reviews, test schedules and reports, and production plans, where appropriate. We also discussed element- and BMDS-level testing plans and progress by meeting with officials within element program offices and MDA functional directorates, such as the Directorates for Engineering and Testing. We also examined the agency's Integrated Master Test Plan and discussed the elements' test programs and test results with the BMDS Operational Test Agency and DOD's Office of the Director of Operational Test and Evaluation.
To assess the progress made as well as any remaining challenges MDA faces in establishing program baselines that support oversight, we examined MDA's reported baselines in the 2010, 2011, and 2012 BMDS Accountability Reports (BAR). We interviewed officials in MDA's Acquisitions Directorate about how the agency is establishing and managing its internal baselines. We also met with MDA officials in the Operations Directorate to discuss their progress in adopting cost estimating best practices based on our Cost Guide. We reviewed findings from our July 2012 report, which compared MDA program schedules to best practices in schedule development. In addition, we examined DOD acquisition policy to discern how other major defense acquisition programs are required to report baselines and measure program progress.

To gauge MDA element cost and schedule progress, we compared the resource and schedule baselines as presented in the 2012 BAR to the baselines presented in the June 2010 BAR. The results of our reviews are presented in detail in the element appendixes of this report and are also integrated into our findings, as appropriate. We did not present BAR schedule and cost analysis for the Aegis BMD SM-3 Block IIA, Aegis BMD SM-3 Block IIB, or PTSS programs because these programs have not yet begun MDA's product development phase and, consequently, do not yet present baselines in the BAR. In addition, we narrowed our assessment of the Targets and Countermeasures baselines to two medium-range targets, the extended medium-range ballistic missile and the extended long-range air-launched target, because they were originally planned to be launched for the first time in 2012.

In order to present consistent cost comparisons for unit costs calculated in different years, there were instances where it was necessary to convert unit costs from base year 2010 dollars to base year 2011 dollars. We performed these conversions using indexes published by the Office of the Secretary of Defense (Comptroller) in the National Defense Budget Estimates for Fiscal Year 2011, commonly referred to as the "Green Book."
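To illustrate the base year conversion described above, the following minimal Python sketch restates a cost from base year 2010 dollars to base year 2011 dollars using the ratio of deflator indexes. The index values and the unit cost shown are placeholders for illustration only, not actual Green Book or MDA figures.

# Minimal sketch of restating a cost from base year 2010 dollars to base year
# 2011 dollars using deflator indexes of the kind published in the Green Book.
# The index and cost values below are placeholders, not actual OSD figures.

DEFLATOR_INDEX = {
    2010: 1.000,  # placeholder deflator index for base year 2010
    2011: 1.016,  # placeholder deflator index for base year 2011
}

def convert_base_year(cost: float, from_year: int, to_year: int) -> float:
    """Restate a cost from one base year to another via the ratio of indexes."""
    return cost * DEFLATOR_INDEX[to_year] / DEFLATOR_INDEX[from_year]

# Hypothetical unit cost of $12.0 million expressed in base year 2010 dollars.
cost_by2010 = 12.0e6
cost_by2011 = convert_base_year(cost_by2010, 2010, 2011)
print(f"BY2010: ${cost_by2010:,.0f}  ->  BY2011: ${cost_by2011:,.0f}")

Because the conversion is a simple ratio of indexes, unit costs stated in different base years can be placed on a common footing before year-over-year comparisons are made.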
To assess any acquisition progress and any remaining challenges developing and deploying ballistic missile defense systems for the European Phased Adaptive Approach, we reviewed relevant policy and acquisition documents. In addition, we examined MDA's Integrated Master Assessment Plan, Integrated Master Test Plan, and Master Integration Plan to determine how MDA intended to test and assess its progress in developing and fielding BMDS capabilities. We also interviewed officials within MDA's System Assessment Office to discuss how the agency planned to assess BMDS capabilities once they had completed development. We reviewed ground and flight test reports to determine the extent to which those capabilities were meeting performance expectations. Additionally, we examined Combatant Command, BMDS Operational Test Agency, and Office of the Director, Operational Test and Evaluation assessments of the first phase of U.S. missile defense in Europe. We also interviewed officials with U.S. Strategic Command's Joint Functional Component Command for Integrated Missile Defense and U.S. Northern Command, as well as MDA program offices and MDA functional directorates, about MDA's progress in developing and deploying ballistic missile defense systems needed for the defense of Europe and the United States. We also discussed BMDS capabilities demonstrated through testing with officials in the BMDS Operational Test Agency and the Office of the Director, Operational Test and Evaluation.

To assess any progress and any remaining challenges in developing its models and simulations, we reviewed MDA's Modeling and Simulation Master Plan as well as its system-level verification and validation plan. We also met with MDA officials at the Missile Defense Integration and Operations Center, as well as officials with the BMDS Operational Test Agency, to understand the status of MDA's modeling and simulation program, its progress in resolving past issues, and its future plans.

Toward the end of our audit work, in March 2013, the Secretary of Defense altered the existing plans for developing and deploying missile defense systems, in Europe and in the United States, for the protection of the United States. Specifically, the announcement canceled Phase 4, which planned to use Aegis BMD SM-3 Block IIB interceptors, and announced several other plans, including deploying additional ground-based interceptors at Fort Greely, Alaska, and deploying a second AN/TPY-2 radar in Japan. In April 2013, DOD proposed canceling the PTSS and Aegis BMD SM-3 Block IIB programs in the Fiscal Year 2014 President's Budget Submission. Because the proposed cancellations and the release of the president's budget occurred in the last few weeks of our audit, we were not able to assess and incorporate either the proposed cancellations or the latest budget information into our report.

Our work was performed at MDA locations including its headquarters in Fort Belvoir, Virginia; various program offices in Dahlgren, Virginia, Falls Church, Virginia, and Huntsville, Alabama; the GMD element in Fort Greely, Alaska; and MDA's Integration and Operations Center in Colorado Springs, Colorado. In Fort Belvoir, Virginia, we met with officials from MDA's System Engineering Assessment Directorate. In Dahlgren, Virginia, we spoke with officials from the Aegis BMD program office, the Aegis Ashore program office, and the Aegis SM-3 Block IIA program office. In Falls Church, Virginia, we met with officials from the PTSS program office. In Huntsville, we interviewed program officials for BMDS Sensors; GMD; Global Deployment; THAAD; and Targets and Countermeasures. At that location, we also met with officials in MDA's Acquisition, Engineering, Test, and Cost Directorates, as well as with officials in MDA's Advanced Technology Directorate who manage the Aegis BMD SM-3 Block IIB program.

We visited contractor facilities that we determined, based on MDA acquisition documentation, were working to address technical issues. These facilities are located in Huntsville, Alabama; Tucson, Arizona; Moorestown, New Jersey; and Salt Lake City, Utah. We discussed the latest GMD program test plans following flight test failures with Boeing officials in Huntsville. In addition, we met with Raytheon and Defense Contract Management Agency officials in Tucson to discuss the manufacturing of the Exoatmospheric Kill Vehicle and schedule issues for GMD, respectively. We also interviewed Raytheon officials in Tucson about various topics concerning the SM-3 Block IA, Block IB, and Block IIA programs. In Moorestown, we met with officials from Lockheed Martin to discuss the Aegis Ashore element with its SPY-1 radar.
In Salt Lake City, we met with officials from Northrop Grumman to discuss their progress in addressing GMD flight test failures and their development of the new guidance system design. We also met with Combatant Commands and independent testing agencies in Colorado Springs, Colorado; Huntsville, Alabama; and Alexandria, Virginia. In Colorado Springs, we spoke with officials from U.S. Northern Command and the U.S. Strategic Command's Joint Functional Component Command for Integrated Missile Defense. We interviewed officials from the BMDS Operational Test Agency in Huntsville to discuss MDA's performance assessment, as well as its models and simulations. In Alexandria, Virginia, we met with the Director, Operational Test and Evaluation to discuss MDA's test plans and results from recent testing.

We conducted this performance audit from March 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

MDA slowed production of its SM-3 Block IA and Block IB in order to redesign components and incorporate fixes to address both a failure in the program's first developmental flight test in September 2011 and a separate flight test anomaly in April 2011. In June 2012, the program executed the second of three intercept flight tests planned for fiscal year 2012 that are needed to validate SM-3 Block IB capability. In October 2012, the program was able to resume accepting deliveries. Some of the fixes to the issues from the September 2011 failed test are to be demonstrated in the next flight test, planned for fiscal year 2013.

In the past year, the program identified the root cause of the September 2011 flight test failure and incorporated fixes. The failure review team traced the cause of the failure to an abnormal performance of the third stage rocket motor during thrust pulses, which control the final maneuvers of the missile. To address this issue, the program developed a new version of the second generation Aegis weapon system to control the amount of time between the pulses. According to the Aegis program office, these changes will have minimal effects on missile performance and ship operations. Although the program successfully reconducted the failed flight test in May 2012, it did so prior to implementing the software modifications and altered the scenario to avoid the malfunction. The program tested the modification in a February 2013 flight test. While the intercept was successful, a thorough assessment of the test has not yet been issued.

The program also determined the root cause of the anomaly in the April 2011 SM-3 Block IA flight test. MDA determined that it was caused by a component of the third stage rocket motor that is common to both the SM-3 Block IA and SM-3 Block IB missiles. After performing a redesign of the component that caused the anomaly, the program was able to successfully flight test this new design in June 2012. In addition to verifying these fixes, the program demonstrated important new capabilities during its two successful fiscal year 2012 SM-3 Block IB flight tests.
Its May 2012 flight test was the first intercept of a short-range ballistic missile target by the SM-3 Block IB with the second generation Aegis BMD weapon system software. The test successfully demonstrated the capability to assess the success or failure of the intercept in real time and gave the program additional insight into the improved capability of the missile to track and identify objects in space. During its June 2012 test, the missile intercepted a separating target when surrounded by debris. This test provided more insight into the missile's enhanced ability to distinguish the lethal object from other objects.

The program's continuing concurrency has disrupted production and testing over the past 3 years. The concurrency arose because MDA prematurely ordered more SM-3 Block IB missiles than needed for development before completing developmental qualification of a key component and before confirming the missile worked as intended in flight testing. Qualification is a step in development in which a component's performance is tested under a variety of conditions it may encounter operationally. Qualification of components is normally completed prior to conducting developmental flight tests and before beginning production of missiles beyond the number needed for developmental testing. We made a recommendation in our February 2010 report intended to address this concurrency risk.

Since that report, as developmental issues arose, MDA had to restructure both its production and test plans. For example, MDA was forced to reduce planned SM-3 Block IB quantities from 34 to 25 in fiscal year 2011 to free up funding needed to redesign the throttleable divert and attitude control system (TDACS). Then, following the failure of the first SM-3 Block IB flight test in 2011, MDA established a new program baseline and requested an additional $149 million, in part to investigate the failure and to implement and assess necessary modifications. MDA also reduced the 2012 procurement from 46 to 14 missiles, delayed key production milestones by a year or more, and slowed delivery of SM-3 Block IB missiles already in production, accepting only missiles necessary for testing until modifications were validated.

While this decision reduced the effects of the ongoing development issues, MDA's premature commitment to quantities beyond those needed for testing had other consequences. Slowing down the acceptance of SM-3 Block IB deliveries and reducing near-term production resulted in the need for additional investments to sustain suppliers of various SM-3 Block IB components. Additionally, the agency once again needed to extend the production of SM-3 Block IA missiles by purchasing 14 additional SM-3 Block IAs (a missile that shares many components with the SM-3 Block IB). MDA originally planned to end the production of SM-3 Block IAs in 2009 as production of the SM-3 Block IB began. However, it has needed to extend production three times, in 2010, 2011, and 2012, to bridge the production gaps. To date, MDA has contracted for 55 more SM-3 Block IA missiles than originally planned.

The TDACS qualification was completed in February 2013 after many delays and additional cost. As we reported in April 2012, MDA only partially completed qualification testing of this component before conducting the first unsuccessful SM-3 Block IB developmental flight test in September 2011. During 2012, the program experienced multiple issues completing TDACS qualification tests, including a failure in October. MDA is still determining the cause of the failure.
According to program documentation, this investigation is expected to cost $27.5 million. Although completion of qualification testing had previously been delayed over a year to the fourth quarter of fiscal year 2012, the recent issues further delayed the completion to February 2013. These qualification issues are contributing to further cost growth, delaying the third flight test, and preventing completion of the manufacturing readiness review.

After the 2011 flight test failure occurred, MDA originally committed to completing three flight tests and a manufacturing readiness review prior to making a long lead production decision. The long lead decision begins the purchase of materials and components that must be procured earlier in order to maintain a planned production schedule. While the agency successfully completed two of those flight tests in 2012, it postponed the third, called FTM-19, to the third quarter of fiscal year 2013. The program estimates that this delay will cost the program an additional $16.7 million. MDA also held its manufacturing readiness review for the next procurement request in May 2012. This review demonstrated that the manufacturing processes were mature for most of the SM-3 Block IB components and demonstrated readiness for a manufacturing rate of two missiles per month. However, the program could not complete the review due to delays with the TDACS qualification. TDACS qualification issues have also contributed to delaying the long lead materiel production decision from December 2012 to February 2013, at an additional cost of $19 million.

In order to avoid further disruptions to the production line, MDA plans to award the next production contract for some components of 29 additional missiles in February 2013—before the third flight test can verify the most recent software modifications. According to program documentation, delaying this decision further until after the next flight test—currently slated for the third quarter of fiscal year 2013—could result in a production gap, requiring additional funding to maintain the industrial line. The program plans to award the contract for up to 29 whole missiles after successful performance in the flight test planned for the third quarter of fiscal year 2013. The program is at risk for costly retrofits, additional delays, and further production disruptions if issues are discovered during this flight test.

Aegis program officials are planning to make further improvements to the second generation Aegis weapon system software and to develop an enhanced capability SM-3 Block IB (upgraded SM-3 Block IB) to counter advanced threats expected after 2015. The program plans to complete the necessary software and firmware upgrades in July 2014, flight test them in fiscal year 2014, and field them in the 2015 time frame. Program officials project the effort to cost an estimated $86.6 million over 5 years.

MDA's Director approved a new baseline for the Aegis BMD SM-3 Block IB program in June 2011 and reported it in the 2012 BMDS Accountability Report (BAR). The new baseline addresses changes caused by the design modification of the TDACS. MDA reported that the average cost to produce one Aegis BMD SM-3 Block IB missile increased by 10 percent from the 2010 to the 2012 BAR because the program changed the way it funded initial spares. The program began funding initial spares and production engineering with procurement funds instead of development money. Because procurement funds are used for the production of operational assets, this accounting change increased the reported unit cost. Although this change was above the 5 percent reporting threshold that MDA established in its 2012 BAR, the $1 million change in the average cost to produce a missile was not separately reported because the agency attributed this increase to an accounting change and not to real cost growth. The 2012 reported average cost to develop and produce one Aegis BMD SM-3 Block IB missile decreased by approximately 30 percent from the 2010 baseline because the total number of missiles planned increased by 219 percent. The cost decreased because of efficiencies gained by producing more each year. Figure 3 shows the unit costs as reported in the 2010 and 2012 BMDS Accountability Reports.
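The arithmetic behind this decrease is worth making explicit. A minimal sketch, assuming the reported average is simply the total estimated program cost divided by the total planned quantity:

\[
UC = \frac{C}{Q}, \qquad Q_{2012} = (1 + 2.19)\,Q_{2010} = 3.19\,Q_{2010}, \qquad UC_{2012} \approx 0.70\,UC_{2010},
\]

so that

\[
C_{2012} = UC_{2012} \times Q_{2012} \approx 0.70 \times 3.19 \times C_{2010} \approx 2.2\,C_{2010}.
\]

That is, under this assumption, the roughly 30 percent drop in the reported unit cost is consistent with total estimated costs more than doubling; the decrease reflects spreading costs over more missiles rather than a reduction in overall program cost.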
The Aegis BMD SM-3 Block IB had a number of schedule delays caused by issues discovered in 2011 flight tests and issues discovered during qualification tests in 2012. The 2011 issues (1) contributed to delays to the program's manufacturing readiness review and (2) affected the flight test schedule by adding one flight test in 2012 and delaying two others. Additionally, the missile's maneuvering component failed a qualification test in 2012, which delayed the completion of its qualification program. The qualification delays, coupled with the 2011 flight test issues, have delayed the next flight test and the long lead production decision by over a year and delayed the initial production by 2 years. Figure 4 shows the schedule changes made.

Unit costs increased for the Aegis BMD second generation weapon system software because of decreased quantities and the inclusion of costs previously excluded. According to program officials, unit costs to upgrade to this new version of the software include installing the software, as well as hardware such as computers and displays, on Aegis ships. The reported unit cost to upgrade to the Aegis BMD second generation weapon system software increased by over 50 percent from the originally anticipated unit cost reported in the 2010 BAR. In addition, the unit cost to both develop and upgrade to the second generation weapon system software increased by 10 percent from the reported 2010 BAR cost. The majority of these cost increases occurred between 2010 and 2011. MDA explained in 2011 that the increases to these unit costs were due to government costs, which were erroneously excluded from the 2010 unit cost calculations but were included in the 2011 BAR unit costs. In addition, the unit cost to both develop and upgrade to the second generation software increased in part because of a decrease in the number of ships receiving the installations. Figure 5 shows the unit costs as reported in the 2010 and 2012 BMDS Accountability Reports. MDA began installing the second generation Aegis weapon system software as planned for two ships in fiscal year 2012 with only a minor schedule slip in the planned installation for a third Aegis BMD ship. After installations are complete, there will be four second generation Aegis capable ships in the fleet.

Aegis Ashore is planned to be a land-based, or ashore, version of the ship-based Aegis BMD. Aegis Ashore is to track and intercept ballistic missiles in the middle of their flight using SM-3 interceptors. Key components include a vertical launching system with SM-3 missiles and an enclosure, referred to as a deckhouse, that contains the SPY-1 radar and command and control system.
Aegis Ashore will share many components with the sea-based Aegis BMD and will use future versions of the Aegis weapon system that are still in development. MDA plans to equip Aegis Ashore with a modified version of the Aegis weapon system software developed jointly with the Navy as part of its modernization program. The new software is to integrate Aegis ship anti-air defense with ballistic missile defense, expanding the number of Aegis ships that are capable of ballistic missile defense. The modified version of the Aegis weapon system software that is planned for Aegis Ashore is to retain the ballistic missile defense capabilities being developed and suppress or otherwise disable the other capabilities.

DOD plans to deploy Aegis Ashore in Romania with the SM-3 Block IB in the 2015 time frame and in Poland in the 2018 time frame. A total of three Aegis Ashore facilities are planned. The program is currently constructing two of these facilities—an operational facility planned for Romania and a second facility for developmental testing in Hawaii. The Romanian facility is to be constructed and undergo Aegis radar testing in New Jersey before being shipped to Romania. The Hawaiian test facility is to begin construction after the Romanian operational deckhouse construction is underway. The construction plans for the Poland Aegis Ashore site have not been finalized, but construction could potentially begin in fiscal year 2015. Included in this appendix are analyses of the cost and schedule baselines for the Aegis weapon system modernization effort, which will be used both by Aegis Ashore and by Aegis BMD ships.

Aegis Ashore completed key engineering design reviews in fiscal year 2012, determining that the design meets program requirements as well as cost, schedule, and reliability targets. The program successfully completed its system critical design review in December 2011. In addition, the deckhouse design was 100 percent completed in February 2012, indicating that the design may be stable and could meet requirements.

In fiscal year 2012, the program began construction in New Jersey for the Romanian Aegis Ashore facility. The program had to design the facility so that it could be reconstituted—or disassembled and readied for transport to another location within 120 days. As a result, the program is using skids—which are flat surfaces on which deckhouse equipment is secured and slid into place. During fiscal year 2012, the program built the skids and started loading the equipment. Contractor officials told us that as of October 2012, the majority of the skids had been completed. In addition, in early fiscal year 2013 the program received congressional authorization to exchange equipment originally planned for one of the Navy's Aegis BMD destroyers with equipment planned for Romania. Without this approval, the equipment needed for the Romanian facility would not have been ready in time. The program also awarded a contract for the Hawaiian test site in June 2012 and began site preparation and construction.

While the program made significant progress in fiscal year 2012, its schedule is challenging because of the extremely limited time between events. Further, according to program documentation, the greatest risk to the program is meeting the established schedule, particularly (1) integration testing in Hawaii and New Jersey, (2) potential shipping or transportation delays, and (3) construction delays for the operational and test facilities.
Program management officials stated they are confident that they will meet the commitment to field the Romania facility by 2015. In addition, contractor officials stated that while they previously had concerns about the schedule, the progress made during fiscal year 2012 and the current pace of the work underway had relieved these concerns. However, there were delays in fiscal year 2012 that may affect the schedule. A delay in the contract award for deckhouse construction postponed the first risk reduction flight test by one quarter to the third quarter of fiscal year 2014. The Aegis Ashore schedule contains more risk before this flight test and less risk between that test and the planned fielding in Romania. Program management officials told us they organized the schedule in this way to increase the amount of time to resolve any issues that emerge from the flight test. However, with the delay in the flight test, the time available to resolve issues has been reduced.

The Aegis Ashore program continues to follow a concurrent acquisition strategy with elevated levels of acquisition risk. We reported in April 2012 that given the plan to field Aegis Ashore by the 2015 time frame, the program's schedule contains a high level of concurrency—buying weapon systems before they demonstrate, through testing, that they perform as required. Further, under such a strategy, problems are more likely to be discovered in production, when it is too late or very costly to correct them. The MDA Director stated in March 2012 that Aegis Ashore development is low risk because of its similarity to the sea-based Aegis BMD. However, we reported in April 2012 that the short amount of time for integrating and fielding Aegis Ashore could magnify the effects of any problems that arise.

MDA recently increased the concurrency for the remaining effort. We reported in April 2012 that the first intercept test would not occur until the second half of fiscal year 2014, at which point two of the three deckhouses would already be completed, and Aegis Ashore site construction and interceptor production well under way. MDA now plans to order long-lead materials for the final Aegis Ashore site in Poland in January 2014—prior to conducting any of the developmental flight tests. Although the program is to procure materiel already used by the sea-based Aegis, committing to all three planned Aegis Ashore facilities prior to demonstrating that the system works as intended puts the program at risk for costly rework should issues be discovered during testing.

Radio-frequency spectrum is used both to operate the SPY-1 radar used by Aegis BMD and to provide an array of wireless communication services to the civilian community, such as mobile voice and data services, radio and television broadcasting, and satellite-based services. According to guidance on spectrum management from the International Telecommunication Union, because of the potential overlap and interference between these different uses, radio-frequency spectrum is regulated by national governments. In particular, given that it is a shared resource, national governments monitor and manage frequencies to prevent and eliminate harmful interference. According to the guidance, in the European Union, national standards reflect European standards and national policy is to implement European policy.
In March 2011 and April 2012, we reported that Aegis Ashore faces two issues related to radio-frequency spectrum: (1) the possibility that the SPY-1 radar might interfere with host nation wireless usage, and (2) the need for the program and the relevant host nation authorities to work together to ensure that host nations approve use of the operating frequency needed for the SPY-1 radar. Program management officials told us that they are analyzing whether or not the SPY-1 radar's frequency usage would interfere with wireless usage in Romania. They expect to conclude this analysis in 2013. Although program management officials stated that the program could address potential frequency interference, they also stated that some potential adjustments could be costly and would have unknown effects on the radar's operational capability. In addition, a program management official stated a preference not to make changes to Aegis Ashore or its operating frequency, both because of the cost of such changes and a desire to ensure limited differences between Aegis Ashore and Aegis BMD ships. The program has requested the use of the SPY-1 operating frequency in Romania. The program has identified this request as a top issue for Aegis Ashore.

Instability in the Aegis Ashore program's resource baseline makes it impossible to understand annual or longer-term progress by comparing the latest reported estimates to the prior year baseline or the original baseline. In order for baselines to be useful for managing and overseeing a program, they need to be stable over time so progress can be measured and so that decision makers can determine how best to allocate limited resources. The total estimated costs for the Aegis Ashore facilities in Romania and Hawaii have increased from $813 million to $1.6 billion from the time the program first established baselines to the estimate reported in the February 2012 BMDS Accountability Report (BAR).

As we reported in April 2012, MDA prematurely set the Aegis Ashore baseline in 2010 before program requirements were understood and before the acquisition strategy was firm. The program established its baseline for product development in June 2010 with a total cost estimate of $813 million. However, 3 days later, when the program submitted this baseline to Congress in the 2010 BAR, it increased the total cost estimate by 19 percent to $966 million. The program attributed these changes to refined program requirements and a review of earlier estimates. Since that time, the program has repeatedly added and moved a significant amount of content in both its resource and schedule baselines to respond to acquisition strategy changes and requirements that were added after the baseline was set. For example, in the 2012 BAR, the cost to complete these efforts increased to $1.6 billion because the program added costs that were previously accounted for under another program and added costs that were a part of the program but had not been included in prior BAR baselines. In addition, the program's unit cost baselines were significantly affected by new requirements for the program to pay for its deckhouse construction costs with development money instead of military construction funds. However, despite these changes, the resource baseline still does not include all costs associated with Aegis Ashore—such as the costs of adjustments to the Aegis BMD modernized weapon system software needed for Aegis Ashore.
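A minimal check of the growth figures cited above, computing percentage change as (new minus old) divided by old:

\[
\frac{\$966M - \$813M}{\$813M} \approx 0.19 = 19\ \text{percent}, \qquad \frac{\$1{,}600M - \$813M}{\$813M} \approx 0.97.
\]

That is, the $1.6 billion estimate reported in the 2012 BAR represents roughly a 97 percent increase, nearly a doubling, over the initial $813 million baseline (the exact figure depends on rounding in the reported $1.6 billion).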
The Aegis Ashore program rebaselined its estimates for the Romania and Hawaii facilities in June 2012, which resulted in a minor increase to its total cost estimate. A program management official stated that program costs will continue to change as future contracts, most of which are Navy contracts outside of the control of the program, are negotiated. However, the official stated that the 2013 BAR should report more concrete costs as more contracts will have been negotiated. In July 2012, MDA established a resource baseline for the Poland Aegis Ashore facility. The Poland baseline, with a total estimated cost to develop and procure the facility of $746 million, includes MDA operations and support, disposal, global deployment, military construction, and production and deployment costs. Based on these new baselines, MDA's reported cost for all three Aegis Ashore facilities is $2.3 billion. It remains unclear what, if any, costs would be borne by other DOD organizations, such as the Navy, to operate and maintain these facilities over time.

MDA's many adjustments to the Aegis Ashore schedule baseline content affected our ability to assess progress. Many new activities were added in 2012. In addition, comparing the estimated dates for scheduled activities listed in the 2012 BAR to the dates baselined in the 2010 BAR is impossible in some cases because activities from the 2010 BAR were split into multiple events, renamed, or eliminated altogether in the 2012 BAR. MDA also redistributed planned activities from the Aegis Ashore schedule baselines into several other Aegis BMD schedule baselines. For example, activities related to software for Aegis Ashore were moved from the Aegis Ashore baseline and were split up and added to two other baselines for the Aegis second generation and modernized weapon system software. Rearranging content made it impossible to track the progress of some of these activities against the prior year and original baselines. While we were not able to track all of the scheduled events from prior years, we were able to track a select number of activities, which we assess below.

Due to schedule pressures experienced in 2011, the program adopted a new deckhouse acquisition strategy in fiscal year 2011, in which the test deckhouse and first operational deckhouse are constructed concurrently; this change altered the previous schedule for many activities. While the program was able to hold the system-level critical design review with only a quarter slip, many of the key events that were planned in fiscal years 2012 and 2013 were delayed a year or more. For example, confirmation of the deckhouse design was delayed by a year and a half to the third quarter of fiscal year 2013, and demonstrating the Aegis Ashore capability integrated in the deckhouse slipped 1 year to the fourth quarter of fiscal year 2013. In addition, the planned date to demonstrate Aegis Ashore's ability to be moved to a new location and reconstituted has been delayed approximately 2.5 years to the fourth quarter of fiscal year 2015, as seen in figure 6.

The Navy and MDA are developing a modernized version of the Aegis weapon system software for fleet-wide use and for use with Aegis Ashore.
The Aegis modernized weapon system software is being developed in two versions: the first integrates the Aegis BMD second generation weapon system software with Aegis ship anti-air defense capabilities, while the second contains a Capability Upgrade to increase the types and numbers of ballistic missiles the system can engage. The Capability Upgrade version of the software was added to the baseline for the first time in the February 2012 BAR. Aegis Ashore is planned to be deployed initially with the Capability Upgrade version.

Between the 2010 and 2012 BARs, the reported unit costs for the modernized weapon system software increased significantly, as seen in figure 7, because the estimates now include additional funds for a new software version and other efforts needed to adapt it for Aegis Ashore. For example, the unit cost to upgrade to the modernized software increased by 29 percent and the unit cost to develop and upgrade to the modernized software increased by 33 percent from the baselines originally reported in the 2010 BAR. These unit costs increased because the total estimated development costs for the Aegis BMD modernized weapon system software increased by 30 percent to include costs for an Aegis Ashore computer program as well as the new Capability Upgrade version of the software. Although these unit cost changes were above the 5 percent reporting threshold that MDA established, they were not separately reported because the agency attributed these increases to expanded program content and not to real cost growth.

In 2012, MDA consolidated the existing baseline for modernized Aegis software with activities required to adapt it to operate on land in Aegis Ashore. In addition, the original baseline was expanded to include activities for the development of the Capability Upgrade version of the software. Because of these changes, it was impossible to track the progress of all previously baselined activities. Selected activities we were able to track are discussed below.

The modernized Aegis weapon system software program met many of its schedule goals in fiscal year 2012 and early 2013, with only small delays. For example, the program delivered the modernized weapon system software for installation on a Navy destroyer in the third quarter of fiscal year 2012, after a minor delay. The most significant slip was in the demonstration of ballistic missile defense capabilities for the system software, which was delayed by almost two fiscal quarters. Specifically, the program encountered challenges with integrating the multi-mission signal processor—a key component responsible for integrating ballistic missile defense and anti-air defense capabilities so they can be executed simultaneously. The program demonstrated a full system integration of anti-air defense and ballistic missile defense capability in the first quarter of fiscal year 2013, as seen in figure 8.

In March 2012, the SM-3 Block IIA program held and successfully completed the system preliminary design review, meaning the program was able to demonstrate that the technologies and resources available for the SM-3 Block IIA could result in a product that matched its requirements. The program completed this review after delaying it for more than one year to address technology development problems with four of its components.
Although adjustments made in 2011 to recover from issues with these components increased estimated program development costs by $296 million, we reported in April 2012 that these adjustments may reduce future cost growth and reduce acquisition risk. Because the SM-3 Block IIA program is currently in MDA's technology development phase, its efforts are primarily focused on developing and maturing its technology. During the program's preliminary design review, several important technical issues were identified that may affect program progress, and those issues increased in significance after the review. These technology challenges could lead to delays to the program's critical design review schedule. They affect key components such as the nosecone and second and third stage rocket motors. For these issues, the program has identified the cause, redesigned some components (which it will need to test to ensure they work as intended), or determined a path to resolve the issue.

In addition, the program experienced some problems in fiscal year 2012 developing the new throttleable divert and attitude control system, which has historically been a challenge for SM-3 development—particularly for the SM-3 Block IB. During fiscal year 2012, the program experienced delays obtaining a part needed for this system from one of its suppliers. Because the part has no substitute or alternate supplier, concerns were raised about the delays affecting the program schedule. However, the contractor and program are working to ensure the throttleable divert and attitude control system and its components do not affect the program schedule. Program management officials told us they are applying lessons learned from the SM-3 Block IB program.

The SM-3 Block IIA program is preparing for key decisions on integration, testing, and production after the initial cooperative development project is completed, currently scheduled for fiscal year 2017. Any decisions it makes will affect the overall program cost and timing. For example, program officials have stated that the program has not yet determined the number of development and production rounds to be produced after the first 22 development and 12 initial production rounds have been delivered. In addition, any decisions on future production plans will require negotiations with Japan since many key components of the missile are developed there.

The SM-3 Block IIB is a planned Aegis BMD interceptor intended to contribute to the defense of the United States by providing the first tier of a layered defense against some intercontinental ballistic missiles. It is also expected to contribute to regional defense against medium- and intermediate-range ballistic missiles. The SM-3 Block IIB program began in June 2010 and entered the technology development phase in July 2011. Given its early stage of development, the SM-3 Block IIB does not have cost, schedule, or performance baselines and is not managed within the Aegis BMD program office. Instead, this program has a tentative schedule and is being managed within MDA's Advanced Technology office. It is gradually transitioning management to the Aegis BMD program office, a transfer that is planned to be completed by fiscal year 2015. The SM-3 Block IIB is planned to be fielded by 2022 at the earliest as part of the fourth phase of U.S. missile defense in Europe. The SM-3 Block IIB plans to use a third generation version of the Aegis weapon system software that is still in development.
Towards the end of our audit work, in April 2013, DOD proposed canceling the Aegis BMD SM-3 Block IIB program in the Fiscal Year 2014 President's Budget Submission. Because the proposed cancellation occurred in the last few weeks of our audit, we were not able to assess the effects of the program's proposed cancellation and incorporate this information into our report.

The fiscal year 2012 budget reduced SM-3 Block IIB funding by nearly 90 percent, from $123 million to $13 million. DOD reduced the budget in response to congressional concerns about concurrency in the program's schedule and other concerns about the mission of the program. In order to maintain some program activities, including the work of three contractors that are developing possible concepts for the missile, the agency redirected $15 million in funds originally intended for other programs. However, to manage the program within the new budget, the program revised its schedule to delay key events—most notably, the planned initial capability of the SM-3 Block IIB, which slipped from 2020 to 2021. Program management officials stated that the initial capability has been delayed further—to 2022—due to the continuing resolution enacted in fiscal year 2012.

In fiscal year 2012, the program planned to reduce concurrency by delaying product development until after a key design review. We reported in April 2012 that the program planned to award the contract for product development prior to holding its preliminary design review. This sequence would have committed the program to developing a product with less technical knowledge than our prior work on acquisition best practices has shown is needed, and without fully ensuring that requirements are defined, feasible, and achievable within cost, schedule, and other system constraints. DOD concurred with a recommendation we made in our April 2012 report to address this concurrency risk. The program does not yet have a final acquisition strategy, but, based on its current tentative plans, the concurrency in the program schedule has decreased. MDA adjusted the program's tentative schedule to delay the start of product development until after the preliminary design review, a sequencing that will increase technical knowledge prior to committing to development. Further, the revised tentative schedule postpones the start of product development until fiscal year 2017, which allows the program additional time to mature key technologies.

In addition, the program continues risk reduction activities—although it has had to limit its efforts to focus on key components because of fiscal year 2012 funding limitations. We reported last year that the program was using risk reduction contracts to develop technologies that could cut across versions of the SM-3. During fiscal year 2012, the program reported several significant developments related to risk reduction for the focal plane array, which is a component that helps the missile identify targets, as well as the divert and attitude control system, which maneuvers the warhead toward the target. For example, the program completed development, fabrication, and testing of the first focal plane arrays. In addition, it completed a design prototype for a third stage rocket motor that meets key SM-3 Block IIB requirements. We reported in April 2012 that these risk-reduction efforts may improve performance across the SM-3 variants.
In fiscal year 2012, the program also reported benefits from competition among contractors through a better understanding of the program's progress, possibilities for the missile, and risks associated with those possibilities. The program continues to use three contractors to develop concepts during the technology development phase. In April 2012, we reported on the benefits of competition among contractors, particularly the increase in technical innovation.

The SM-3 Block IIB is being designed for deployment on both Aegis BMD ships and on land. The SM-3 Block IIB program has been considering missile concepts with two options for the diameter of the interceptors—either 27 or 22 inches—and two options for the propellant of a maneuvering component—either liquid or solid. To be ship compatible means that the program must consider Navy needs and requirements when developing the specifications of the SM-3 Block IIB. While recent Navy decisions have allowed the program to consider a variety of options for the SM-3 Block IIB, there are cost and schedule risks associated with each of these missile configurations.

In 2012, the Navy decided that the program could consider liquid propellants. The Navy had banned the use of liquid propellants on ships in 1988 due to the potential for substantial ship damage, crew injury, and loss of life from unintended explosive incidents with liquid propellants. In the summer of 2012, the Navy reaffirmed this position with regard to the SM-3 Block IIB. However, in October 2012 the Navy determined it would allow the program to develop concepts for the SM-3 Block IIB that use liquid propellants. While the Navy memo allowed these concepts to be explored in the early stages of the program, the memo was not a final decision to allow the use of liquid propellants on ships. Liquid propellants can provide performance increases (more speed and range) compared to solid propellants, but at a greater safety risk. However, the Navy also stated in its summer 2012 memo that if the program decided to use liquid propellants on a ship, an expensive and lengthy development effort would be needed to reduce the safety risks of having liquid propellants on a ship to an acceptable level. In addition, because of the technology issues associated with undertaking this effort, there would be no assurance the outcome would be successful. Further, many ship modifications would be required across multiple ship classes.

In addition, the October memo stated that the Navy was open to accepting modifications to its vertical launch system, which is a missile launching system that is already installed on Aegis ships and will be used at Aegis Ashore facilities. The 27-inch diameter missile would provide more capability than the 22-inch diameter missile. However, because its diameter is larger than that of other missiles used by the system, it would require at least some modifications to the vertical launch system, which could increase costs. A smaller, 22-inch diameter missile would not require such modifications. Concept and technology development are still ongoing, and the program has not selected the diameter or the type of propellant. Although the smaller 22-inch diameter missile with solid propellant would likely be a lower risk and cost option, both the Navy and MDA have noted that the capability limitations are significant.
However, pursuing a larger, 27-inch diameter missile with liquid propellant, while it could provide many of the needed capabilities, might also introduce significant cost and schedule risks for the program, in part due to the safety risks associated with liquid propellants on ships. The program tentatively plans to select a configuration for its SM-3 Block IIB in fiscal year 2015.

We have previously reported that the SM-3 Block IIB program did not conduct a formal analysis of alternatives (AOA) prior to beginning technology development. In 2012, we were asked to assess the extent to which an AOA was conducted for the program. AOAs provide insight into the technical feasibility and costs of alternatives by determining if a concept can be developed and produced within existing resources. Although MDA is not required to conduct an AOA for its programs because of its acquisition flexibilities, we have previously reported that an AOA can be a key step in ensuring that new programs have a sound acquisition basis. While program management officials identified two reviews that they consider similar to an AOA, the reviews were not intended to be AOAs, and they did not address all of the key questions that would normally be included as part of an AOA. For example, the reviews did not consider the life-cycle costs for each alternative or the programmatic risks of the alternatives. Further, while the reviews did consider alternatives that could provide validated capabilities, the range of alternatives considered did not include non-Aegis missile options that could provide an additional layer of defense of the United States. This narrow range of alternatives is particularly problematic because it limits the quality of the answers that can be provided for other key questions.

As the program has progressed, additional analysis has led to changes in the initial program assumptions and to results that suggest additional development and investment will be needed by the program to defend the U.S. homeland. MDA initially assumed that SM-3 Block IIB interceptors would be based on land at host nation facilities in Romania and Poland. However, subsequent MDA analyses demonstrated that the Romania site was not a good location from a flight path standpoint for defending the United States with the SM-3 Block IIB, and that the Poland site may require the development of the ability to launch the interceptor earlier—during the boost phase of the threat missile—to be useful for defense of the United States. MDA technical analysis in 2012 concluded that a ship-based SM-3 Block IIB in the North Sea would be better located for U.S. homeland defense and would not require a launch-during-boost capability. While MDA initially assumed the missile would be land-based, the program now requires the SM-3 Block IIB to be both ship and land compatible. To some extent, this progression has been driven by the early decision to narrow solutions to an Aegis-based missile without the benefit of a robust analysis of other alternatives. While this does not mean the SM-3 Block IIB is not a viable choice, we have previously reported that without fully exploring alternatives, programs may not achieve an optimal concept for the war fighter, are at risk for cost increases, and can face schedule delays or technology maturity challenges.
The current generation of Ballistic Missile Defense System (BMDS) sensors includes the following:

Sea Based X-Band (SBX) is a sea-based radar capable of tracking, discriminating, and assessing the flight of ballistic missiles. SBX primarily supports the Ground-based Midcourse Defense (GMD) system for defense of the United States and is considered a critical sensor for GMD, in part because it is able to provide tracking information to the GMD interceptor as it targets an incoming threat missile. SBX is docked near Hawaii when not in testing or operational status.

Upgraded Early Warning Radars are U.S. Air Force early warning radars that are upgraded and integrated into the BMDS to provide sensor coverage for critical early warning, tracking, object classification, and cueing data. Upgraded Early Warning Radars are located at Beale, California; Fylingdales, United Kingdom; and Thule, Greenland. MDA awarded a contract to upgrade the early warning radars at Clear, Alaska, and Cape Cod, Massachusetts. The upgrades to the Clear and Cape Cod Early Warning Radar sites are joint MDA/Air Force projects, and both organizations are contributing funding to these sites.

Cobra Dane radar is a U.S. Air Force radar located at Shemya, Alaska, that has been upgraded and integrated into the BMDS to provide missile acquisition, tracking, object classification, and cueing data. Cobra Dane supports GMD for homeland defense.

AN/TPY-2 is a transportable, high-resolution X-band radar that is capable of tracking all classes of ballistic missiles. AN/TPY-2 in the forward-based mode is capable of detecting and tracking missiles in all stages of flight to support Aegis BMD and GMD engagements and provides threat missile data to C2BMC. AN/TPY-2 in the terminal mode can track missiles in the later stages of flight to support THAAD engagements. Four AN/TPY-2 radars for use in forward-based mode are deployed to support regional defense: two in U.S. European Command, one in U.S. Pacific Command, and one in U.S. Central Command.

MDA removed the SBX radar from operational status and placed it into a limited test support status beginning in 2012 due to budget concerns. Limited test support status means SBX will support BMDS flight and ground tests as appropriate but can be recalled to active, operational status when warnings indicate a need to do so. MDA officials stated that cuts by the Office of the Secretary of Defense to MDA's fiscal year 2013 budget required the agency to find approximately $2 billion in overall reductions. By transitioning SBX to a limited test support status, MDA officials expect to save almost $670 million in operation and maintenance costs for fiscal years 2013 through 2018.

Because SBX is primarily used to support GMD's defense of the United States, removing SBX from operational status also changes how the BMDS operates. However, MDA officials told us that SBX was developed to assist in countering a threat that has not yet manifested itself and, therefore, from an operational standpoint, the radar is not currently needed. An official with U.S. Northern Command, which is concerned with defense of the United States, told us that the command has developed alternatives for conducting engagements without the SBX. However, U.S. Northern Command's 2011 assessment of the BMDS notes there is a difference in how the BMDS operates without SBX, the details of which are classified.
According to MDA officials, to continue operating Cobra Dane beyond 2015, when sustainment funding is scheduled to end, the Air Force, with input from MDA, will need to determine whether to proceed with a service life extension plan to address sustainability concerns. Cobra Dane is a vital sensor for GMD—especially with the limited availability of SBX. MDA officials stated the Air Force and MDA would likely share the cost of this extension. However, they told us that it is unclear how many years the plan would extend the service life of Cobra Dane and that the agency is exploring other long-term solutions. One option is to replace Cobra Dane with a new radar, although doing so is likely to be costly. One contractor-funded study estimated the life-cycle cost of a Cobra Dane replacement at approximately $1 billion. MDA officials told us that, for BMDS purposes, their preferred long-term solution is to replace the functions currently performed by the Cobra Dane radar with the Precision Tracking Space System (PTSS), which the agency currently expects to become fully operational in 2023.

Although DOD has upgraded the Cobra Dane radar in the past, it has not yet confirmed those upgrades work in an intercept flight test. The radar's capabilities were last demonstrated during a fly-by flight test in September 2005. The Director, Operational Test and Evaluation has reported that, due to Cobra Dane's location and field of view, the upgrades have been constrained to ground testing using models and simulations, and these tests were limited by the continuing lack of confidence that the models used accurately portray BMDS performance. The Director, Operational Test and Evaluation further stated that MDA would conduct a flight test in Cobra Dane's field of view to confirm that the upgrades work as intended. MDA had originally planned to complete this flight test in late fiscal year 2010 but has delayed it until fiscal year 2015.

MDA is planning to procure fewer AN/TPY-2 radars than previously planned even though the recent increased focus on regional—in addition to homeland—defense has placed them in high demand from various combatant commands. MDA reduced the number of AN/TPY-2 radars from 18 planned in the fiscal year 2012 budget to 11 in the fiscal year 2013 budget. The agency decided to procure 7 fewer AN/TPY-2 radars because fewer THAAD batteries, which use the radars, are being procured and because of cuts in its fiscal year 2013 budget. Currently, the last AN/TPY-2 procurement is scheduled for fiscal year 2013, and production will end in fiscal year 2015. Officials told us, however, that the agency may have some opportunities to procure additional AN/TPY-2 radars for the U.S. if additional radars are produced for sale to foreign governments in the interim.

The only baselines reported for BMDS Sensors in the 2012 BMDS Accountability Report (BAR) are for the AN/TPY-2. Since the 2010 BAR baselines were established, the AN/TPY-2 program entered the initial production phase for its ninth and tenth radars and established a new baseline. However, the AN/TPY-2 reported unit costs have increased by less than 5 percent from the 2010 BAR to the 2012 BAR. The reported average cost in the 2012 BAR for MDA to produce one AN/TPY-2 is $187 million, and the reported average cost to develop and produce one AN/TPY-2 is $226 million. In accordance with its 2012 BAR reporting threshold, MDA did not separately report or explain cost increases of less than 5 percent over the prior year's reported unit cost.
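MDA's reporting threshold operates as a simple decision rule: a year-over-year change in a reported unit cost is separately reported and explained only if it exceeds 5 percent of the prior year's reported value. A minimal sketch using the AN/TPY-2 production unit cost above, with a hypothetical change amount for illustration:

\[
\text{threshold} \approx 0.05 \times \$187\text{ million} \approx \$9.4\text{ million per radar};
\]

a hypothetical increase of $8 million per radar, about 4.3 percent, would fall below this threshold and, under the 2012 BAR convention, would not be separately reported or explained.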
The AN/TPY-2 radar had some success in meeting 2012 BAR schedule goals; however, some milestones—including the assessment of a key capability—were delayed. Specifically, confirming the radar's advanced capability to distinguish incoming threats while in terminal mode was delayed until the fourth quarter of fiscal year 2015, about 4 years later than originally planned. The delay was driven by revisions to MDA's ground and flight test program and a slip of a key THAAD test designed to assess the capability in an operational environment. In addition, the program delayed Production Readiness Risk Assessments—formal assessments used to determine if production commitments can be made without incurring unacceptable risks to schedule, performance, cost, or other established criteria—for future deliveries of AN/TPY-2 radars by 2 years. The delay was due to an obsolete radar processor and difficulty in establishing a replacement for it.

During fiscal year 2012, the program successfully deployed a radar to Turkey as part of the BMDS for regional defense in Europe. Also during the fiscal year, MDA reduced the total number of AN/TPY-2 radars being procured. In response to this reduction, the program accelerated the delivery schedule for two of the three AN/TPY-2 radars already in production. MDA delivered its eighth radar after a short delay and projected the next three radars to be delivered on time or ahead of schedule, as seen in figure 9.

The GMD program's failure investigation and return to intercept effort have been rigorous. MDA convened a failure review board composed of independent experts to conduct an extensive investigation into the cause of the failure and perform modeling and testing to confirm the failure conditions. During the investigation, a series of ground tests were conducted to recreate and confirm the cause of failure, characterize the environment, and test materials, components, and systems. According to a GMD program official, the program conducted over 50 component and subcomponent failure investigation and resolution tests. These tests focused on two primary areas of the kill vehicle—the thrusters and the guidance system. While initial ground testing could not replicate the environment in which the kill vehicle operates, the program did develop new test equipment that provided conditions similar to flight and recreated and confirmed the failure. In August 2011, the investigation attributed the failure to a guidance system fault that occurred in space and caused the kill vehicle to fail in the final seconds of the test. The investigation concluded that the guidance system required redesign and further development.

The program's continuing concurrent acquisition practices have disrupted development, testing, and production since 2010, thus delaying deliveries to the warfighter. Simultaneous with the failure investigation, MDA and its contractors undertook an effort to develop hardware and firmware solutions to return the program to intercept flight tests. These solutions were then planned to be assessed in two flight tests to determine whether they successfully addressed the shocks and vibrations the kill vehicle experiences during flight. Because the initial design solutions were developed concurrently and prior to the full understanding of the cause of failure, when developmental issues then arose, the flight tests had to be delayed and their objectives modified.
As originally planned, the first non-intercept flight test was designed to demonstrate the effectiveness of the resolution efforts by testing both the new hardware and firmware in order to support a decision to resume manufacturing of kill vehicles. The test was originally scheduled for the fourth quarter of fiscal year 2011 but was not conducted until January 2013. This delay was due to difficulties developing the new firmware and concerns about the device that detonates in order to release the kill vehicle from the booster. MDA chose to modify the objectives of this test in order to prevent further delays. For example, the kill vehicle tested the new guidance system but did not have the new firmware as originally planned. Additionally, the test was no longer designed to demonstrate that the CE-II works as intended in order to resume manufacturing. However, the test was a significant diagnostic flight test to gather data on the operational environment that is not achievable in ground tests. According to MDA officials, the test also included certain other components that have undergone design changes to address issues discovered in prior flight tests. MDA's final evaluation of the January 2013 test was not available in time for our review for this report. The next planned CE-II intercept test, designed to demonstrate its capability, has been delayed from the third quarter of fiscal year 2012 by development issues. As of the conclusion of our review, the exact timing and sequence of further GMD flight tests were still to be determined because the flight test schedule continues to change.

The December 2010 failure also delayed MDA's broader GMD developmental flight testing. Because MDA inserted two flight tests to show that the causes of failure had been resolved, MDA had to reschedule its test plan, moving a flight test from fiscal year 2013 to 2016 and delaying a planned operational test from fiscal year 2015 until 2016. MDA also delayed completing developmental flight testing from 2021 to at least the fourth quarter of fiscal year 2022, well after the scheduled completion of CE-II manufacturing. In continuing to follow a concurrent acquisition strategy, DOD is accepting the risk that later flight tests may find issues requiring costly design changes and retrofit programs to resolve.

Prior to the December 2010 flight test failure, MDA planned to complete delivery of the CE-II interceptors by the fourth quarter of fiscal year 2012. However, due to the delay in conducting the intercept test necessary to resume deliveries, the completion date has not yet been determined; as of May 2012, the program expected to complete deliveries by 2015. According to the GMD program manager, the program will resume integration of certain kill vehicle components it has determined are not related to the failure prior to the next intercept test.

GMD's recent program disruptions are tied to the initial adoption of concurrent acquisition practices and the continuation of these practices as developmental problems occurred. In 2004, MDA committed to a highly concurrent development, production, and fielding strategy for the new interceptor, approving production before completing development of the prior version or completing development or flight testing of the new components. MDA proceeded to concurrently develop, manufacture, and deliver, starting in 2008, 12 of these interceptors even though they had not yet been successfully tested, ultimately resulting in significant delays and cost growth.
The cost to demonstrate the new CE-II kill vehicle through flight testing and to fix the CE-IIs already produced continues to increase. MDA planned to demonstrate the CE-II capability in January 2010 for approximately $236 million with one flight test. In April 2012, we reported that the cost had increased to $1.2 billion. According to MDA, the cost growth was, in part, due to reconducting flight tests (which includes the cost of planning, test execution and range support, the target, and post-test analysis), as well as conducting failure investigations and fixing already delivered CE-II interceptors, as noted in table 3. This estimate does not include the costs already expended during development of the interceptor and the target. For example, the costs of the flight tests do not include nonrecurring development costs, such as those for systems engineering and test and evaluation, among others. Often, these costs are incurred many years before flight tests are conducted. Consequently, including nonrecurring development costs for both the CE-II and the targets would increase the costs by hundreds of millions of dollars for each flight test and increase the overall cost outlined in table 3.

The cost to demonstrate the new CE-II kill vehicle also continues to grow due to the delays in conducting the next intercept test. According to the GMD program manager, although the total cost is not yet determined, delays in conducting the intercept test are estimated to cost about $3 million per month. According to the Department of Defense's Report to Congress on Ground-based Midcourse Defense December 2010 Flight Test Failure and Correction Plan, MDA will fund the resolution efforts within the existing budget appropriations. The significant costs of the flight tests needed to demonstrate the failure resolution, according to this report, will be offset in large part by realigning the resources already allocated to planned testing that has not occurred. MDA will also delay new interceptor manufacturing and interceptor upgrades that were dependent on the redesigned CE-II kill vehicle.

The GMD program's reported baseline in the 2012 BMDS Accountability Report (BAR) represents activities and associated costs needed to achieve an initial defense of the United States. Although the program planned to report a new baseline in the 2013 BAR for other activities and associated costs needed for its next set of capabilities, it recently delayed this effort. Adjustments to the content of the GMD program's resource baseline in 2012 have obscured cost progress to the extent that we are unable to assess longer-term or near-term progress. Although we have reported over the past few years that the program has experienced (1) significant technical problems, (2) production disruptions, and (3) the addition of previously unplanned and costly work, the GMD total cost estimate as reported in the resource baseline decreased from 2010 to 2012. The reported costs decreased because the program moved activities from its initial baseline to its next, undefined effort to enhance defense of the United States. By moving these activities, MDA freed up funds, which it used instead for failure resolution efforts. In addition, because the next baseline will not be defined until after these activities have already been added to it, the additional cost of conducting these activities in the next baseline will not be identifiable.
The full extent of actual cost growth may never be determined or visible to decision makers for either baseline because of these adjustments.

While GMD has been able to complete some of its schedule and delivery goals, the program continued to experience challenges with its return-to-intercept activities, which delayed key developmental events and planned interceptor deliveries. For example, in the first quarter of fiscal year 2013, MDA completed the construction of a new power plant at Fort Greely, Alaska, as seen in figure 11. The delivery was completed following a nearly 2-year schedule delay driven by failures identified during contractor integration testing, which necessitated corrective action and additional testing. This power plant is important because it will provide an independent power source to the GMD missile fields at Fort Greely.

Appendix VIII: Precision Tracking Space System (PTSS)

PTSS is a space-based infrared sensor system designed to track ballistic missiles after boost and through the middle part of their flight. The operational satellite system will include a constellation of nine satellites in orbit at the same time around the earth's equator. These satellites communicate with one another and a ground station to provide intercept-quality tracks of enemy missiles to other BMDS elements for engagement. The system is expected to expand the BMDS's ability to track ballistic missiles by providing persistent coverage of approximately 70 percent of the earth's surface while handling more advanced missiles and larger raid sizes than current ground- and sea-based radar sensors. The PTSS program is preceded by a prior MDA demonstration effort for space-based missile tracking, the Space Tracking and Surveillance System, which continues to inform PTSS development.

Because the PTSS program is in its very early stages, it does not have cost, schedule, or performance baselines. The program will set baselines when it begins product development, which is scheduled for fiscal year 2014. The PTSS program is designing the acquisition to allow future adjustments, such as an increase to constellation size or changes to how the satellites communicate with the rest of the BMDS. This flexibility would permit the system to adjust to changes in the threat. The program also plans to upgrade its capabilities from tracking objects to discriminating among the objects it tracks. DOD proposed canceling the PTSS program in April 2013 in the Fiscal Year 2014 President's Budget Submission. Because the proposed cancellation occurred in the last few weeks of our audit, we were not able to assess the effects of the program's proposed cancellation and incorporate this information into our report.

In August 2012, MDA formally defined the PTSS operational constellation as nine satellites in orbit at the same time and established the planned launch schedule for the life of the program. The program plans to launch two laboratory-built developmental satellites in March 2018, then four industry-built satellites to achieve an initial operational capability of six satellites in December 2021, and finally to achieve full operational capability with a nine-satellite constellation in December 2023. As part of this plan, the program expects the two laboratory-built developmental satellites to be part of the operational constellation until December 2025, when it will begin launching replacement satellites. From initial launch in 2018 to the program's projected completion in 2040, the program plans to procure a total of 26 satellites.
A recent study by the National Academy of Sciences estimated the PTSS life-cycle cost to range between $18.2 billion and $37 billion, based on configurations for a 9-satellite and a 12-satellite constellation. DOD's Cost Assessment and Program Evaluation office is currently conducting an independent cost estimate for PTSS and plans to finish the assessment in April 2013. It is unclear whether the independent cost estimate will include costs for additional upgrades the program could add in the future, such as improving the satellites' discrimination capabilities or optimizing the program for observation of space objects.

An Analysis of Alternatives (AOA) is an analytical study that compares the operational effectiveness, cost, and risks of alternative potential solutions to address valid needs and shortfalls in operational capability. We previously reported that a robust AOA addresses some key questions, such as determining which alternatives provide validated capabilities, assessing the technical, operational, and programmatic risks of each alternative, determining the life-cycle cost of each alternative, and comparing the alternatives to one another. Although MDA is not required to complete an AOA for its programs because of its acquisition flexibilities, it has conducted a number of studies in the past related to PTSS to compare alternatives. None of these studies can be considered robust AOAs, primarily because the studies considered too narrow a range of alternatives. For example, in October 2011, the U.S. Strategic Command, at the direction of the Under Secretary of Defense for Acquisition, Technology, and Logistics, conducted an assessment that compared the expected operational performance of PTSS against two other MDA sensors—the operational AN/TPY-2 radar and the developmental Airborne Infrared. While this review could be a useful source of information for a more robust AOA, the study cannot be considered a robust AOA because it assessed too narrow a range of alternatives and did not fully assess programmatic and technical risks, both of which are important aspects of a robust AOA. Although MDA has also conducted a number of past studies that mostly focused on follow-on concepts for MDA's Space Tracking and Surveillance System demonstration program, none of the completed studies considered a broad range of alternatives.

Partially in response to concerns raised by the National Academy of Sciences in 2012 about the costs and benefits of the PTSS program, in January 2013, Congress required DOD to evaluate PTSS alternatives. DOD's Cost Assessment and Program Evaluation office is conducting a comprehensive review of the PTSS program in response. Because the study was ongoing at the time of our review, it was not available for us to assess. DOD plans to complete the study in April 2013.

The PTSS program faces significant technical challenges to achieving a fully operational constellation by 2023. Some of PTSS's major components require significant development to function in the high radiation environment in which the satellites will operate. If that development is not successful, satellite performance could be lower than currently planned and the expected life of the satellites could be reduced. In addition, the program expects some level of performance reduction in the satellites on orbit because it plans to operate the satellites past their planned mission life.
Although the program is developing the satellites to achieve a fully operational, nine-satellite constellation no sooner than 2023, its strategy leaves little margin for error. If these technical risks are realized, the operational performance of the constellation could be reduced, development costs could grow, and the cost to maintain the planned nine-satellite constellation could grow significantly if more frequent replacement is required.

The high radiation environment in which the PTSS satellites will operate is more intense than that experienced by other satellite systems and could result in reduced performance. The PTSS satellites are designed to view ballistic missiles as they fly above the earth's horizon. To accomplish this, the satellites will orbit the earth at an altitude of approximately 930 miles above the earth's surface. Consequently, the satellites will pass within the region of one of the earth's radiation belts, where fast-moving protons and electrons can penetrate and damage sensitive satellite equipment. The Space Tracking and Surveillance System demonstration satellites currently operate in a similarly high radiation environment, at an altitude of approximately 840 miles, and, as we have previously reported, have experienced multiple problems as a result. Although program officials anticipate that these problems will occur occasionally and have successfully recovered from all prior incidents, radiation events have affected the Space Tracking and Surveillance System satellites' availability and contributed to an 11-month delay in completing initial check-out for the satellites to reach full capability after launch. Recently, the National Aeronautics and Space Administration launched the Van Allen Probes, a space program led by the same laboratory leading the PTSS design efforts, to explore the earth's radiation belts. These probes may collect data that could help design PTSS components to protect them against radiation damage. The Van Allen Probes recently discovered a previously unknown additional radiation belt, indicating a more dynamic radiation environment than previously thought.

Recognizing the high radiation challenge, the PTSS program is seeking to develop ways to minimize the effects of radiation damage to components. While most of PTSS's technologies are mature, some of the less mature technologies are major components of the satellite's design. These technologies have low levels of maturity, in part, because they require radiation protection at levels that have not yet been demonstrated for those specific technologies. The program has added these critical technologies to its high-risk list, including the star tracker, a component of the guidance and control system that uses stars to track the satellite's orientation, and the focal plane array, a component of the optical payload that locates and tracks enemy missiles. While some of these technologies are in early development, the program has undertaken risk reduction plans to focus their development. Because these technologies are critical, if they require additional development for radiation protection beyond what is already planned, the program could experience delays, a reduction in system performance, or a reduction in the satellites' planned mission life.

The program expects that, over time, the satellites will experience some reduction in their initial performance because it plans to operate them longer than their planned 5-year mission life.
During their mission life, the PTSS satellites are expected to perform their designed functions and effectively meet the system's performance requirements. After that 5-year period, there will be a growing likelihood that operational performance will be reduced. The program plans to launch the final satellites needed to achieve an operational nine-satellite constellation in 2023. However, it will not launch replacement satellites for the first two satellites until 2025—nearly 8 years after they were put in orbit. In fact, all of the satellites in the constellation will be operated beyond their planned mission life, with an average of 8 years in orbit and, in some cases, as long as 9.5 years.

Historically, several DOD space systems have continued to operate several years beyond their planned mission life. For example, the Global Positioning System Block IIA satellites were designed to last an average of 7.5 years but have actually lasted about twice as long. This is largely because satellites are typically designed with high levels of redundancy and other reliability measures that ensure performance over a period of time. For PTSS, employing a strategy of leaving satellites in orbit for 8 years rather than only 5 years will, in the long term, mean that MDA will purchase, produce, and launch about 16 fewer satellites through 2040, at a cost savings of several billion dollars. However, it also adds performance risk for the warfighter. Although other DOD space programs have planned for satellites to operate past their planned mission life, they usually wait until they have some on-orbit performance data from demonstration or similar previous satellites. The PTSS strategy, however, is based solely on pre-launch engineering and design analysis, with assumptions that may or may not prove to be accurate. For example, the program estimates that satellite reliability will gradually decrease over time, assuming that random failures may occur but that components are not likely to wear out prematurely. However, if radiation risks do materialize, satellite components are much more likely to wear out prematurely. Also, program officials stated that they plan to include fewer redundant measures than prior space systems to reduce cost, weight, power consumption, and design complexity. However, this may increase the likelihood that the satellites will not perform effectively beyond their planned mission life.

In April 2012, we reported that the program's acquisition strategy incorporated several important aspects of sound acquisition practices, such as competition and short development time frames. However, we also found that there were elevated acquisition risks tied to the concurrency, or overlap, between the development of the laboratory-built satellites and the industry-built satellites. Under the previous strategy, the program planned to select a manufacturer, conduct a major review to finalize the satellite design, and authorize production of items that require a long lead time (more than 2 years) for satellites 3 and 4—all while the laboratory team developed and manufactured satellites 1 and 2. Because the industry-built satellites will be under contract before on-orbit testing of the laboratory-built satellites, we found that the strategy may not enable decision makers to fully benefit from the knowledge about the design to be gained from that on-orbit testing before making major commitments. In October 2012, the program approved its third acquisition strategy, revising it so that two manufacturers are initially selected rather than one.
After the program has conducted the design review, it will select one of the two manufacturers to produce satellites 3 and 4 and authorize production of long lead items. Although the revised acquisition strategy may improve collaboration between the laboratory team and industry, the concurrency risks remain unchanged. The revised strategy may improve the opportunity for collaboration because the two manufacturers will be able to coordinate with the laboratory team while the design is being finalized. However, the same concurrent activities as in the previous strategy—finalizing the design and committing to long lead production for satellites 3 and 4 while developing satellites 1 and 2—continue. This approach will not enable decision makers to fully benefit from the knowledge about the design to be gained from on-orbit testing of the laboratory-built satellites before committing to the next industry-built satellites. Also, these first four satellites will be operational satellites, forming part of the operational nine-satellite constellation until they are replaced between 2025 and 2027. As a result, if on-orbit testing reveals the need for hardware changes, the operational constellation will not fully benefit from those changes until the initial four satellites are replaced.

MDA has chosen to demonstrate new targets for the first time during complex and costly system-level tests instead of first demonstrating them in less complex and less expensive scenarios. System-level flight tests can involve multiple BMDS elements, including land-, sea-, air-, and space-based sensors and one or more interceptors, and can cost hundreds of millions of dollars. MDA launched a new target, its E-LRALT, for the first time as part of its first system-level integrated flight test, its most complex test to date, in the first quarter of fiscal year 2013. MDA plans to launch two of its new eMRBMs for the first time during the agency's first operational system-level test in the fourth quarter of fiscal year 2013.

MDA's first system-level integrated flight test, Flight Test Integrated-01, conducted in October 2012, coordinated multiple combatant commands and missile defense elements to intercept four of five targets launched. MDA added this test as a risk reduction exercise for its planned operational test. While the E-LRALT target performed successfully, the test experienced a minor delay from September to October 2012 when the new target could not complete readiness reviews in time for the original test date. The target needed additional time to complete a series of tests of its flight termination system, a safety system that terminates the booster motor's thrust if unsafe conditions develop during flight. These tests slipped when workmanship and test setup issues required correction and further retesting, which in turn delayed the integration of these components into the missile. Despite the additional risk, this target was successfully launched for the first time and performed as expected during the integrated test in October 2012.

MDA's first operational system-level test, Flight Test Operational-01, is currently planned for the fourth quarter of fiscal year 2013. During this test, the agency plans to use a total of five targets (three ballistic missiles and two cruise missiles) and a variety of coordinated missile defense elements to conduct a highly complex scenario.
This test is a very important integrated flight test designed to demonstrate the regional capabilities of U.S. missile defense. MDA plans to use its new eMRBM target for the first time for two of the five targets during this operational test, rather than using it first in a simpler and less costly risk reduction flight test. Risk reduction flight tests are normally conducted the first time a system, such as a new target, is tested, in order to confirm that it works before other test objectives are added. This operational flight test has experienced a delay of six to nine months caused by weapon system issues and developmental problems associated with the eMRBM target. MDA was on a tight schedule to meet the original test date before issues arose with the air-launched target's restraint system, which holds the target in cradles while it is launched from an aircraft cargo hold. The entire target restraint system had to be redesigned, and the redesign was not finalized until August 2012. The delay in this target's availability contributed to MDA's decision to delay the test.

MDA's contracting strategy has evolved from a single prime contractor strategy to more competitively awarded contracts for new target types. In 2003, MDA chose a single prime contractor, Lockheed Martin, to lead the acquisition of targets under what it called the Flexible Target Family approach, which used common components and shared inventory and promised reduced acquisition time, cost savings, and increased capability. However, the approach soon proved more costly and more time-consuming than expected. Responding to congressional concern about these problems and to our 2008 recommendations, MDA revised its acquisition approach in 2009, seeking to increase competition by returning to a multiple-contract strategy with as many as four prime contractors—one for each separate target class. Shortly after, attempts to competitively award the first contract were canceled because the bids received were more expensive than anticipated. MDA completed a competitive award for an intermediate-range target in 2011 but otherwise continued to rely heavily on Lockheed Martin for new target types. For example, in 2011, MDA awarded its prime contractor three new task orders: for eMRBM targets; for a more specialized medium-range target that will be procured in smaller quantities; and for re-entry vehicles that are interchangeable among multiple targets. MDA is now using parts of both approaches, relying on its prime contractor to maintain some commonality among the new targets it develops while issuing competitive solicitations for other targets. It has recently begun to see some cost savings from the intermediate-range competition, reporting a cost $103 million lower than the independent government estimate it developed for that competition. In addition, MDA has continued to pursue additional competitive awards in fiscal years 2012 and 2013. MDA awarded a contract in October 2012 for the first two intercontinental ballistic missile targets needed for future flight tests and, according to a program official, also expects to award a contract for a new medium-range target in April 2013. These competitive contract decisions could offer more opportunities for cost efficiencies.

MDA's BMDS Accountability Report (BAR) reports baselines for cost and schedule. In the BAR, the Targets and Countermeasures program reports detailed cost and schedule information for individual targets under three baselines for short-, medium-, and intermediate-range targets.
In addition, MDA added new baselines in the 2012 BAR for common components, such as re-entry vehicles and associated objects. We focused our assessment on the new medium-range targets—the eMRBM and E-LRALT targets mentioned above. In its 2012 BAR resource baselines for Targets and Countermeasures, MDA reports an average unit cost and a non-recurring cost baseline. Non-recurring costs include the cost to design and develop a target configuration. Average unit cost is the sum of manufacturing costs for targets, funded with research, development, test, and evaluation funding, divided by the number of targets delivered. The agency reports that it uses these nonstandard unit costs because targets are modified to meet specific threat representations and are consumed in testing. In addition, no procurement funds are used to acquire targets.

Although the Targets and Countermeasures program has reported baselines since 2010, it is no longer possible to compare the average unit cost or non-recurring cost baselines reported in the 2012 BAR with the original baselines set in the 2010 BAR for any of the targets—including the eMRBM and E-LRALT targets. Unit cost baselines were affected when costs for common target components, which were previously included in the target baselines, were removed and redirected into a separate, newly created baseline for common components. In addition, the agency changed the way it calculated its unit cost estimates for the eMRBM by adding costs incurred in previous years. Non-recurring cost baselines were also affected by removing costs for common target components, adding costs incurred in previous years, and removing support costs. The agency applied these new accounting rules retroactively to the 2011 BAR and reported the revisions in the 2012 BAR, which enabled a one-year comparison.

Between the retroactively adjusted 2011 BAR and the 2012 BAR, the average unit cost and non-recurring costs for the E-LRALT target decreased by 6 and 12 percent, respectively, as seen in figure 12. According to program officials, this is because actual costs for quality control and testing requirements for this missile were lower than originally estimated. The E-LRALT is manufactured by the same contractor that manufactured the short-range air-launched target that failed during a THAAD flight test in 2009. Program officials explained that the estimates reported in the 2011 BAR assumed that the extensive quality control measures and testing requirements imposed by MDA would be more costly than they ultimately were.

As seen in figure 13, non-recurring and average unit costs for the eMRBM targets increased between the retroactively adjusted 2011 BAR and the 2012 BAR by 15 and 18 percent, respectively, because of increased testing requirements and a reduction in quantity. Non-recurring costs increased because design issues with the air launch system and additional testing requirements were added to the program after it experienced development issues. The average unit cost increased solely due to a reduction in quantity, which eliminated the opportunity to purchase the targets more efficiently. The quantity decreased from 11 to 5 targets between the 2011 and 2012 BARs because the latest agency test plan increased the number of intermediate-range targets and reduced the number of medium-range targets.
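The arithmetic behind these unit cost comparisons is simple enough to sketch. The following Python fragment is a minimal illustration, not MDA's methodology: it applies the average unit cost definition above (RDT&E-funded manufacturing costs divided by targets delivered) to hypothetical dollar figures chosen only to mimic the eMRBM's quantity cut from 11 to 5 targets.

```python
# Minimal sketch of the nonstandard "average unit cost" metric described
# above: RDT&E-funded manufacturing costs divided by targets delivered.
# All dollar figures are hypothetical; only the 11-to-5 quantity change
# comes from the report.

def average_unit_cost(total_manufacturing_cost_millions, targets_delivered):
    """Average unit cost in $ millions per target delivered."""
    return total_manufacturing_cost_millions / targets_delivered

def percent_change(baseline, revised):
    """Percent change from a prior baseline to a revised one."""
    return (revised - baseline) / baseline * 100.0

# Hypothetical: 11 targets at a $330M total buy vs. 5 targets at $177.5M.
prior_auc = average_unit_cost(330.0, 11)    # $30.0M per target
revised_auc = average_unit_cost(177.5, 5)   # $35.5M per target

change = percent_change(prior_auc, revised_auc)  # about +18 percent
print(f"prior: ${prior_auc:.1f}M  revised: ${revised_auc:.1f}M  "
      f"change: {change:+.1f}%")
# A change above MDA's 5 percent threshold would ordinarily be reported
# separately, unless attributed solely to a quantity change.
```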
Although the average unit cost change was above the 5 percent reporting threshold that MDA established in its 2012 BAR, the $6 million change in the average unit cost of the eMRBM target was not separately reported because the agency attributed this increase solely to a quantity change and not to real cost growth.

The Targets and Countermeasures program met the majority of its schedule goals to support the test program in fiscal year 2012, successfully flying eight targets. However, the program experienced delays in delivering an E-LRALT for Flight Test Integrated-01 and in delivering eMRBM targets, as previously discussed.

Appendix X: Terminal High Altitude Area Defense (THAAD)

THAAD is a rapidly deployable ground-based system designed to defend against short- and medium-range ballistic missile attacks during the middle and end of their flight. A THAAD battery currently consists of 24 interceptor missiles, three launchers, a radar, a fire control and communications system, and other support equipment. Starting in fiscal year 2013, the program plans to increase each battery to 48 interceptors and six launchers. The first two batteries have been conditionally released to the Army for initial operational use. The program plans to continue production through fiscal year 2021, producing a total of six batteries, including 503 interceptors and six radars.

Although the program still had not completed testing of the safety device for the interceptor or overcome its production issues, MDA continued its concurrent acquisition strategy by signing a production contract in early 2011 for two additional THAAD batteries. In fiscal year 2012, after a 15-month delay, THAAD was able to overcome its production issues and deliver the remainder of the interceptors needed for the first two batteries. The program resumed production in the second quarter of fiscal year 2011 after completing testing for its optical block. MDA issued a contract in July 2012 for the continued production of THAAD batteries, including procurement of additional interceptors as well as manufacturing and delivery of launchers and support equipment. The program plans to award another production contract in the fourth quarter of fiscal year 2013, which will continue production for a total of 320 interceptors through fiscal year 2017.

After experiencing a minor delay to interceptor production during 2012, the THAAD program plans to continue interceptor deliveries as planned. The program was on track to meet the new interceptor production goals in mid-2012 when faulty memory devices were discovered on the mission computers of interceptors procured in 2010 and 2011. Though the defective parts were discovered while most interceptors were still at the contractor's facility, the issue caused a production gap beginning late in the fourth quarter of fiscal year 2012. This gap put the interceptor production schedule four months behind. However, program officials have acquired and completed initial testing of the new parts and expect to recover from the delays by increasing the average rate of production from three to four interceptors per month. By making this change, they expect to be able to deliver the next set of 48 interceptors by December 2013, as scheduled. However, six interceptors with the faulty parts had already been delivered with the first two batteries and will have to be retrofitted.
The Army has declared the THAAD Weapon System safe, suitable, and supportable for Army soldiers to operate, with conditions. The Army has defined a list of conditions that must be satisfied before it approves full materiel release; examples include additional flight testing, verification of safety systems, training, and reliability improvements. The resolution plans, funding, and estimated schedules have also been defined. The program expects the last conditions to be resolved by the end of the fourth quarter of fiscal year 2017.

One of the conditions that must be met to achieve full materiel release of THAAD to the Army is incorporation of the required Thermally Initiated Venting System. The venting system is a safety feature of the interceptor that prevents the boost motor from igniting or exploding in the event that the canister holding the interceptor heats up beyond a certain temperature. The program concurrently developed and tested this system while producing the fielded interceptors. After a redesign in 2011, the system is performing better in recent testing than it has in the past. Program officials say that this safety system may not meet all of the Army's standards for full materiel release. However, the military standards for the venting requirement were written for smaller scale systems and have never been applied to a system as large as THAAD. Although the program does not expect to complete all required testing of the safety system until late in the second quarter of fiscal year 2013, it has already inserted the latest version into the interceptor production line for batteries three and four and plans to include it in all subsequent interceptors.

THAAD successfully demonstrated its capability to intercept a medium-range ballistic missile for the first time during a complex, integrated test (Flight Test Integrated-01) involving multiple BMDS elements and targets in October 2012. This test provided key data to DOD test organizations and demonstrated recent upgrades to THAAD hardware and software. The test was also used to evaluate how well THAAD works with other missile defense elements, such as Aegis Ballistic Missile Defense, Patriot, and Command, Control, Battle Management, and Communications elements. THAAD was operated by Army soldiers during the test, even though the overall event was considered a combined developmental and operational test.

Although THAAD achieved an important step by intercepting a medium-range target during the test, other capabilities have still not been demonstrated. One significant example is the performance of the system using the radar's advanced software against a complex target. The software has been implemented in the operational radar and ground tested, but it will not be demonstrated in a flight test until fiscal year 2015. The THAAD program expects to have three batteries delivered to the Army before this test is complete. The program plans to deliver additional batteries while it continues conducting flight tests to verify the system's capabilities. If the program discovers issues during these later tests, resolving them and retrofitting the existing inventory could be very costly.

Testing officials declared THAAD operationally effective in 2012 after it successfully conducted its first operational flight test in October 2011.
We reported in April 2012 that THAAD successfully conducted this flight test and demonstrated its ability to perform, from planning through live operations, under operationally realistic conditions (within the constraints of test range safety). A February 2012 evaluation of this test and prior flight test data by the Director, Operational Test and Evaluation, concluded that the system is operationally effective, suitable, and survivable against the threats and environments tested. However, the evaluation also noted some suitability-related limitations and maintenance shortfalls, as well as the need for improvements in the system's deployability, its manpower and training, the ease of using its software, and its ability to connect and function with other systems. Army and BMDS test organizations provided data to the Army for its materiel release evaluation. Additional testing will be needed to further verify that these issues have been resolved.

As initial batteries are released to the Army, flight and ground testing of THAAD continues in order to further verify system performance and other ongoing modifications. For example, while the THAAD components used in the operational test were the final major hardware and software used in the first two batteries, additional software and hardware modifications are planned for subsequent batteries. As these planned hardware and software modifications are made, additional testing is required to verify that the new software and hardware work as intended.

Since the 2010 BAR baselines were reported, the THAAD program has entered initial production and established a new baseline. Its resource baselines report separate unit costs for its interceptors, fire control system, and launchers. During the fiscal year, MDA reduced the number of THAAD batteries to be procured from nine to six because of budget constraints. This reduction in THAAD batteries subsequently increased the unit costs to develop and produce launchers and fire controls. The cost for each unit increased between the 2010 and 2012 BARs primarily because the development costs are shared by fewer operational systems. Because of these reductions, the unit costs to develop and produce the fire control and the launcher increased by 6 percent and 55 percent, respectively, as seen in figure 15. MDA did not separately report these increases, even though they are above the 5 percent threshold that MDA established in its 2012 BAR, because they are solely attributed to the quantity change. A slower production rate, together with the reduced quantities, also increased the unit cost to produce the interceptor—although some portion of the interceptor cost increase is because of the slower production rate. MDA did not separately report these increases either, even though they are above the 5 percent threshold that MDA established in its 2012 BAR, because they are largely due to the rate change.

In fiscal year 2012, THAAD completed many of its previously delayed schedule goals. Following successful performance in an early fiscal year 2012 flight test, the program obtained a conditional materiel release from the Army in the second quarter of fiscal year 2012, after approximately a one-year delay. After addressing production issues with its interceptors, the program was also able to deliver its first two THAAD batteries to the Army for operational use during the fiscal year.
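The quantity-driven unit cost growth described above follows directly from spreading shared development costs over fewer units. The sketch below illustrates the mechanism with hypothetical dollar figures; only the reduction from nine to six batteries comes from the report, and the split between development and recurring production cost is assumed purely for illustration.

```python
# Sketch of why cutting quantities raises unit costs: shared development
# costs are divided among fewer units. Dollar figures are hypothetical;
# only the nine-to-six battery reduction comes from the report.

def unit_cost(shared_development_cost, recurring_cost_per_unit, quantity):
    # Each unit carries an equal share of development cost plus its own
    # recurring production cost.
    return shared_development_cost / quantity + recurring_cost_per_unit

DEV_COST_M = 900.0     # assumed shared development cost ($ millions)
RECURRING_M = 100.0    # assumed recurring cost per battery ($ millions)

at_nine = unit_cost(DEV_COST_M, RECURRING_M, 9)   # $200M per battery
at_six = unit_cost(DEV_COST_M, RECURRING_M, 6)    # $250M per battery
growth = (at_six - at_nine) / at_nine * 100.0     # +25 percent

print(f"9 batteries: ${at_nine:.0f}M each; 6 batteries: ${at_six:.0f}M "
      f"each ({growth:+.0f}%)")
# The larger the development share of a component's cost, the sharper the
# increase, which is consistent with the launcher (55 percent) rising far
# more than the fire control (6 percent).
```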
Prior to BAR baseline reporting, the first full battery was originally scheduled to be delivered in the fourth quarter of fiscal year 2010. It was delivered in the second quarter of fiscal year 2012, after a delay of approximately a year and a half. The second THAAD battery was also delivered in the second quarter of the fiscal year, following a 6-month delay. THAAD is expecting delays to deliveries of the third and fourth THAAD battery ground components, as seen in figure 17, driven by the longer-than-expected time needed to negotiate and issue the production contracts for these batteries. Additionally, the program successfully met its goal of intercepting a medium-range ballistic missile for the first time in the first quarter of fiscal year 2013, following a 6-month delay driven primarily by delays with a target delivery.

In addition to the contact named above, David Best, Assistant Director; Brent Burris; Ivy Hübler; Meredith Allen Kimmett; Wiktor Niewiadomski; Kenneth E. Patton; John H. Pendleton; Karen Richey; Ann F. Rivlin; Brian T. Smith; Steven Stern; Robert Swierczek; Brian Tittle; Hai V. Tran; and Alyssa Weir made key contributions to this report.

Since 2002, MDA has spent approximately $90 billion to provide protection from enemy ballistic missiles by developing battle management systems, sensors that identify incoming threats, and missiles to intercept them. MDA plans to spend about $8 billion per year through 2017. For nearly a decade, we have reported on MDA's progress and challenges in developing and fielding the Ballistic Missile Defense System. GAO is mandated by law to assess the extent to which MDA has achieved its acquisition goals and objectives, as reported through acquisition baselines. This report examines the agency's progress and remaining challenges in (1) selecting new programs in which to invest; (2) putting programs on a sound development path; (3) establishing baselines that support oversight; and (4) developing and deploying U.S. missile defense in Europe for defense of Europe and the United States. To do this, GAO examined MDA's acquisition reports, analyzed baselines reported over several years to discern progress, and interviewed a wide range of DOD and MDA officials.

Although the Missile Defense Agency (MDA) has made some progress, the new MDA Director faces challenges developing and deploying new systems to achieve increasingly integrated capabilities, as well as supporting and upgrading deployed systems, while providing decision makers in the Department of Defense (DOD) and Congress with key oversight information in an era of fiscal constraints.

Challenge: Improve Investment Decisions

Determining the most promising and cost-effective new missile defense systems to buy, considering technical feasibility and cost, remains a challenge for MDA. While MDA has conducted some analyses that consider alternatives in selecting which acquisitions to pursue, it has not conducted robust analyses of alternatives for two of its new programs. Because of its acquisition flexibilities, MDA is not required to do so. Robust analyses, however, could be particularly useful to DOD and congressional decision makers as they decide how to manage the portfolio of missile defense acquisitions. GAO has reported in the past that without analyses of alternatives, programs may not select the best solution for the warfighter, are at risk for cost increases, and can face schedule delays.
Challenge: Expand on Steps Taken to Place Investments on a Sound Footing

In the past year, MDA gained important knowledge by successfully conducting several important tests, including a test to show how well its systems will operate together. MDA has also taken steps to lower the acquisition risks of two newer programs by adding more development time. However, development issues discovered after three programs prematurely committed to production continue to disrupt both interceptor production and flight test schedules. In addition, two other programs plan to make premature commitments to production before testing confirms their designs work as intended. MDA is planning to fly targets for the first time in its first operational test using several systems, adding risk that key information may not be obtained in this major test.

Challenge: Ensure Program Baselines Support Oversight

While MDA has made substantial improvements to the clarity of its cost and schedule baselines since first reporting them in 2010, they are still not useful for decision makers to gauge progress. For example, the information they include is not sufficiently comprehensive because they do not include operation and support costs from the military services. Because these costs are not included, the life-cycle costs for some MDA programs could be significantly understated.

Challenge: Developing and Deploying U.S. Missile Defense in Europe

DOD declared the first major deployment of U.S. missile defense in Europe operational in December 2011, but MDA must still resolve some issues to provide the full capability and is facing delays to some systems planned in each of the next three major deployments. MDA has also struggled for years to develop the tools (the models and simulations) needed to credibly assess the operational performance of systems before they are deployed. It recently committed to a new approach to resolve this problem.

GAO makes four recommendations to DOD to ensure MDA (1) fully assesses alternatives before selecting investments, (2) takes steps to reduce the risk that unproven target missiles can disrupt key tests, (3) reports full program costs, and (4) stabilizes acquisition baselines. DOD concurred with two recommendations and partially concurred with two, stating that the decision to perform target risk reduction flight tests should be weighed against other programmatic factors and that its current forum for reporting MDA program costs should not include non-MDA funding. GAO continues to believe the recommendations are valid, as discussed in this report.
Transit ridership declined for the first time in 28 years, in part, due to the downturn in the economy and high gasoline prices, before beginning to grow again in 2009. A variety of factors can affect the demand for public transit services, including:

Population and demographics. According to the U.S. Census Bureau, from 2000 through 2009, the U.S. population grew by an estimated 9 percent, reaching more than 300 million. Longer life spans, a stable fertility rate, and immigration are among the factors contributing to this growth. The population aged 65 and over is estimated to have reached 40 million this year, and this number is expected to continue growing as "baby boomers" age. During the past decade, the total fertility rate has remained stable, while the foreign-born population has increased due to immigration. In addition, in the past century, metropolitan areas, including central cities and suburbs, have experienced significant growth in population, with city suburbs growing more rapidly than central cities. In 2009, an estimated 84 percent of the U.S. population lived in metropolitan areas, as compared with only 69 percent in 1970. Increases in the U.S. population, including increases in the population aged 65 and over, can increase the need for transportation options, including demand for public transit.

Employment and the economy. Similarly, employment rates and the state of the economy can affect the travel choices of Americans and their use of public transit. During the past decade, there were two economic recessions, beginning in 2001 and 2007, respectively. The 2007 recession was accompanied by high levels of unemployment and subsequent decreases in transit ridership. For example, according to the U.S. Bureau of Labor Statistics, during the 2007 recession, unemployment rose from 5 percent in January 2008 to 10.1 percent in October 2009, and had only edged down slightly, to 9.6 percent, by September 2010. This increase in unemployment has been accompanied by a decrease in transit ridership, with ridership decreasing by about 4 percent in 2009 and about 3 percent in the first quarter of 2010.

Gasoline prices. The public's reaction to increases in gasoline prices can also affect the demand for public transit. During the last decade, gasoline prices increased dramatically before falling again. After the average price of gasoline peaked at more than $4 per gallon in June and July of 2008, the price began to drop rapidly. The average price of gasoline for 2009 was $2.35 per gallon, as compared with $3.27 for 2008. Following the increase in gasoline prices in 2008, transit ridership reached record highs before eventually declining in 2009.

Federal, state, and local investment in transit has grown over the years, resulting in the expansion of the nation's public transit systems. FTA works in partnership with states and local grant recipients, such as transit agencies, to administer federal transit programs and to provide financial, technical, and other assistance. Transit agencies also rely on a variety of other funding sources to help provide service, including assistance from state and local entities and other sources, such as passenger fares. State and local governments are ultimately responsible for executing most federal transit programs by matching and distributing federal funding and by planning, selecting, and supervising infrastructure projects in accordance with federal requirements.
In addition, in some cases, financial assistance programs administered by the Federal Highway Administration (FHWA), or jointly administered by FHWA and FTA, can also be used to support transit agencies. For example, the Congestion Mitigation and Air Quality Improvement Program (CMAQ), which is jointly administered by FHWA and FTA, provides assistance to states for eligible transportation projects or programs that improve air quality and reduce congestion. States also have flexibility to transfer a limited amount of funds from other highway programs to assist transit programs, as in the case of CMAQ funds. The funding for these programs is authorized by SAFETEA-LU, which was enacted in August 2005 and expired in September 2009. While it has yet to be reauthorized, SAFETEA-LU has been extended several times, and the most recent extension will expire on December 31, 2010. Table 1 summarizes select federal transit and transit-related grant programs.

From 1998 through 2008, transit ridership for agencies offering heavy rail, light rail, and bus services grew more than 28 percent. During the same period, transit service grew approximately 20 percent. Transit ridership increased overall by over 28 percent from 1998 through 2008, as measured by passenger miles traveled (PMT). By mode, light rail ridership grew at a faster rate than heavy rail or bus. The high ridership growth for light rail may reflect the increase in the number of light rail systems in operation during the time period. As shown in figure 1, light rail ridership increased by nearly 87 percent (from 1.12 billion to 2.08 billion passenger miles), heavy rail ridership increased by about 37 percent (from 12.3 billion to 16.8 billion passenger miles), and bus ridership increased by about 19 percent (from 17.9 billion to 21.2 billion passenger miles).

According to officials at the transit agencies we contacted, a number of factors contributed to ridership increases from 1998 through 2008, including population increases, periods of growth in employment, and increases in gasoline and parking prices. In addition, some agency officials reported taking actions they believe attracted new riders, such as expanding and enhancing their systems, adding new service, forming local partnerships, and launching marketing campaigns to increase ridership. For example, the Ann Arbor Transportation Authority, which provides bus service to Ann Arbor, Michigan, and surrounding areas, entered into partnerships with employers, including the University of Michigan, to subsidize students' and employees' transit costs. According to officials from the Ann Arbor Transportation Authority and the University of Michigan, and representatives of the business community, these partnerships helped to generate significant ridership growth in the city of Ann Arbor.

The availability of transit service also increased steadily for heavy rail, light rail, and bus agencies, with vehicle revenue miles (VRM) increasing by approximately 20 percent from 1998 through 2008. Consistent with trends in ridership by mode, the supply of light rail service grew faster than heavy rail or bus services, which may reflect, in part, the increase in the number of light rail systems during the time period. As shown in figure 2, VRMs increased by 104 percent for light rail (from 42 million to 86 million miles), as compared with about 19 percent for heavy rail (from 549 million to 655 million miles) and 18 percent for agencies providing bus services (from 1.652 billion to 1.956 billion miles).
The relationship between transit ridership and service varied by mode. For example, heavy rail experienced the greatest disparity between ridership growth and the supply of services from 1998 through 2008 compared with light rail or bus. Ridership outpaced the provision of heavy rail service by about 18 percentage points (specifically, ridership for heavy rail increased by about 37 percent while the provision of heavy rail service increased by about 19 percent). For agencies offering bus services, ridership generally kept pace with the supply of services during the same period (19 percent as compared with 18 percent growth). Transit agency officials with whom we spoke noted that bus systems can typically respond more quickly to increases in ridership demand, while heavy rail agencies face more challenges due to the capital-intensive nature of their systems and the financial investment required to increase heavy rail service. The availability of light rail service, however, actually grew faster than ridership demand, partly because light rail systems were expanding during this time period. Specifically, light rail service grew by over 100 percent while ridership grew by about 87 percent from 1998 through 2008.

For passengers, the disparity between ridership growth and service growth points to several potential effects. Passengers using transit systems with enough capacity to accommodate increases in ridership may have experienced a better utilized system. However, they may also have experienced a system that, while better utilized, became more crowded. Passengers using transit systems without the capacity to accommodate increases in ridership may have experienced an overcrowded system that left riders on the platform or curb during periods of high demand. According to officials at the transit agencies we contacted, agencies experienced varying degrees of success in responding to ridership growth from 1998 through 2008.

While providing additional service, transit agencies saw their costs, including operating and capital expenses, increase from 1998 through 2008, as did their revenues. However, while revenues increased overall, the share of funding sources changed; the share of federal funding remained steady, while increases in state and local funding shares essentially offset declines in the share of funding from other sources, such as passenger fares.

Increases in ridership and service from 1998 through 2008 were accompanied by increases in the overall costs of providing transit service. Total costs, which include operating and capital expenses, for transit agencies offering heavy rail, light rail, and bus services increased by about 46 percent. While both capital and operating expenses grew, capital expenses grew at a faster rate than operating expenses during this period. Specifically, capital expenses grew by about 68 percent while operating expenses increased by over 36 percent from 1998 to 2008. The increase in capital expenses reflects, in part, the financial investment in heavy rail and light rail systems. The increase in operating costs was most noticeable for light rail systems, likely due, in part, to increases in light rail service over the time period studied. Similarly, transit agency revenues increased by more than 48 percent from 1998 through 2008. Revenue sources include federal, state, local, and other funding sources, such as passenger fares. While overall transit revenues increased, the share of funding sources changed.
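The growth comparisons above can be reproduced from the passenger-mile and vehicle-revenue-mile figures cited earlier. The short sketch below recomputes each mode's growth rate and the percentage-point gap between ridership and service growth; because the report's underlying figures are rounded, the recomputed rates differ slightly from the quoted percentages.

```python
# Recomputing the ridership (PMT) and service (VRM) growth rates cited
# earlier, along with the percentage-point gap between them. Figures are
# the rounded 1998 and 2008 values quoted in this report.

pmt_billions = {  # passenger miles traveled: (1998, 2008)
    "light rail": (1.12, 2.08),
    "heavy rail": (12.3, 16.8),
    "bus": (17.9, 21.2),
}
vrm_millions = {  # vehicle revenue miles: (1998, 2008)
    "light rail": (42.0, 86.0),
    "heavy rail": (549.0, 655.0),
    "bus": (1652.0, 1956.0),
}

def growth_percent(old, new):
    return (new - old) / old * 100.0

for mode in pmt_billions:
    ridership = growth_percent(*pmt_billions[mode])
    service = growth_percent(*vrm_millions[mode])
    # The gap is a difference of two percentages, so it is measured in
    # percentage points (about 17 points for heavy rail), not percent.
    gap = ridership - service
    print(f"{mode:10s} ridership {ridership:+6.1f}%  "
          f"service {service:+6.1f}%  gap {gap:+5.1f} points")
```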
As shown in figure 3, as a percentage of total revenues, the share of federal funding remained steady at about 17 percent. The shares of state and local funding increased (from about 18 to 22 percent and from 32 to 35 percent, respectively), while the share of funds from other sources, such as passenger fares, decreased (from 34 percent to 26 percent). Increases in the share of state and local funding essentially offset declines in the share of funding from other nonfederal funding sources, such as passenger fares, from 1998 through 2008. For example, those transit systems that had to add service to accommodate growing ridership during this period, and to finance the associated costs, likely used state and local funding to supplement decreases in other funding sources, including passenger fares. Since fares collected from passengers typically do not cover the full cost of their transit trips, these agencies essentially experienced a widening gap between passenger fare revenue and costs as ridership increased. This gap can significantly limit the ability of transit agencies to increase transit service in response to rising demand. In almost all cases, expanding transit service would require securing additional funding to bridge this gap.

Upon closer examination of the components of transit funding sources, the shares of revenue sources for operating and capital funding differ slightly from the shares for total revenues mentioned previously. For example:

Operating funding. Fare revenues were the largest source of operating funding in both 1998 and 2008; however, as shown in figure 4, the share of fare revenues decreased considerably as a percentage of operating funding during this time period (from about 38 percent to 31 percent). At the same time, as a percentage of operating funding, local government contributions for operating expenses remained relatively steady (from about 29 percent to 30 percent), contributions of federal and state funding increased (from 4 to 7 percent and from 20 to 26 percent, respectively), and other funding sources, such as subsidies from other sectors of operations, decreased (from 9 percent to 6 percent). According to officials at a heavy rail agency with whom we spoke, because public transit riders do not pay for the full cost of their rides through passenger fares and revenues have not kept pace with operating costs, increased ridership has strained their transit system's operating budget.

Capital funding. In 1998, the federal government was the largest source of capital investment in transit, but by 2008 this was no longer the case; local government had replaced the federal government as the largest source. As shown in figure 5, from 1998 through 2008, as a percentage of capital funding, the contribution of the federal government fell (from about 50 percent to 40 percent), while the contributions of state governments remained relatively stable (at about 12 percent) and local government funding increased (from 39 percent to 47 percent).

From 1998 through 2008, transit agencies faced challenges when addressing increased ridership demand. More specifically, agencies faced capacity constraints related to limitations of their vehicles (e.g., too few rail cars and buses) and system infrastructure (e.g., platforms that were too short to accommodate longer trains). In particular, several of the heavy rail, light rail, and bus agencies we interviewed experienced capacity constraints within existing vehicles, as well as shortages of rail cars and buses.
For example, an official with the Ann Arbor Transportation Authority said the agency did not always have the bus capacity to accommodate increased demand, sometimes resulting in overcrowding on buses. In San Francisco, the heavy rail system's serviceable rail cars were in such high demand that the agency could not always take them out of service long enough for sufficient maintenance, which officials said led to problems with vehicle reliability and a shortage of vehicles. TriMet, which provides light rail services to the metropolitan area of Portland, Oregon, was sometimes unable to meet demand for its services due to vehicle shortages, such as in the period before a new rail line opened and new rail cars became available. Agency officials said that long lead times for vehicle procurements limited their ability to respond to growing demand in a timely manner, but that they eventually were able to procure additional rail cars to satisfy passenger demand on the new line. Rail car procurements generally take years to complete. We have reported that time frames of 3 to 4 years are considered quick for complete rail car procurements, and many take much longer.

In addition to vehicle capacity constraints, transit agencies also faced infrastructure-related capacity challenges when addressing increased ridership demand from 1998 through 2008. Most of the agencies that reported infrastructure-related challenges from 1998 through 2008 provided heavy or light rail services. Infrastructure constraints, such as those related to stations, tracks, and other facilities, posed challenges to transit agencies. For example, from 1998 through 2008:

Chicago's heavy rail system faced challenges related to its platform capacity. Due to the platform limitations of certain heavy rail stations, Chicago Transit Authority officials could only operate six-car trains where eight-car trains would have reduced congestion. These stations' platforms were not long enough to accommodate passengers loading and unloading from eight-car trains. As a result of capacity constraints at these stations, the agency could not always meet passenger demand or allow all passengers to board.

Los Angeles County's heavy rail system ran out of parking spaces immediately after opening parking lots at the northern end of one of its rail lines. Difficulty securing additional funds for parking structures has limited the agency's ability to meet parking demand.

Although Washington, D.C.'s, heavy rail stations were designed to accommodate eight-car trains, associated power systems initially were only equipped to handle four- and six-car trains. Therefore, upgrading the power system components so they could accommodate eight-car trains was a significant challenge that agency officials addressed during the 10-year period in which they worked to expand the system's overall capacity.

Table 2 summarizes these and other examples of infrastructure-related challenges that heavy rail and light rail agencies faced when addressing increased passenger demand from 1998 through 2008. During this time period, agencies also faced challenges related to maintaining aging infrastructure. Heavy rail agencies in particular have faced such challenges because their aging assets have increasingly needed capital reinvestment, even as ridership has grown.
For example, officials from the Washington Metropolitan Area Transit Authority said the agency needed to shift its focus from new construction to maintenance during this time period, yet securing funds to maintain existing assets proved more difficult than securing funds for new projects. In addition, balancing scheduled maintenance with expanding hours of service also proved challenging. Light rail officials, such as those at Portland's TriMet, said they recognize that managing aging infrastructure will take significantly more effort in the future. Currently, the oldest section of TriMet's system is only 24 years old, which is relatively new in comparison with some of the nation's oldest systems; however, agency officials have already begun capacity planning in preparation for the challenges to come during the next 20 years.

Many of the transit agencies we interviewed faced budget and funding constraints. In some cases, these constraints limited their ability to increase services to accommodate additional riders. For example, from 1998 through 2008:

Balancing a constrained operating budget with increased demand for services posed a challenge for Chicago's heavy rail system. During this time period, the agency's funding sources—including state capital bonds and general revenues—did not grow enough to fully cover the agency's maintenance needs and personnel costs, according to transit officials. Because public transit riders typically do not pay for the full cost of their rides, increasing ridership further stressed the Chicago system's operating budget, according to agency officials. In response, agency officials said they deferred maintenance, which in turn affected the system's ability to meet demand due to service delays and other maintenance-related problems.

Merced County Transit, which provides bus services to Merced County in California's Central Valley, tried to improve service frequencies so that buses could run every 15 minutes instead of every hour. However, agency officials found it very difficult to improve their services, and they struggled to retain local transit funds amidst competing funding needs elsewhere in the county. Agency officials ultimately compromised on their goal of service every 15 minutes and instead increased service to every 30 minutes. Since 2008, available funds have decreased as sales tax revenues and real estate values have plunged, causing transit officials to reduce or eliminate routes and reduce staff positions.

Dallas Area Rapid Transit, which provides light rail services to the greater Dallas, Texas, area, is funded by a 1-cent local sales tax. From 2001 through 2004, these sales tax revenues declined substantially, according to transit agency officials, requiring the agency to reduce its capital expansion program, use reserve funds to cover budget shortfalls, and make operational adjustments.

As a result of transit agencies' challenges meeting ridership demand from 1998 through 2008, some transit agencies faced the added challenge of customer dissatisfaction. For example, as a result of increased crowding on trains, customers developed less favorable opinions of Chicago's heavy rail system and customer complaints increased, according to transit agency officials. In Ann Arbor, Michigan, transit riders were not always able to board buses during peak ridership periods, and ridership studies showed that people continue to want more frequent service on some routes.
To meet increased ridership demand from 1998 through 2008, transit agencies took various steps to increase the capacity and efficiency of their existing systems. These actions included making service adjustments and new system investments, in addition to maintaining their existing systems. For example, from 1998 through 2008:

Service adjustments, such as extending service hours and adjusting routes, helped agencies make better use of available resources and target areas of high demand. For example, the light rail agency in Sacramento, California, extended service hours during a period of high demand in 2008 when an interstate highway in the area was under construction. During this time period, which coincided with an increase in gas prices, there was standing room only on the line that serviced that particular area and some riders could not get onto a train. In response, transit officials ran longer trains and extended service hours, thereby creating additional capacity and accommodating the increase in demand.

New system investments, such as expanding vehicle fleets, extending platforms, building new stations, and adding parking, allowed agencies to accommodate more riders and improve their operations and customer service. For example, in response to challenges posed by limited space at maintenance facilities, San Francisco's heavy rail agency expanded its maintenance facilities, which allowed the transit agency to increase its maintenance operations and, ultimately, increase the availability of serviceable rail cars.

Maintaining existing systems, including vehicles and infrastructure, allowed agencies to accommodate more riders, increase the frequency of their service, and come into compliance with laws and regulations, such as the Americans with Disabilities Act of 1990, as amended. For example, transit officials at the MTA in New York City, New York, said the agency improved the heavy rail system's signaling in order to sustain current levels of service and also enable the agency to increase the frequency of service. Officials explained that the improved signaling system will increase capacity by allowing trains to be spaced more closely.

Table 3 summarizes other examples of actions that heavy rail, light rail, and bus agencies took to address growing ridership demand from 1998 through 2008.

Transit agencies experienced varying degrees of success in meeting increased ridership demand from 1998 through 2008. Most heavy rail agency officials we spoke with said they generally met growing demand, and one reported partial success in meeting demand. For example, transit agency officials in Washington, D.C., reported that although heavy rail services generally met rising demand, the agency faced challenges accommodating high demand while working to expand its system and maintain its aging assets. Community and business groups added that they would like to see the city's heavy rail capacity increased to help relieve congestion in the system and increase the reliability of service. Light rail agency officials with whom we spoke were divided about the extent to which their agencies successfully met ridership demand from 1998 through 2008. Several said they were generally successful in meeting growing demand. However, two said they either barely or inadequately met demand. For example, Sacramento's light rail service provider reported that the agency's service area did not keep up with the area's growing population and housing boom from 1998 through 2008.
Officials from a local agency and community group said the transit agency met demand within the city of Sacramento fairly well, and the system had enough capacity to meet those riders' needs. However, they added that as the area developed housing and employment centers outside the downtown area, the agency was not always able to meet the needs of commuters from outlying or newer-growth areas. Nor was the agency always able to meet the needs of potential riders who chose to drive rather than use public transit due to inconvenient transfers or a shortage of transit services, according to the community group official.

All five bus agencies we interviewed had limited success in meeting ridership demand. Some agencies could not add the services needed to accommodate increasing demand, some had to turn away riders, and others reported that their ability to expand to meet the needs of emerging markets was limited. For example, a transit official from Ann Arbor's bus agency said the agency was generally successful in meeting demand within the city of Ann Arbor, but was not as successful in surrounding communities due to funding constraints. Representatives of a local community group and intergovernmental agency added that the agency turned away riders during periods of high demand and that service on many routes was too infrequent. However, local officials, as well as community and business groups, acknowledged the efforts the agency has made to respond to increased ridership demand amidst funding and resource challenges.

Estimates of future population growth and other demographic trends point to potential increases in future ridership demand. According to U.S. Census Bureau projections, the U.S. population will increase by 20.4 percent from 2010 to 2030. Demographic changes point to increases in future demand as well. Trends toward redevelopment and increased densities in the urban core, as well as continued growth of housing and employment centers near outlying suburban transit hubs, are expected to contribute to future increases in ridership demand. Additionally, increased focus on transit-oriented development around transit stations in both urban and suburban areas may also increase future ridership demand. For example, the regional planning agency in the San Francisco Bay Area anticipates a substantial amount of continued growth and redevelopment of San Francisco's urban core. Transit agency officials also noted that while San Francisco used to be the principal destination for employers, areas outside of the city, such as Walnut Creek, Dublin, Pleasanton, and San Jose, are increasingly attracting employment centers, which has increased traffic on reverse commute routes. Furthermore, the transit agency is collaborating with others to encourage transit-oriented developments near transit stations. Property values near transit stations have held steady as compared with declines in property values in other areas. For example, according to transit agency officials, to date, property values in the city of San Francisco have been barely affected by the housing downturn, whereas areas farther out with less access to transit have been more heavily affected, indicating that people are starting to see the value of living near public transit. Increases in the transportation-disadvantaged populations—those who must rely on public transit for their travel—may also increase future ridership demand. For example, according to the U.S.
Census Bureau, in 2030, baby boomers aged 65 and older will comprise nearly 20 percent of all U.S. residents. Transit officials we spoke with said that individuals may become increasingly transit-dependent as they age. Transit officials in Ithaca, New York, anticipate a peak in their senior population starting around 2020 and expect that as people retire, they may stop driving personal vehicles, which may contribute to increases in transit ridership. Also, according to transit officials in Portland, Oregon, the aging demographic will become more prominent as the baby boomers age "in place" (i.e., remain in the Portland metropolitan area). Over time, officials said, accommodating the aging population on bus and light rail services and providing transit services that are accessible, comfortable, and safe will be challenging but critical. Officials added that accommodating the expected increase in seniors is an especially important consideration for transit agencies because complementary paratransit service, the alternative for individuals unable to use fixed-route transit service, is more expensive to provide per rider. We previously reported that it is difficult for transit agencies to balance providing complementary paratransit service with the increased cost of accommodating a growing ridership. Additionally, increased densities in urban areas may increase transit-dependent populations, as transit is a mode of necessity for many city residents. In Dallas, Texas, and Frederick, Maryland, transit agency officials also noted increases in the low-income population, who rely upon transit to get to their jobs, primarily within the service sector, and anticipate that this will increase transit ridership demand in these areas.

Transit agency officials and others with whom we spoke also expect discretionary riders to contribute to future increases in ridership demand. Specifically, they expect that a younger demographic will migrate into cities and increasingly use transit, consistent with their quality-of-life preferences and environmental concerns. For example, Ann Arbor business community representatives told us that an increasingly younger workforce commutes from nearby communities where housing is cheaper and prefers to take transit. According to transit agency officials in Portland, Oregon, there is a growing younger population with certain lifestyle expectations, including the ability to walk, bike, or take transit to meet most of their transportation needs.

Although transit agency officials anticipate future ridership increases, the extent of these increases is sometimes difficult to determine. We previously reported that some metropolitan planning organizations face challenges in travel demand forecasting, including a lack of the technical capacity and data necessary to conduct the complex transportation modeling required to meet their planning needs. Some transit agency officials with whom we spoke also noted that a lack of technical expertise and resources needed to accurately forecast future ridership growth is a challenge. According to FTA officials, difficulties transit agencies may have in assessing the demand for existing or new services could affect their ability to meet future demand. Specifically, if future ridership demand is not accurately projected, transit agencies may not make the best investment of their resources.
Transit agency officials expressed concern about their agencies' abilities to meet future increases in ridership demand for two principal reasons: increased costs and various fiscal uncertainties. Future costs for transit agencies will increase because agencies must continue to support system expansions and add capacity to accommodate increases in ridership demand, as well as address additional expenses associated with maintaining a state of good repair for aging infrastructure. According to FTA, aging capital assets drive increasing maintenance costs and limit the ability to expand system capacity at a time of high demand. FTA has also reported that roughly one-third (29 percent) of all transit assets are in poor or marginal condition, implying that these assets are near or have already exceeded their expected useful life and need significant capital reinvestment for rehabilitation or replacement. Based on FTA's most recent estimates, $77.7 billion is needed to bring all the nation's transit systems into a state of good repair. In addition, an annual average of $14.4 billion would be required to maintain the systems.

Officials from heavy rail and light rail agencies with whom we spoke in particular said they anticipate facing increasingly difficult challenges related to maintaining a state of good repair and operating their systems as they continue to age. For example, in Chicago, increasing ridership on the heavy rail transit system placed a significant amount of stress on the agency's operating budget. As a result, the agency deferred maintenance, which in turn impacted its ability to meet demand due to service delays and other maintenance-related problems on the aging system. Since 2008, challenges related to the agency's operating budget have persisted, and, starting in February 2010, the agency had to implement $100 million in service cuts to help balance its budget. Also, officials from the heavy rail agency in Washington, D.C., said the challenge of maintaining and repairing their aging system increased from 1998 through 2008, and they expect this trend to continue. Washington, D.C., transit officials said that before 1998 the agency focused on constructing and expanding a new system. In 1998, the system's 103 miles of track had not yet been completely built, and the oldest part of the system was only 22 years old. By 2008, however, the oldest portion of the system was 32 years old and officials said they needed to devote significant resources to maintaining the system.

As compared with the majority of the large heavy rail systems, the infrastructure of light rail systems is relatively new. For example, the oldest section of Portland, Oregon's, light rail system is 24 years old, as compared with the heavy rail systems in Chicago and New York, which are over 100 years old. However, although officials at Portland's transit agency said they have a robust capital maintenance program, they also said that without an influx of American Recovery and Reinvestment Act of 2009 (Recovery Act) funding in 2009, which the agency specifically targeted to help reduce a backlog of systems and vehicle maintenance, the transit agency would have fallen further behind in its maintenance needs. For NJ Transit, the light rail extension of the Newark line was financially challenging because of the line's aging infrastructure.
In order to extend the line, the agency had to upgrade the entire track and signaling system while incurring other maintenance-related expenses, such as rehabilitating transit stations and vehicles and keeping the system as a whole in a general state of good repair. Further, transit agency officials anticipated that increases in the costs of providing paratransit services, necessitated by projected demographic changes such as increases in the transit-dependent population, would be a challenge looking ahead.

Due to the operating deficits they currently face, state and local governments may not be able to continue their past level of support, which may ultimately limit transit agencies' ability to meet future increases in ridership demand. Officials from the agencies with whom we spoke said that since 2008, the economic downturn has put a strain on all sources of funding for transit agencies, particularly state and local sources. We have reported that states and localities face near-term budget and long-term fiscal challenges that will grow over time. States' revenue shortfalls have been cushioned by the temporary infusion of Recovery Act funds. For example, we found that officials in local governments used Recovery Act funds to maintain services, retain staff positions, or begin infrastructure and public works projects that otherwise would have been delayed or canceled. However, local government officials also reported they experienced revenue declines and budget gaps even after incorporating Recovery Act funds in their budgets. Officials at some localities reported that while these funds have helped to preserve services, they still faced budget deficits for the remainder of fiscal year 2010 and the next fiscal year.

We also previously reported that state and local governments face increasing fiscal challenges in the next 50 years and that these pressures have implications for federal programs. For example, estimates of the costs to repair, replace, or upgrade aging infrastructure so that it can safely, efficiently, and reliably meet current demands, as well as expand capacity to meet increasing demands, top hundreds of billions of dollars. The nation's transit infrastructure is owned, funded, and operated by all levels of government. In this environment, all levels of government will compete for resources to meet the demand for infrastructure improvements, which may exceed what the nation can afford.

As previously discussed, from 1998 through 2008, while overall transit revenues (including operating and capital funding) increased, increases in the share of state and local government funding offset decreases in the share of other nonfederal funding sources, such as passenger fares. In addition, while in 1998 the federal government was the largest source of capital investment in transit, by 2008 local government had replaced the federal government as the largest source. However, as state and local governments are currently facing budget shortfalls, transit agency officials raised concerns that fiscal uncertainties may limit their agencies' ability to meet future increases in ridership demand. For example, the state of California eliminated all state transit development assistance for state fiscal years 2009 and 2010 because of the state's fiscal situation, and this assistance has only been partially restored for 2011.
Officials from Merced County Transit in California said the bus agency's biggest challenges will be insufficient operating funds due to the elimination of state transit development assistance and a decrease in local sales tax revenue, which will not allow for any bus service expansions. Similarly, light rail officials from Sacramento Regional Transit, which also operates in California, said the agency is struggling to survive the economic downturn given a major cut in state transit assistance (which was approximately $15 million to $16 million each year and nearly 10 percent of its total operating budget), declining local sales tax revenues, and widespread state employee furloughs, which have impacted farebox revenues. Additionally, according to transit agency officials we spoke with, the uncertainty of federal funding levels with the pending surface transportation reauthorization, combined with anticipated decreases in state and local funding, poses challenges for long-term planning.

We and others have reported on ways to more effectively deliver federal surface transportation programs that could help transit agencies address growing ridership demand amid fiscal uncertainties. While officials from all 15 transit agencies we spoke with said federal grant programs are critical to maintaining and operating their transit systems, including addressing growing ridership demand, most agency officials also said that additional federal funding would help their agencies accommodate future increases in ridership. However, the nation faces mounting fiscal difficulties, and although demand on transit systems is expected to grow, increased federal financial support is not something transit agencies can count on. Therefore, the challenge is to focus the resources that are available to maximize the impact on transit agencies' services. We and others have made recommendations to Congress and others about how to restructure federal programs to better assist transit agencies and the federal government in focusing scarce resources and addressing future ridership demand, including: focusing resources on maintaining the nation's rail and bus systems in a state of good repair; streamlining the delivery of federal grant programs and projects; and incorporating performance accountability into federal programs.

A critical component of addressing future ridership demand is the need for the federal government and transit agencies to focus on transit systems' state of good repair. When a system is not maintained in a state of good repair and needed maintenance is deferred, it is difficult to address future ridership demand because the system is not operating at optimal levels. This could ultimately lead to a loss of riders due to resulting problems, such as service delays and safety issues. According to FTA, bringing the nation's transit system to a state of good repair, while at the same time planning for and implementing needed service expansions to accommodate demand, will be a significant challenge. Despite ongoing investment, many of the nation's vehicles and much of its infrastructure are deteriorating. For transit riders, this deterioration eventually leads to declining service reliability. For transit operators, aging capital assets drive increasing maintenance costs and limit the ability to expand system capacity at a time of high demand.
The President’s fiscal year 2011 budget request included, for FTA, a new State of Good Repair initiative for bus and rail transit agencies to bring infrastructure into a state of good repair. The proposed initiative combines two existing programs, namely the Fixed Guideway Modernization Program (49 U.S.C. § 5309(b)(2)) and the Bus and Bus Facilities Program (49 U.S.C. §§ 5309(b)(3), 5318), and would provide $2.9 billion for fiscal year 2011, an 8 percent increase over the combined programs’ fiscal year 2010 level of funding. The President has submitted his budget request to Congress. In addition, the Committee on Transportation and Infrastructure of the U.S. House of Representatives issued A Blueprint for Investment and Reform (Blueprint) in 2009, which is a summary of a proposal for the pending reauthorization of the surface transportation legislation. It focuses the majority of transit funding into four core categories, one of which is to bring urban and rural public transit systems to a state of good repair. Officials from the majority of transit agencies with whom we spoke emphasized the importance of maintaining a state of good repair in order to meet future increases in ridership demand. However, agency officials pointed out it is easier to procure additional federal funding to support new transit capital projects than to obtain funding to help maintain their existing vehicles and infrastructure. Transit agency officials explained that their agencies rely on annual federal transit formula funds to address ongoing needs, but additional federal funds available beyond those yearly allocations are focused on new capital investments as opposed to maintaining a state of good repair. Further, when asked how federal grants could be improved to better help transit agencies address ridership demand, agency officials reported that flexibility in how funding could be used, either for capital or operating purposes based on an agency’s needs, would be particularly helpful for efforts to maintain a state of good repair and other core capacity issues. Transit agency officials also indicated that if their systems’ state of good repair needs are not met and infrastructure maintenance is deferred, they will not be able to efficiently and effectively address future ridership demand. Further, the National Surface Transportation Policy and Revenue Study Commission, which was required by SAFETEA-LU to study and identify key areas for federal focus for the nation’s surface transportation system, concluded that the area of highest priority—and the foundation for all of the report’s other recommendations—was to bring the nation’s infrastructure, including transit assets, into a state of good repair. Specifically, the Commission stated that states, local governments, and other entities must develop, fund, and implement a program of asset maintenance and support over the useful life of the asset in order to assure the maximum effectiveness of federal capital support. According to FTA, currently only a few transit agencies actively maintain transit asset inventories for capital planning purposes and there is no federal reporting requirement for transit assets except for vehicles. However, FTA officials added that while some data on fixed infrastructure are collected in the NTD, they are limited in scope. 
FTA also noted that a comprehensive and effective asset management program could help transit agencies establish organizational state of good repair objectives, assess the magnitude of the issue, better coordinate agency planning and decision-making functions, and ultimately prioritize their most critical needs, especially with scarce funds for state of good repair and deferred maintenance backlogs. Additionally, the Senate report accompanying the fiscal year 2010 appropriations bill for the Department of Transportation (DOT) directed FTA to take a leadership role in improving the use of asset management practices among transit agencies. According to FTA officials, in response to this congressional direction, FTA is undertaking a new initiative to provide technical assistance and develop new data resources to help transit agencies improve their asset management practices. FTA officials added that this initiative is intended to promote a better understanding of how the industry can achieve state of good repair goals.

We and others have recommended that the current federal grant approval process for large transit capital projects be simplified and streamlined to speed up project delivery and reduce costs. This includes streamlining the delivery of federal transportation grant programs, such as the New Starts project planning and development approval process, and the required environmental reviews. The New Starts program is the primary federal source for major transit capital investments for construction of new fixed guideway systems or extensions to existing systems. Transit agency officials indicated that New Starts funding helped their agencies address increases in ridership demand. However, officials from nearly half of the heavy and light rail transit agencies with whom we spoke also said it would be helpful if the federal grant process were more streamlined and efficient. Agency officials explained that the development and approval process for large transit capital projects can be lengthy. Further, the process can become more difficult as agencies are concurrently trying to use the finite resources they have to accommodate growing demand.

In prior work, we recommended that DOT assess streamlining options, such as combining project phases, for the New Starts program. We also recommended that DOT seek legislative changes, if necessary, to implement options to expedite the New Starts process. DOT agreed with our recommendations, noting that the options identified are consistent with those FTA has been discussing with transit stakeholders and congressional staff. However, while each option could help expedite the process, each has advantages and disadvantages to consider. For example, each would likely require certain trade-offs, namely, potentially reducing the level of rigor in the evaluation process in exchange for a more streamlined process. As we have previously reported, the length of the New Starts process is due, at least in part, to the rigorous and systematic evaluation and rating process required by law. The rigor of the program is intended to help FTA hold transit agencies accountable for results, maximize the benefits of each dollar invested, and ensure that the federal obligation to the project is not affected by cost and schedule overruns. Our previous work has also identified delays in the New Starts project development process due to FTA's project management oversight.
According to some project sponsors, in some cases, addressing additional oversight requirements has increased the time and resources required of project sponsors, which also increases total project costs. However, finding the right balance between protecting federal investments through project management oversight and advancing large transit capital projects through the project development process is difficult. In addition, transit agencies currently work within the statutory and regulatory constraints of the New Starts program, and streamlining can only be done within these confines or through legislative changes.

The Committee on Transportation and Infrastructure of the U.S. House of Representatives' Blueprint also proposes that the New Starts program be restructured to speed project delivery, ensure all benefits of the proposed projects are fully evaluated, and provide a level playing field for local decision making. In addition, to reduce unnecessary delays in the delivery of transit projects, it proposes that an office within FTA be created to improve the process by eliminating duplication in documentation and procedures and expediting the development of projects through the environmental review process, design, and construction. Furthermore, the National Surface Transportation Policy and Revenue Study Commission notes that overall project delivery times and costs of major transportation projects could be reduced by shortening the time to complete environmental reviews, in conjunction with other measures that address conventional strategies for implementing projects once they clear environmental review. Due to the rapid increase in construction costs in recent years, delays in completing projects have become very expensive, according to the Commission. The Commission identified two sources of delay that should be addressed in the short term: redundancies in the National Environmental Policy Act of 1969 (NEPA) process and delays associated with obtaining permit approvals. We have previously reported on the time taken to conduct environmental reviews of highway projects and found that stakeholders identified various aspects of the environmental review process that they believed added more time than was necessary. For example, some stakeholders said that federal agencies lacked sufficient staff to handle their workloads and that meeting certain statutory criteria is too time-consuming.

Another way to focus scarce resources while addressing the challenges of future ridership demand could be to incorporate greater performance and accountability into federal programs to best achieve intended outcomes. Most federal surface transportation programs lack a link between funding and the performance of a transit system or grantee. We have previously reported that federal transit grant programs—as well as highway and safety grant programs—distribute funds through formulas that are typically not linked to performance and, in many cases, have only an indirect relationship to need. Furthermore, these programs generally are not linked to the federal objectives they are intended to address, in part due to the wide discretion granted to states and localities in using most federal funds.
To address these findings, we recommended, among other things, that Congress consider re-examining and refocusing surface transportation programs so that they have goals with direct links to an identified federal interest and role and so that grantees are more accountable through performance-based links between funding and program outcomes. In some cases, the federal government and state and local grantees may have different goals, and national priorities may not be considered by grantees even when federal funding is involved. In prior work, we also recommended that the Director of the Office of Management and Budget work with agencies and Congress to encourage the use of performance accountability mechanisms in grant design and implementation and promote knowledge transfer among agencies and grantees.

As we have previously reported, performance measures should vary according to program goals and there is no "one-size-fits-all" solution; careful consideration should be given when implementing these mechanisms. Nevertheless, we and other experts have identified key criteria for developing performance measures that could be implemented, for example, in transportation programs, including:

Develop a minimum set of performance measures that can be linked to a limited number of high-level national goals and consistently applied across state and local agencies.

Develop measures that demonstrate progress over time, rather than measures tied to short-term targets.

Develop measures that emphasize incentives, training, and support, rather than penalties, as a preferred way to advance performance.

Some surface transportation programs are already moving toward using performance measures in distributing grants. For example, the National Highway Traffic Safety Administration (NHTSA) administers the Section 408 grant program, which provides funding for states' traffic safety data systems and improvements that better allow states to measure transportation performance. To measure performance, a state, as part of its required strategic plan, must develop goals, or desired outcomes, by which to determine program success. We have recently reported that while some federal transit programs distribute funds based partly on performance, opportunities to improve grant recipients' performance accountability remain. For example, in November 2010 we reported that one of the six formula-based FTA transit grant programs we reviewed—the Urbanized Areas Formula Grant—allocated funding, in part, based on performance; this performance-based portion accounted for less than 5 percent of the total funding distributed through the six programs.

Assuming, for example, that a federal goal was to reduce the backlog of state of good repair needs nationwide and optimize the performance of existing systems—actions that would help transit agencies meet increased passenger demand—then tracking specific outcomes through performance measures that are clearly linked to program goals could provide a strong foundation for holding grant recipients responsible for achieving federal goals. In addition, implementing links between transit funding and performance through the use of financial performance accountability mechanisms could help create incentives for transit agencies to improve their performance and provide the means for measuring overall program performance.
For example, the National Transportation Policy Project, a project of the Bipartisan Policy Center, has recommended that Congress create a Performance Bonus Program that would provide additional funds to states and metropolitan regions based on demonstrated progress toward meeting national performance goals. This program would assess how well states and metropolitan regions reduce their backlog of system preservation needs and optimize the performance of existing transit systems based on proposed performance measures. Recipients could then use Performance Bonus Program funds for any transportation purpose with few restrictions. As a corrective measure, poorly performing states and regions would be subject to greater federal scrutiny and review in the planning process for their formula funds. To help ensure efficient and effective federal transit grant programs, we recently recommended that FTA report to Congress on options for adding performance accountability mechanisms to transit grant programs and that FTA further analyze and use transit agency data, when applicable, to evaluate federal transit program performance.

We provided DOT with a draft of this report for its review and comment. In commenting on the draft, DOT generally agreed with the information presented and provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to interested congressional committees and the Secretary of Transportation. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Wise at 202-512-5731 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To address how transit agencies are responding to increased passenger demand, we reviewed (1) trends in transit ridership and services from 1998 through 2008; (2) challenges, if any, that transit agencies faced during this period to address increased ridership and actions they took in response; and (3) factors that might affect future ridership demand and the ability of transit agencies to meet that demand. The NTD defines operating expenses as those expenses incurred by transit agencies that are associated with operating mass transportation services (i.e., vehicle operations, maintenance, and administration). According to the NTD, capital expenses include the following categories: revenue vehicles, guideway, communication and information systems, fare revenue collection equipment, maintenance facilities, passenger stations, administration buildings, service (nonrevenue) vehicles, and other (including passenger shelters, signs and amenities, and furniture and equipment that are not integral parts of buildings and structures). The NTD also defines capital expenses as having a useful life of greater than one year. In reviewing NTD data, we determined they were reliable for our purposes, which were to provide information on national trends in transit ridership, service, costs, and revenues from 1998 through 2008 for transit agencies offering heavy rail, light rail, and bus service.
To identify challenges transit agencies faced and the actions they took to address increased ridership, we conducted semistructured interviews with officials from 15 selected transit agencies in urbanized areas. We based our selection of these transit agencies on the type of transportation services provided (heavy rail, light rail, or bus), rate of growth in unlinked passenger trips (UPT) from 1998 through 2008, geographic dispersion, and size. While some of the transit agencies we interviewed may provide other types of transit services, our interviews focused on the type of transit service indicated in tables 4 and 5 (either heavy rail, light rail, or bus). For 3 of the 15 transit agencies, we visited the urbanized areas (one with each type of service—heavy rail, light rail, and bus) in which they were located and conducted in-person interviews with representatives of the transit agencies, local governments, metropolitan planning organizations, the business community, advocacy groups, and others in these three areas. Table 4 provides more detailed information about our site visit interviews. We conducted in-depth telephone interviews with officials from the remaining 12 transit agencies, as outlined in table 5. In addition, we reviewed relevant literature and agency-provided documentation, met with officials from FTA, and interviewed transportation researchers and industry and advocacy groups, including the National Association of City Transportation Officials. We also reviewed prior GAO, Congressional Research Service, and Congressional Budget Office reports, as appropriate. To identify what factors might affect future ridership demand and the ability of transit agencies to meet that demand, we reviewed relevant literature, interviewed FTA officials, and spoke with the transit agency officials and stakeholders identified above. We also reviewed relevant documentation provided by these sources and prior GAO reports.

We conducted this performance audit from December 2009 through November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We conducted an analysis to determine whether the heavy rail and bus systems in New York City, New York, distort national transit trends because they comprised about one-third of the nation's UPTs in 2008. We examined the size of various measures of service use and output, expenses, and revenue sources. We found that, with a few exceptions, the omission or inclusion of the New York City data does not distort the national trends. Whereas in 2008 New York City comprised over one-third of the nation's UPTs, results differ for its heavy rail and bus services. New York City's heavy rail system accounted for about 69 percent of the nation's heavy rail UPTs in 2008. In contrast, New York City's buses accounted for about 17 percent of the nation's bus UPTs in 2008. Given the difference in these modes' relative shares nationally, it is unsurprising that including New York City makes a bigger difference to calculations of service use or output for heavy rail than for buses.
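The exclusion check described above reduces to recomputing each growth rate after subtracting New York City's values from the national totals. A minimal sketch (Python) follows; the UPT totals are hypothetical, chosen only to roughly reproduce the nationwide and excluding-New York City growth rates reported in the following discussion, and an actual analysis would repeat the comparison for each measure (UPT, PMT, VRM, VRH) and mode using NTD totals:

```python
def growth(v1998, v2008):
    """Percent change from the 1998 value to the 2008 value."""
    return (v2008 - v1998) / v1998 * 100

def compare(measure, us_98, us_08, nyc_98, nyc_08):
    """Compare nationwide growth with growth excluding New York City."""
    nationwide = growth(us_98, us_08)
    excluding_nyc = growth(us_98 - nyc_98, us_08 - nyc_08)
    print(f"{measure}: nationwide {nationwide:.0f}%, "
          f"excluding New York City {excluding_nyc:.0f}%")

# Hypothetical unlinked passenger trips, in billions, for illustration only
# (New York City comprised roughly one-third of the nation's UPTs in 2008).
compare("UPT", us_98=8.0, us_08=10.2, nyc_98=2.6, nyc_08=3.8)
# Prints approximately: UPT: nationwide 27%, excluding New York City 19%
```

Because New York City's share of the national totals is large, a measure on which its transit grew more slowly than the rest of the country (as with UPTs) shows a visibly lower growth rate once it is included, which is the pattern the tables below document.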
Growth in service output as measured by vehicle revenue miles (VRM) and vehicle revenue hours (VRH) was similar when we compared total values for the United States with total values for the United States excluding New York City. Total VRM grew by about 20 percent nationwide; it grew by about 22 percent for the United States excluding New York City. Total VRH grew by about 23 percent nationwide; it grew by about 25 percent for the United States excluding New York City. Growth in service use as measured by passenger miles traveled (PMT) was also similar: total PMT grew by about 28 percent nationwide and by about 25 percent for the United States excluding New York City. Growth in service use as measured by UPTs, however, was somewhat different: total UPTs grew by about 27 percent nationwide, but only by about 18 percent for the United States excluding New York City.

Service output—analyzing the data at the mode level (heavy rail and bus), there were some differences between the United States and the United States excluding New York City. For heavy rail, both VRMs and VRHs grew much more slowly in New York City as compared with the national trend. Because New York City heavy rail comprised more than half the nation's VRMs in 2008, this disparity also showed up in the national totals: heavy rail VRMs grew by about 19 percent nationwide, whereas heavy rail VRMs in the United States excluding New York City grew by about 26 percent. For heavy rail, VRHs exhibited a similar and even wider difference in growth rates; nationwide, VRHs grew by about 21 percent, while in the United States excluding New York City, VRHs grew by about 31 percent. For buses, the growth rate nationwide was similar to that of the United States excluding New York City: for VRMs, the growth rates were about 18 and 19 percent, respectively, and for VRHs, the growth rates were both about 22 percent.

Service use—analyzing the data at the mode level (heavy rail and bus), there were some differences between the United States and the United States excluding New York City, especially for UPTs. For heavy rail, growth measured by PMT was similar when we compared total values for the United States with total values for the United States excluding New York City: total PMT grew by about 37 percent nationwide and by about 39 percent for the United States excluding New York City. For heavy rail, growth measured by UPTs was quite different: heavy rail UPTs grew by about 48 percent nationwide and by about 31 percent for the United States excluding New York City. For buses, the growth rates of service use were quite similar nationwide as compared with the United States excluding New York City: total PMT grew by about 19 percent nationwide and by about 17 percent for the United States excluding New York City, and total UPTs grew by about 15 percent nationwide and by about 12 percent for the United States excluding New York City.

In general, excluding New York City from our calculations made little difference to growth rates of operating costs, either in terms of mode or function. Total growth rates were close for the United States as compared with the United States excluding New York.
In the case of heavy rail, these rates were about 28 percent and 26 percent, respectively, and in the case of bus, about 37 percent and 35 percent, respectively. There were some differences in the vehicle maintenance category, which affected the bus results, and in the general administration category, which affected the heavy rail results.

Capital costs may behave cyclically; for example, rolling stock of a common age needs to be replaced at the same time. As a result, if New York City's transit capital is at a different phase of its cycle (a different age or amount of use) as compared with the national average, one would expect differences in trends. Total capital cost growth for all modes combined was similar nationwide as compared with the United States excluding New York City: about 68 percent and 71 percent, respectively. For heavy rail, while there were some differences in the growth of capital cost components, the totals were generally similar for the United States as compared with the United States excluding New York City: about 92 percent and 101 percent, respectively. For buses, there were differences in capital costs for the United States as compared with the United States excluding New York City: about 5 percent and 13 percent, respectively. The primary driver of this difference was the approximately 58 percent reduction in capital spending for New York City.

In general, there was little impact on our calculation of funding source shares nationwide as compared with the United States excluding New York City. Tables 6 through 19 provide the data from which we derived our observations about the impact New York City has on national transit trends.

In addition to the individual named above, other key contributors to this report were Steve Cohen, Assistant Director; Lauren Calhoun; Jean Cook; Colin Fallon; Elba Garcia; Brandon Haller; Michael Kendix; Catherine Kim; Mary Koenen; and Joshua Ormond. | Demand for public transportation in the United States reached record highs in 2008 and rose in the decade prior to 2008. Increased demand for public transportation can create opportunities and challenges for communities working to meet demand, improve service, and maintain transit systems, while operating within budgetary constraints. Transit agencies rely on a variety of funding sources, including federal, state, and local entities, and other sources, such as fares. The U.S. Department of Transportation's (DOT) Federal Transit Administration administers federal grant programs transit agencies can use to help meet ridership demand, such as for purchasing buses and modernizing rail systems. As requested, this report addresses (1) trends in transit ridership and services from 1998 through 2008, (2) challenges, if any, transit agencies faced during this period to address increased ridership and actions they took in response, and (3) factors that might affect future ridership demand and the ability of transit agencies to meet that demand. GAO analyzed data from the National Transit Database on transit ridership (i.e., passenger miles traveled), service (i.e., vehicle revenue miles), costs, and revenues; conducted interviews with 15 transit agencies operating heavy rail, light rail, and bus systems; interviewed federal officials and others; and reviewed prior GAO recommendations. DOT generally agreed with the report and provided technical comments.
From 1998 through 2008, the most recent year for which complete data are available, transit ridership grew at a faster rate than transit service. Heavy rail experienced the greatest difference between growth in ridership and service compared with light rail or bus--heavy rail ridership outpaced the provision of service by about 18 percentage points during this period. Transit agency costs and revenues also increased overall from 1998 through 2008, but the relative shares of revenue sources changed. The share of federal funding remained steady while increases in state and local funding shares offset declines in the share of funding from other sources, such as passenger fares. In addition, in 1998 the federal government was the largest source of capital investment in transit; by 2008 local government provided the largest share. From 1998 through 2008, transit agencies faced challenges and took actions to address increased ridership demand. Specifically, agencies faced capacity constraints, including limitations of their vehicles (e.g., too few rail cars and buses) and their system infrastructure (e.g., platforms that were too short to accommodate longer trains). Transit agencies took steps to respond to increased demand, including: adjusting their service by modifying routes, fares, and hours of service; making new system investments, such as expanding fleets and extending platforms; and maintaining and updating existing infrastructure and vehicles. For example, New York City transit officials improved the signaling in their heavy rail system to increase frequency of service. Agencies experienced varying degrees of success in responding to increases in demand--some reported accommodating increases in ridership while others' success was limited. For example, a light rail agency reported that its service area did not keep pace with real estate development, and a bus agency turned away riders. Population growth and other factors are likely to increase future ridership demand, but cost increases and fiscal uncertainties could limit transit agencies' ability to meet this demand. Transit agency officials expressed concern about meeting future increases in ridership due to increased costs of expanding transit systems and maintaining aging infrastructure. Also, transit agencies' funding has been strained since 2008, as state and local funding has decreased with the economic downturn. This is significant because transit agencies previously relied on increases in state and local funding shares to offset decreases in other sources. Given this environment, along with fiscal difficulties facing the nation, it will be a challenge to effectively focus limited resources to maximize the positive effect on transit agencies' services. GAO and others have made recommendations to DOT, Congress, and others on options that could more effectively deliver federal surface transportation programs and help transit agencies address growing ridership. These options are under consideration and include: focusing resources on state of good repair, streamlining the delivery of federal grant programs, and incorporating performance accountability measures to maximize the impact of investments. |
The Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service unprofitable. Amtrak operates a 22,000-mile network, primarily over freight railroad tracks, providing service to 46 states and the District of Columbia. (See fig. 1.) Amtrak owns 650 miles of track, primarily on the Northeast Corridor, which runs between Boston, Massachusetts, and Washington, D.C. The Northeast Corridor is the busiest passenger line in the country, and some 200 million Amtrak and commuter rail travelers use the Corridor, or some portion of it, each year. On some portions of the Corridor, Amtrak provides high-speed rail service (up to 150 miles per hour). In addition, access to the Corridor is crucial for eight commuter railroads (operated by state and local governments) that serve 1.2 million passengers each workday as well as for six freight railroads. At the present time, intercity passenger rail plays only a small part in the nation’s overall transportation system (with the exception of some short-distance routes). In fiscal year 2002, Amtrak served about 23.4 million passengers, or about 64,000 passengers a day. According to Amtrak, about two-thirds of its ridership is wholly or partially on the Northeast Corridor. In contrast, preliminary figures for 2002, the latest year for which data are available, indicate that airlines carried about 1.5 million domestic passengers per day. In 2001, intercity buses carried about 83,000 passengers per day. Amtrak has won sizeable market shares (compared with travel by air) between certain relatively close city-pairs. However, by far most intercity travel remains by automobile. Recent legislation introduced in the Congress has recognized the substantial capital investment required for intercity passenger rail systems. For example, legislation introduced by the Chairman of this Committee last year, the Rail Infrastructure Development and Expansion Act for the 21st Century (H.R. 2950), would have authorized the issuance of tax-exempt bonds, grants, direct loans, and loan guarantees of over $71 billion for high-speed rail infrastructure, corridor development, rehabilitation, and improvement. Legislation introduced by a Member of this Subcommittee in the current session of Congress, the National Rail Infrastructure Program Act (H.R. 1617), would establish a national rail infrastructure trust fund and make about $3 billion available to states for projects that address railroad infrastructure deficiencies in order to provide substantial public benefits, such as mitigating highway congestion and reducing transportation emissions. Projects eligible for funds under this legislation could potentially benefit intercity passenger rail systems. Legislation introduced in the Senate this session (S. 104) would authorize significant funding for passenger rail investment, including about $2 billion annually for Northeast Corridor growth investments, about $1.4 billion in capital investments, and about $1.5 billion annually for development of high-speed rail corridors. In a hearing before the House Committee on Appropriations, Subcommittee on Transportation, Treasury, and Independent Agencies, held on April 10, 2003, the President of Amtrak and the Deputy Secretary of Transportation offered differing views on Amtrak and the future of intercity passenger rail service in America. 
Amtrak’s President focused primarily on the importance of Amtrak’s receiving the funding it needs to improve the condition of its equipment, its reliability and utilization, and its infrastructure. The Deputy Secretary, in contrast, stated that the administration has declared principles for a fundamental restructuring of the manner in which federal assistance is provided for intercity passenger rail service. These principles include creating a rail service that is driven by sound economics, fosters competition, and establishes a long-term partnership between states and the federal government to sustain an economically viable system. Current federal funding is not sufficient to support the existing level of intercity passenger rail service being provided by Amtrak. Over the long term, significantly higher levels of investment will be needed to stabilize the existing system and get it into a state of good repair. Amtrak has reported that just doing that will require nearly $2 billion annually over the next several years—about twice the amount provided annually over the last 5 years. The total amount of additional funding needed is not known but will likely be in the tens of billions of dollars. From fiscal year 1976 through fiscal year 2003, the federal government provided Amtrak with over $26 billion (nominal dollars) in operating and capital subsidies. Amtrak’s financial condition has never been strong, and the corporation has been on the edge of bankruptcy several times. The Amtrak Reform and Accountability Act of 1997 required Amtrak to reach operational self-sufficiency by December 2002. However, Amtrak’s financial outlook since this legislation was enacted has remained troubled, and the corporation has gone from one financial crisis to the next. In March 1998, we reported that Amtrak’s financial condition had continued to deteriorate and that it would continue to face challenges in improving its financial health. In September 2000, we again reported that Amtrak was struggling in its quest to achieve operational self-sufficiency and that it had made limited progress in reducing its need for operating support. Amtrak’s financial struggles have become even more acute in recent years. For example, in 2001 Amtrak mortgaged a portion of Pennsylvania Station in New York City to generate enough cash to meet its expenses, and in July 2002, the Department of Transportation approved a $100 million loan because the railroad was running out of cash. As recently as a few months ago, Amtrak said that its financial and physical condition was still precarious and that federal support of about $1.8 billion would be required in fiscal year 2004 just to stabilize its system. This is about twice the approximately $1 billion in federal funding Amtrak has received annually over the last 5 years. For fiscal years 1999 through 2003, Amtrak received a total of about $4.7 billion in federal operating and capital support. Amtrak has indicated that it will require $2 billion annually in federal contributions over the next few years, with a focus on stabilizing its system. This request does not address additional capital investments that might be required for enhancements or expansions of Amtrak’s system. In February 2002, Amtrak estimated that its deferred capital backlog was about $6 billion ($3.8 billion of which was attributed to the Northeast Corridor). 
Additional capital funds would be needed to enhance and modernize its system, such as undertaking infrastructure improvements that permit faster trip times for Amtrak’s trains. For example, in January 2000, Amtrak estimated that about $12 billion (in 2000 dollars) would be needed between fiscal years 2001 and 2025 to improve the Northeast Corridor between New York City and Washington, D.C., in order to increase the reliability of the Corridor and make enhancements that permit higher speed service. Amtrak’s share of this cost—estimated at about $6 billion—is not fully included in its expected funding request. To cover needed operating subsidies, Amtrak can be expected to need about $800 million per year, or about $4 billion over the 5-year period 2005 to 2009. This amount appears to be included within the projected request for $2 billion annually. For fiscal year 2004, Amtrak estimates that it will require about $768 million in operating subsidies—nearly 50 percent above its 2003 appropriation ($522 million). By comparison, Amtrak received about $200 million in fiscal year 2002. Operating subsidies are needed because virtually all of Amtrak’s routes fail to generate operating profits. For fiscal year 2002, only one of Amtrak’s routes, the Acela Express/Metroliner, earned an operating profit (about $78 million). Operating losses on other routes ranged from about $700,000 to about $77 million. Although Amtrak’s President has said that actions to maintain solvency and create a lean organization with tight financial controls have been initiated, operating a national intercity passenger rail system structured similarly to Amtrak’s current system will likely require substantial operating subsidies for the foreseeable future. The amount of those operating subsidy needs, however, is unknown. Part of Amtrak’s need for operating subsidies involves its ability to control costs. In fiscal year 2002, Amtrak’s operating costs decreased by $76.7 million compared with fiscal year 2001. According to Amtrak, this was partially accomplished by streamlining its business and eliminating 1,000 positions. Amtrak’s President recently testified before the House Appropriations Committee that one of the challenges for Amtrak would be generating a higher level of productivity from its workforce. As we reported in 2000, Amtrak had attempted to control cost growth by improving labor productivity, but it had no labor productivity measures for its different lines of business with which to gauge its progress or efficiency. Amtrak is still in the process of developing these measures. Amtrak’s identified funding requests do not address the future needs that might be required to expand or enhance service or develop high-speed rail corridors. According to Amtrak, additional federal and state investment—over and above the $2 billion per year—would be required to address these issues and begin developing high-speed rail corridors. As we reported last year, the total cost to develop high-speed rail corridors is unknown because these initiatives are in various stages of planning. However, preliminary Amtrak estimates indicate the capital costs to develop these other corridors (along with the Northeast Corridor) could be between $50 billion and $70 billion over the next 20 years. 
The American Association of State Highway and Transportation Officials—a trade association of state and local transportation officials—also recently reported that about $60 billion would be required to develop these corridors and Amtrak’s Northeast Corridor over a 20-year period. Based on GAO’s analyses of federal investment approaches across a broad stratum of national activities, we have found several key components of a framework for evaluating federal investments. Congress may find this framework useful to consider as it develops a national intercity passenger rail policy. Components of the framework include: (1) establishing clear, nonconflicting goals, (2) establishing the roles of governmental and private entities, (3) establishing funding approaches that focus on and provide incentives for results and accountability, and (4) ensuring that the strategies developed address diverse stakeholder interests and limit unintended consequences. By clearly defining nonconflicting goals for an intercity passenger rail system, the Congress could provide a basis for guiding federal participation. Nonconflicting goals provide a clear direction, establish priorities among competing issues, specify the desired results, and lay the foundation for such other decisions as determining how the assistance will be provided, the duration of that assistance, and what the total value of the assistance should be. Such goals are best considered in the context of the relationship of an intercity passenger rail system to other transportation modes. Transportation experts highlight the need to view the part any single mode plays in the context of the entire transportation system in addressing congestion, mobility, and other challenges. A systemwide approach to transportation planning and funding, as opposed to focusing on a single mode or type of travel, could improve the focus on outcomes and the contribution to customer or community needs. The Congress could choose any number or type of goals when developing a national policy. For instance, it might decide that the goals should maximize some or all of the benefits of intercity passenger rail. As we reported last year, intercity passenger rail has the potential to provide broad public benefits, such as stemming increases in highway and air congestion, reducing automobile pollution, and reducing fuel consumption and energy dependency. We pointed out, however, that some of these benefits might be difficult to obtain. For instance, for rail transport to capture the market share necessary to reduce air travel congestion, the distance between potential intercity passenger rail cities must be short enough to make rail travel times competitive with air travel times (at comparable costs and levels of comfort). Amtrak’s market share decreases rapidly as travel time and distance increase. As we previously reported, Amtrak’s market share compared with air service (most intercity travel is by automobile) was over 80 percent between New York City and Philadelphia and between Philadelphia and Washington, D.C.—both relatively short-distance markets. But for longer distance markets, such as New York City to Chicago, Illinois, and Chicago to Washington, D.C., Amtrak’s market share compared with air service was less than 10 percent. (See fig. 2.) 
Studies suggest that as the speed of intercity passenger rail increases, the potential benefits attributable to reductions in airport and highway delays increase, as does the potential distance over which passenger rail is able to compete with air transport. The potential for intercity passenger rail to reduce air congestion is also greater where there is little, or no, room for additional runways and where there is limited competition between airlines resulting in relatively high air fares. See appendix I for more information on potential benefits from intercity passenger rail travel. To help ensure that the goals are achieved, conflicting goals should be avoided to the maximum extent possible, so that achieving one goal does not reduce the likelihood of attaining another. In addition, the goals should be measurable—that is, they should identify the amount of public benefits to be obtained. Measurable goals make it easier to determine success or failure in attaining the goals and to hold intercity passenger rail systems accountable for results. In this context, we note that the statements made by the President of Amtrak and the Deputy Secretary of Transportation on April 10 both reflect efforts to establish goals. The President of Amtrak stated that his goals over the past year were to maintain solvency, begin a program of critical capital investment, create a lean organization with tight financial controls, and build a zero-based budget. The Deputy Secretary stated that the Administration would support specific performance targets that can be met on an annual basis, and he discussed five principles articulated by the Secretary of Transportation for reforming intercity passenger rail. While these efforts are clearly important, a broader consideration of how the passenger rail system fits with other modes of transportation and how changes to the system might maximize public benefits would be a critical first step in developing intercity passenger rail policy. Establishing the relative roles of federal, state, and local governments and private sector entities, to the extent practicable, could better ensure that goals are achieved. The Deputy Secretary of Transportation touched on this issue when he stated on April 10 that the department hopes to establish a long-term partnership between the states and the federal government to support intercity passenger rail service. The President of Amtrak also described how Amtrak had entered into negotiations with state partners to have them cover 100 percent of the direct operating loss for intercity passenger rail services that receive state support. Defining roles helps to establish incentives for leadership, financial participation, risk-sharing, and accountability among the participating parties. Roles are defined not only by specific structures and organizations, but also by the forms, conditions, and terms of assistance. Regarding structures and organizations as they pertain to intercity passenger rail travel, the Congress will need to pose and resolve such questions as: Should there be a government-established entity, such as Amtrak, with a monopoly over intercity passenger rail, or could federal and state governments allow private operators to receive government assistance on a competitive basis to provide intercity passenger rail service? How much independence should the entity or entities providing rail service have to make decisions? 
A recent report on passenger rail restructurings in other countries stated that successful reform plans involved an increasing degree of independence of the rail entity from political influence. The Amtrak Reform Council reported in February 2002 that one of the factors influencing Amtrak’s decisionmaking and financial performance was a susceptibility to political pressure. Will routes and services be determined using a top-down approach by a central entity, such as the federal government or an organization like Amtrak, or using a bottom-up approach at the state or local level, focusing on where intercity passenger rail can generate the most public benefits for particular citizens? Establishing the roles of the federal, state, and local governments will be particularly important. The federal government is currently the major financer of intercity passenger rail systems and has provided Amtrak with about $1 billion per year in federal support over the last 5 years. Although several states and localities may receive significant benefits from Amtrak’s operations, state support for Amtrak has been relatively limited—about $168 million in fiscal year 2002. One option for restructuring intercity passenger rail is to increase the role of state and local governments in financing the rail system. The ability of states to provide and maintain financial support for intercity passenger rail is unknown, however. We reported last year that most of the officials from 17 state departments of transportation we contacted were willing to provide funds for intercity passenger rail. However, they said that continued federal investment would be required, and they expressed concern over their ability to successfully form partnerships with other states to finance intercity passenger rail service. One of the potential impediments cited was determining a fair cost-sharing arrangement for capital improvements. This is consistent with what we found in our 1998 report on the potential issues of Amtrak liquidation. In that report, officials from states we spoke with also cited potential problems with compacts between states to provide intercity passenger rail service. Among the potential problems cited was reaching agreement on the allocation of costs between states. Officials from three states we spoke with that were not on the Northeast Corridor but generated a large volume of intercity rail passengers also expressed concerns about (1) the potentially high cost of continuing service, (2) possible difficulties in negotiating access to tracks, and (3) lack of an incentive to continue service if Amtrak’s national route network were ended. As previously mentioned, the Amtrak Reform Council has recommended introducing competition for intercity passenger rail service. The Secretary of Transportation also supports carefully managed competition. If intercity passenger rail service were restructured to allow private rail operators to bid on the opportunity to provide service, however, those operators would still likely require operating subsidies. Four of the five private rail companies we contacted last year said that, even though they would provide efficient passenger rail service, they would still need operating subsidies. A fifth company had not yet determined whether operating subsidies would be required. The choice and design of financing mechanisms, including mechanisms used to provide federal assistance, will have important consequences for performance, transparency, and accountability. 
A wide variety of mechanisms are available to provide financial assistance, including grants, bonds, tax subsidies, loans, loan guarantees, and user fees. Each of these varies in the extent to which it provides a stable source of revenue that covers capital needs, ensures that investments provide an appropriate return relative to investments in other intercity transportation systems, leverages the federal dollar, and balances accountability and flexibility. These mechanisms can be structured to support or facilitate public-private partnerships. According to a recent report, a lesson learned from intercity passenger rail restructuring in other countries was that one goal of most such reforms was to increase the transparency of government financial support. In general, the intent of policy makers was to hold railroads more accountable by eliminating cross-subsidization of services. In choosing the funding mechanism, it will be important to protect the federal government’s interests. This can be done in a variety of ways. Most recently, in Amtrak’s fiscal year 2003 appropriations, the Congress adopted measures to increase the oversight and accountability over federal funds used for intercity passenger rail. These measures include requiring (1) that federal funds be allocated by the Secretary of Transportation through a grant-making process and (2) that Amtrak prepare and submit a business plan to the Congress, with federal spending limited to projects contained in the plan. In addition, the conference report requires the Secretary of Transportation to vouch for the accuracy of Amtrak’s financial information. We believe these are good first steps. Other measures that are available include establishing criteria for the evaluation of projects and the use of federal funds similar to those used by the Federal Transit Administration in its New Starts program, incorporating accountability requirements similar to those in the Government Performance and Results Act, and requiring intercity passenger operators to assume some level of financial risk in their operations. Finally, it will be important to consider diverse stakeholder interests in developing intercity passenger rail policy and limit unintended consequences. Revising the structure of intercity passenger rail could have substantial effects on a number of stakeholders, including Amtrak and its employees, the railroad retirement and unemployment systems, commuter railroads, states, and freight railroads. Amtrak, its employees and creditors, and the railroad retirement and unemployment systems all have substantial financial involvement with Amtrak and could be the most directly affected by a change in intercity passenger rail policy, particularly if Amtrak were to be liquidated. At the request of this Committee, we have reported on the potential costs that might emerge if Amtrak were liquidated. We take no position on whether Amtrak should be liquidated, but our work shows that there could be substantial financial issues associated with such an action. We reported that if Amtrak had been liquidated on December 31, 2001, secured and unsecured creditors, along with Amtrak’s stockholders, would have had about $44 billion in claims against Amtrak’s estate. The federal government would have been by far the largest claimant. However, it is not likely these claims would have been fully satisfied since, aside from the Northeast Corridor, the value of Amtrak’s assets would have been less than the claims against them. 
Amtrak liquidation would also have affected the railroad retirement and unemployment systems. Appendix II provides additional information on the financial implications of a potential liquidation. Stakeholders such as commuter railroads, states, and freight railroads could also be significantly affected by a change in policy. Commuter railroads in the Northeast could be especially affected since Amtrak’s Northeast Corridor is a vital piece of infrastructure that handles about 1,200 Amtrak, commuter, and freight trains a day. Since commuter railroads are by far the heaviest users of the Northeast Corridor and depend on this corridor to bring, on average, about 1.2 million passengers a day into major cities, it will be important to deal with this corridor carefully. As previously mentioned, state concerns largely focus on the costs to provide intercity passenger rail service as well as access rights to freight railroad tracks and the cost of this access. How these issues are handled could materially affect state decisions concerning whether to support intercity passenger rail. Finally, freight railroads are concerned about the degree to which intercity passenger rail affects their ability to serve their customers and earn profits. Increased conventional or high-speed passenger rail service could severely affect their operations. While the various stakeholders may all be able to share a general vision of the intercity passenger rail system, they may diverge in their priorities. Policy changes, if not thoroughly thought through, could have unintended and undesirable consequences for one or more of these stakeholders. In summary, Mr. Chairman, intercity passenger rail continues to be at a crossroads. Maintaining the current approach will likely require substantial federal operating and capital support—but at much higher levels than currently provided. It will be important to take a systemwide approach in considering how the passenger rail system fits with other modes of transportation. Alternative approaches to providing intercity passenger rail service may be available that can provide public benefits and complement other modes of transportation as an integrated part of the national transportation network. Such approaches will undoubtedly require a substantial political and financial commitment over an extended period of time. When Japan restructured its intercity passenger rail system in the late 1980s and 1990s, for example, the reform plan was carried out over a decade and two political administrations. The framework I have described today is meant to help the Congress as it asks some fundamental questions about the future of intercity passenger rail: What does the nation want or need from this mode of transportation? Who should pay for it? How should it be paid for? And if changes to the current system are necessary, how can we make those changes while minimizing unintended consequences and maximizing public benefits? We stand ready to assist the Congress as it deliberates answers to those questions. This concludes my prepared remarks. I would be pleased to answer any questions you or other Members of the Subcommittee might have. For further information, please contact JayEtta Z. Hecker at [email protected] or at (202) 512-2834. Individuals making key contributions to this statement include Colin Fallon, Richard Jorgenson, and Steve Martin. 
Intercity passenger rail has the potential to generate benefits to society (called “public benefits”) by complementing other more heavily used modes of transportation in those markets in which rail transport can be competitive. These benefits include reduced highway and air congestion, pollution, and energy dependence, as well as the preservation of an option for travelers to use passenger rail systems in the future. One potential public benefit of intercity passenger rail service is the reduced highway congestion that will result if some people travel by train rather than on highways. Where congestion exists, intercity passenger rail would not have to capture a large share of the travelers who would otherwise use other modes of transportation in order to generate a substantial public benefit from reduced highway congestion. Roadway congestion often results when vehicles access a roadway that is already at or near capacity. The additional users have a disproportionate, detrimental effect on the flow of traffic. As a result, diverting a small group of highway users to rail transport could reduce congestion and have a substantial public benefit. The specific markets where intercity passenger rail has the most potential to generate public benefits by reducing highway congestion are regions where the highway systems are consistently operating beyond capacity and are characterized by slow moving traffic. (See fig. 3.) Therefore, rail service likely to alleviate the most highway congestion would parallel congested corridors that link cities with significant intercity transportation demand and urban congestion, such as in the Northeast. However, realizing these benefits might be difficult because the prices people pay to drive do not reflect the true costs of driving (some costs due to pollution and congestion are borne by others), and Americans have a strong attachment to cars as their principal means of transportation. Intercity passenger rail could also potentially ease air travel congestion. This is contingent on intercity passenger rail being able to capture enough market share to reduce the number of flights between cities through frequent, competitively priced, and attractive service. For rail transport to capture the market share necessary to reduce air travel congestion, the distance between potential intercity passenger rail cities must be short enough to make rail travel times competitive with air travel. Amtrak’s market share decreases rapidly as travel time and distance increase. For example, as we reported last year, Amtrak’s market share compared with air service between New York City and Philadelphia, Pennsylvania, and between Philadelphia and Washington, D.C.—relatively short-distance markets—was over 80 percent. But for longer distance markets, such as New York City to Chicago, Illinois, and Chicago to Washington, D.C., Amtrak’s market share compared with air service was less than 10 percent. Studies suggest that as the speed of intercity passenger rail increases, the potential benefits attributable to reductions in airport and highway delays increase, as does the potential distance over which passenger rail is able to compete with air transport. The potential for intercity passenger rail to reduce air congestion is also greater where there is little, or no, room for additional runways and where there is limited competition between airlines resulting in relatively high air fares. 
Intercity passenger rail may also generate potential public benefits by reducing vehicle emissions, lowering pollution, and indirectly mitigating health and environmental costs. This could happen if intercity passenger rail can provide the incentive to shift people out of their cars and onto rail. However, the magnitude of this benefit depends to a large extent on the type of technology used to power rail locomotives. Conventional electric rail systems (taking into account the emissions of power plants that burn coal, natural gas, or fuel oil to generate the electricity) emit less carbon monoxide, hydrocarbons, and nitrogen oxides per passenger-mile than conventional diesel-powered rail. In addition, within the range that most vehicles are driven, automobile carbon monoxide and hydrocarbon emissions increase as vehicle speed decreases. Therefore, to the extent intercity passenger rail can reduce roadway congestion, these forms of pollution could be reduced by having fewer vehicles on the highways. The ability of intercity passenger rail to generate these benefits depends on both the level of pollution and the likelihood that travelers will choose rail service over other modes of transportation. Markets where intercity passenger rail service could be competitive with other modes in terms of price, travel time, and quality of service offer the greatest opportunity to reduce pollution. In general, intercity passenger rail can be competitive with other transportation modes in short-distance markets (such as New York City to Philadelphia). However, intercity passenger rail is less competitive in longer distance markets. The extent of emissions reduction could also vary and be small. For example, a 2002 study by the California Department of Transportation of improvements to three state-supported Amtrak intercity rail routes in California found that hydrocarbon and carbon dioxide emissions would decrease with the improvements. But certain nitrogen oxide and particulate compounds emitted from diesel-fuel burning locomotives would increase. Similarly, our 1995 analysis of the Los Angeles to San Diego corridor projected that eliminating rail service between these cities would result in a net increase—albeit small—in vehicle emissions from additional automobiles, intercity buses, and aircraft. Intercity passenger rail may also generate public benefits by reducing the nation’s dependence on gasoline and fossil fuels. This result would be achieved only if intercity passenger rail required less fuel than the other modes of transportation travelers would use if intercity passenger rail were not available. The extent of the benefits would depend on how many fewer trips were taken on other, less fuel-efficient modes of transportation and on the technology of the locomotives used. Again, the 2002 California Department of Transportation study of improvements to the three Amtrak intercity routes in California (see above) estimated that, in 2011, making the improvements and expanding service could save 13 million gallons of gasoline. Similarly, in October 2002, the Federal Railroad and Federal Highway Administrations made a preliminary finding that various improvements to extend high-speed rail service (up to 110 miles per hour) from Washington, D.C., to Charlotte, North Carolina, could save between 6.6 million and 10.4 million gallons of gasoline per year. 
Finally, intercity passenger rail may generate public benefits by satisfying option demand—that is, by serving as an alternative to other transportation modes (such as air and automobiles) that society is willing to pay for simply to retain the option of using it in the future. For some people, having the option of rail service available in case their circumstances change or they have concerns about using another transportation mode has value, even if they do not plan to currently use rail service. Similarly, intercity passenger rail may have nonuse, or existence, value. Under this concept, people receive value from intercity passenger rail from knowing that it exists, even if they do not plan to use it. Quantifying these benefits is difficult and can be controversial. In September 2002, we reported on some of the potential financial issues if Amtrak were to undergo liquidation. These issues are discussed in this appendix. If Amtrak had been liquidated on December 31, 2001, secured and unsecured creditors, including the federal government and Amtrak’s employees, and stockholders would have had about $44 billion in potential claims and ownership interests against Amtrak’s estate. (See fig. 4.) The federal government would have been by far the largest secured creditor (for property and equipment) and would have had the largest ownership interest (in preferred stock)—accounting for about 80 percent (about $35.7 billion) of the total amount. The federal claims largely arise from two promissory notes issued by Amtrak and held by the federal government. The first note represents a secured interest on Amtrak’s real property (primarily Amtrak’s Northeast Corridor) and matures in about 970 years. However, in June 2001, in conjunction with Amtrak’s mortgage of a portion of Pennsylvania Station in New York City, the federal government strengthened its position in relation to this note and made the principal and interest due and payable if Amtrak files for bankruptcy and is liquidated or if Amtrak defaults under the mortgage. Based on information provided by the Federal Railroad Administration, we calculated that had Amtrak been liquidated on December 31, 2001, the federal government would have been due about $14.2 billion in principal and interest on this note. The second note is secured by a lien on Amtrak’s passenger cars and locomotives and matures on November 1, 2082. This note has successive 99-year renewal terms. If Amtrak had been liquidated on December 31, 2001, this note would have been accelerated, and about $4.4 billion in principal and interest would have become immediately due and payable. The majority of secured property claims by lenders other than the U.S. government would have been associated with passenger cars and equipment ($1.5 billion) and locomotives ($941 million). As of December 31, 2001, Amtrak’s data showed that unsecured liabilities totaled about $4.4 billion. About 70 percent ($3.2 billion) would have been for labor protection payments to terminated Amtrak employees if Amtrak had been liquidated. Materials and supplies provided by vendors ($304 million) and unpaid employees’ wages and vacation and sick pay ($278 million) were among the largest remaining obligations. The potential claims for labor protection on December 31, 2001, were about $2.9 billion less than we reported in 1998. The difference stems from changes made by the Amtrak Reform and Accountability Act of 1997. 
This act eliminated the statutory right to labor protection, made labor protection subject to collective bargaining, and required Amtrak to negotiate new labor protection arrangements with its employees. As a result of these changes and an October 1999 arbitration decision, labor protection was capped at 5 years (compared with 6 years under the statutory provisions), employees with less than 2 years of service were made ineligible for labor protection payments, and payments were based on a sliding scale that provided less payout for each year worked than did the previous system. According to Amtrak, this accounted for about $1.8 billion of the cost difference. Amtrak attributed an additional $950 million to management employees no longer being eligible for labor protection payments, since they were not represented by a formal labor organization and the Amtrak Reform and Accountability Act of 1997 provided no process for substitute protection for these employees. The U.S. government holds all of Amtrak’s preferred stock, and four corporations hold Amtrak’s common stock. The preferred and common stock had recorded values of about $10.9 billion and $94 million, respectively, as of December 31, 2001. In addition, preferred stockholders were entitled to an annual cumulative dividend of at least 6 percent until 1997, when Amtrak’s enabling statute was amended to eliminate preferred stockholders’ entitlement to dividends. No preferred stock dividends were ever declared or paid. However, Amtrak had calculated cumulative preferred stock dividends from 1981 to 1997 to be about $6.2 billion. In a liquidation, the amount of the preferred stockholders’ interest would include all cumulative unpaid dividends. Thus, the federal government, as the sole preferred stockholder, would have had about $17 billion in ownership interest had Amtrak been liquidated on December 31, 2001. It is not likely that all secured or unsecured creditor claims or ownership interests would have been satisfied because, aside from the Northeast Corridor, Amtrak’s assets available to satisfy these claims and interests (such as equipment and materials and supplies) are old, have little value, or appear unlikely to have a value equal to the claims against them. In addition, the value of Amtrak’s most valuable asset, the Northeast Corridor, has not been tested. While the corridor has substantial value, it is subject to easements and has, according to Amtrak, at least $3.8 billion in deferred maintenance. Liquidation of Amtrak would also affect the railroad retirement and unemployment systems. Amtrak is a participant in both systems. Since the retirement system is on a modified pay-as-you-go basis, the financial health of the system largely depends on the size of the workforce, the taxes derived from this workforce, and the amount of benefits paid to retired and disabled individuals and their beneficiaries. Payroll taxes levied on employers and employees are the primary sources of the retirement system’s income. In 2001, Amtrak paid about $428 million in payroll taxes into the railroad retirement account. A loss of this contribution would have a significant financial impact on the system. 
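As a check on the ownership-interest figure, the approximately $17 billion cited above is simply the recorded preferred stock value plus the cumulative unpaid dividends, both given earlier in this appendix:

\[ \underbrace{\$10.9\ \text{billion}}_{\text{recorded preferred stock}} + \underbrace{\$6.2\ \text{billion}}_{\text{cumulative unpaid dividends, 1981 to 1997}} \approx \$17.1\ \text{billion} \]

which, rounded, is the about $17 billion ownership interest the federal government, as sole preferred stockholder, would have held in a liquidation.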
The Railroad Retirement Board (Board) estimated that, if Amtrak had been liquidated on December 31, 2001, and no action had been taken to increase tier II payroll taxes beyond that already planned or to reduce benefit levels, the railroad retirement account would have started to decline in 2006 and would have been depleted by 2024. If tier II taxes had been increased immediately (that is, in 2002) to offset the expected deficit in 2024, the Board estimated that tier II tax rates would have had to increase by about 8 percent in 2002 (to 22.1 percent), decrease slightly in 2003, and then level off until 2018. After 2018, the tier II rate would have increased by about 7 percent again (to 24.6 percent). In all cases, the tier II tax rate would have been 1.64 percentage points higher than it would have been if Amtrak had not undergone liquidation. Similarly, Amtrak liquidation would have affected tier I tax revenues and benefit payments as a result of Amtrak employees retiring and beginning to collect benefit payments or losing entitlement to tier I benefits because they were no longer earning tier I service credits. Participants in the railroad unemployment system would also have been affected by an Amtrak liquidation; however, the financial effects would have been immediate but short term. The Board estimated that if Amtrak had been liquidated on December 31, 2001, separated Amtrak employees would have received a total of $344 million in benefit payments during fiscal years 2002 and 2003. The cash reserves of the unemployment system would have been exhausted in 2002, and a total of about $340 million would have been borrowed from the railroad retirement account, as permitted by statute, from 2002 through 2004 to make these benefit payments. The peak loan balance would have been $349 million, including interest, with all loans repaid in 2005. To pay for these benefits and repay the loans, the Board would have required that other railroads and participants in the unemployment system increase their payroll tax contributions. The Board estimated that, between 2002 and 2004, the average tax rate would have increased from about 4 percent to 12.5 percent, before decreasing to 9.6 percent in 2005.
Major Management Challenges and Program Risks: Department of Transportation. GAO-03-108. Washington, D.C.: January 2003.
Marine Transportation: Federal Financing and a Framework for Future Infrastructure Investment. GAO-02-1033. Washington, D.C.: September 9, 2002.
Regulatory Programs: Balancing Federal and State Responsibilities for Standard Setting and Implementation. GAO-02-495. Washington, D.C.: March 20, 2002.
Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002.
Budget Issues: Long-Term Fiscal Challenges. GAO-02-467T. Washington, D.C.: February 27, 2002.
Commercial Aviation: A Framework for Considering Federal Financial Assistance. GAO-01-1163T. Washington, D.C.: September 20, 2001.
Mass Transit: Many Management Successes at WMATA, but Capital Planning Could Be Enhanced. GAO-01-744. Washington, D.C.: July 3, 2001.
Executive Guide: Leading Practices in Capital Decision-Making. GAO/AIMD-99-32. Washington, D.C.: December 1998.
Federal Budget: Choosing Public Investment Programs. GAO/AIMD-93-25. Washington, D.C.: July 23, 1993.
Guidelines for Rescuing Large Failing Firms and Municipalities. GAO/GGD-84-34. Washington, D.C.: March 29, 1984. 
Intercity Passenger Rail: Potential Financial Issues in the Event That Amtrak Undergoes Liquidation. GAO-02-871. Washington, D.C.: September 20, 2002.
Financial Management: Amtrak’s Route Profitability Schedules Need Improvement. GAO-02-912R. Washington, D.C.: July 15, 2002. | The Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service unprofitable. Amtrak operates a 22,000-mile network, primarily over freight railroad tracks, providing service to 46 states and the District of Columbia. Most of Amtrak's passengers travel on the Northeast Corridor, which runs between Boston, Massachusetts, and Washington, D.C. On some portions of the Corridor, Amtrak provides high-speed rail service (up to 150 miles per hour). Since its inception, Amtrak has struggled to earn revenues and run an efficient operation. Recent years have seen Amtrak continue to struggle financially. In February 2003, Amtrak reported that it would need several billion dollars from the federal government over the next few years to sustain operations. However, some have indicated that there needs to be a fundamental reassessment of how intercity passenger rail is structured and financed. Options raise questions about whether Amtrak should be purely an operating company, whether competition should be introduced for providing service, and whether states should assume a greater financial role in the services that are provided. Compared to current levels of federal funding, substantially higher federal investment will be required in the future to stabilize and sustain Amtrak's existing network. Amtrak will be seeking about $2 billion per year over the next several years to stabilize its system and begin addressing its deferred maintenance needs and to cover operating losses. This is about twice the federal funding Amtrak has received annually over the last 5 years. However, Amtrak's identified funding requests do not address potential future needs to enhance or expand service or develop high-speed rail corridors, which Amtrak has previously estimated at up to $70 billion over the next 20 years. According to Amtrak, this will require additional federal and state investment--over and above the $2 billion annually in identified needs. Based on analyses of federal investment approaches across a broad stratum of national activities, we have identified several key components of a framework for evaluating federal investments. The Congress might find this framework useful as it deliberates the future of intercity passenger rail. At the outset, clearly defined goals would provide the foundation for making other decisions. For example, if reducing air and highway congestion were a goal, this may only be achievable in limited markets, because Amtrak's market share decreases rapidly as travel time and distance increase. To improve the focus on outcomes, it will be important for Congress to consider a systemwide approach, as opposed to a focus on one mode or type of travel. Establishing the roles of governmental and private entities could better ensure that goals are achieved. Finally, the choice and design of financing mechanisms will also have important consequences for performance as well as transparency and accountability. |
Although the sanctions curbed the Iraqi regime’s ability to advance its military and weapons of mass destruction programs, the UN established a weak control environment for the Oil for Food program at its beginning due to compromises it made with the Iraqi government and neighboring states. For example, the UN allowed Iraq to control contract negotiations for imported commodities with little oversight, enabling the regime to obtain illicit funds through contract surcharges and kickbacks. Several countries in the region depended on Iraqi trade, but no provisions were made to address the economic impact of the sanctions on these countries. This undermined international support for the sanctions and allowed Iraq to smuggle oil outside the Oil for Food program. The sanctions helped prevent the Iraqi regime from obtaining prohibited military and dual-use items, but little attention was given to oversight of the economic activities related to the Oil for Food program, such as monitoring the price and value of Iraq’s contracts. Allowing Iraq to obtain revenues outside the Oil for Food program undermined the goals of containing the regime and using its oil revenues for UN-managed assistance to benefit the Iraqi people. When the UN first proposed the Oil for Food program in 1991, it recognized the vulnerability inherent in allowing Iraq control over the contracting process. At that time, the Secretary General proposed that the UN, an independent agent, or the Iraqi government be given the responsibility to negotiate contracts with oil purchasers and commodity suppliers. However, the Secretary General subsequently concluded that it would be highly unusual or impractical for the UN or an independent agent to trade Iraq’s oil or purchase commodities and recommended that Iraq negotiate the contracts and select the contractors. Nonetheless, he stated that the UN and Security Council must ensure that Iraq’s contracting did not circumvent the sanctions and was not fraudulent. Accordingly, the Security Council proposed that UN agents review the contracts and compliance at the oil ministry. Iraq refused these conditions. By the mid-1990s, humanitarian conditions had worsened. The UN reported that the average Iraqi’s food intake was about 1,275 calories per day, compared with the standard requirement of 2,100 calories. In April 1995, the Security Council passed resolution 986 to permit Iraq to use its oil sales to finance humanitarian assistance. Against a backdrop of pressure to maintain sanctions while addressing emergency humanitarian needs, the UN conceded to Iraq’s demand that it retain independent control over contract negotiations. Accordingly, a May 1996 memorandum of understanding between the UN and Iraq allowed Iraq to directly tender and negotiate contracts without UN oversight and to distribute imported goods to the intended recipients. When the Oil for Food program began, the UN was responsible for confirming the equitable distribution of commodities, ensuring the effectiveness of program operations, and determining Iraq’s humanitarian needs. According to the memorandum of understanding, the Iraqi government was to provide UN observers with full cooperation and access to distribution activities. However, observers faced intimidation and restrictions from Iraqi regime officials in carrying out their duties. 
According to a former UN official, observers could not conduct random spot checks and had to rely on distribution information provided by ministry officials, who then steered them to specific locations. The Independent Inquiry Committee reported that observers were required to have government escorts and cited various instances of intimidation and interference by Iraqi officials. The committee concluded that the limits placed on the observers’ ability to ask questions and gather information affected the UN Secretariat’s ability to provide complete field reports to the sanctions committee. Under Security Council resolutions, all member states had the responsibility for enforcing sanctions. For Iraq, the UN depended on neighboring countries to deter illicit commodity imports and smuggling. However, concessions to regional trade activity affected the sanctions environment and allowed the Iraqi regime to obtain revenues outside the Oil for Food program. Although oil sales outside the program were prohibited, the Security Council’s Iraq sanctions committee did not address pre-existing trade between Iraq and other member states, and no provisions were made for countries that relied heavily on trade with Iraq. Illicit oil sales were primarily conducted on the basis of formal trade agreements. For example, trade agreements with Iraq allowed Jordan—a U.S. ally dependent on Iraqi trade—to purchase heavily discounted oil in exchange for up to $300 million in Jordanian goods. Members of the sanctions committee, including the United States, took note of Iraq’s illicit oil sales to its neighbors but took no direct action to halt the sales or to act against the states or entities engaged in them. In addition, successive U.S. administrations issued annual waivers to Congress exempting Turkey and Jordan from unilateral U.S. sanctions for violating the UN sanctions against Iraq. According to U.S. government officials and oil industry experts, Iraq smuggled oil through several routes. Oil entered Syria by pipeline, crossed the borders of Jordan and Turkey by truck, and was smuggled through the Persian Gulf by ship. Syria received up to 200,000 barrels of Iraqi oil a day in violation of the sanctions. Oil smuggling also occurred through Iran. The Security Council authorized the Multinational Interception Force in the Persian Gulf, but, according to the Department of Defense, it interdicted only about 25 percent of the oil smuggled through the Gulf. The UN’s focus on screening military and dual-use items was largely effective in constraining Iraq’s ability to import these goods through the Oil for Food program. Each member of the Security Council’s Iraq sanctions committee had authority to approve, hold, or block any contract for goods exported to Iraq. The United States, as a member of the committee, devoted resources to conducting a review of each commodity contract. As a result, the United States was the Security Council member that most frequently placed holds on proposed sales to Iraq; as of May 2002, it was responsible for about 90 percent of the holds placed by the Security Council. U.S. technical experts assessed each item in a contract to determine its potential military application and whether the item was appropriate for the intended end user. These experts also examined the end user’s track record with such commodities. An estimated 60 U.S. 
government personnel within the Departments of State, Defense, Energy, and other agencies examined all proposed sales of items that could be used to assist the Iraqi military or develop weapons of mass destruction. In addition, the Department of the Treasury was responsible for issuing U.S. export licenses to Iraq. It compiled the results of the review by U.S. agencies under the UN approval process and obtained input from the Department of Commerce on whether a contract included any items found on a list of goods prohibited for export to Iraq for reasons of national security or nuclear, chemical, and biological weapons proliferation. In addition to screening items imported by Iraq, the UN conducted weapons inspections inside Iraq until 1998, when international inspectors were forced to withdraw. Sanctions also may have constrained Iraq’s purchases of conventional weapons, as we reported in 2002. In 2004, the Iraq Survey Group reported that sanctions had curbed Iraq’s ability to import weapons and finance its military, intelligence, and security forces. The UN’s neglect of Iraq’s illicit revenue streams from smuggling and kickbacks allowed a sanctioned regime to obtain unauthorized revenue and undermined the program’s goal of using Iraqi oil revenues to benefit the Iraqi people. According to a report by Department of Defense contract experts, in a typical contract pricing environment, fair and reasonable commodity prices are generally based on prevailing world market conditions or competitive bids among multiple suppliers. Ensuring a fair and reasonable price for goods can reduce the risk of overpricing and kickbacks. The Security Council’s Iraq sanctions committee and the Secretariat’s Office of the Iraq Program (OIP) were responsible for reviewing commodity contracts under the Oil for Food program, but neither entity conducted sufficient reviews of commodity pricing and value. As a result, Iraq was able to levy illicit contract commissions and kickbacks estimated at between about $1.5 billion and $3.5 billion. The UN did not adequately address other key internal control elements as it implemented the Oil for Food program: (1) establishing clear authorities, (2) identifying and addressing program risks, and (3) ensuring adequate monitoring and oversight. UN entities and contractors responsible for implementing and monitoring the program lacked clear lines of authority. For example, the Office of the Iraq Program lacked clear authority to reject commodity contracts based on pricing concerns. In addition, the UN contractor at Iraq’s border did not have the authority to evaluate imports for price and quality, and no provisions were made to stop imports that were not purchased through the Oil for Food program. Moreover, the UN did not assess emerging risks as the Oil for Food program expanded from a 6-month emergency measure to deliver food and medicine to a 6-year program that provided more than $31 billion to 24 economic sectors. Some monitoring activities constrained the ability of the regime to obtain illicit contract surcharges, but smuggling continued despite the presence of inspectors. Finally, the UN’s internal audit office examined some aspects of the Oil for Food program and identified hundreds of weaknesses and irregularities. However, it lacked the resources and independence to provide effective oversight of this ambitious and complex UN effort. A good internal control environment requires that the agency clearly define and delegate key areas of authority and responsibility. 
Both OIP, as an office in the UN Secretariat, and the Security Council's Iraq sanctions committee were responsible for the management and oversight of the Iraq sanctions and Oil for Food program. The Iraqi government, other UN agencies, UN member states, the interdiction force in the Persian Gulf, inspection contractors, and internal and external audit offices also played specific roles (see figure 1). However, no single entity was accountable for the program in its entirety. In 2005, the Independent Inquiry Committee reported that the Security Council had failed to clearly define the program's broad parameters, policies, and administrative responsibilities and that neither the Security Council nor the Secretariat had control over the entire program. The absence of clear lines of authority and responsibility was an important structural weakness that further undermined the management and oversight of the Oil for Food program. For example, OIP was to examine each commodity contract for price and value before submitting it to the sanctions committee for approval. However, the Independent Inquiry Committee found that OIP lacked clear authority to reject contracts on pricing grounds and did not hire customs experts with the requisite expertise to conduct thorough pricing evaluations. In addition, UN inspectors did not have the authority to inspect goods imported into Iraq to verify price and quality. These inspectors mostly verified the arrival of goods in the country for the purpose of authorizing payment to the contractor. The Secretariat's contract for inspecting imports at three entry points in Iraq required inspection agents to "authenticate" goods, but the agents' responsibilities fell short of a more rigorous review of the imports' price and quality. Under the Oil for Food program, inspection agents compared appropriate documentation, including UN approval letters, with the commodities arriving in Iraq; visually inspected about 7 to 10 percent of the goods; and tested food items to ensure that they were "fit for human consumption." However, inspection agents were not required to (1) verify that food items were of the quality contracted, (2) assess the value of goods shipped, (3) inspect goods that were not voluntarily presented by transporters, or (4) select the items and suppliers or negotiate contracts. In addition, no provisions were made to interdict prohibited goods arriving at the border. According to Cotecna, the inspections contractor from 1999 to 2004, "authentication" is not a standard customs term or function. The UN created the term for the Oil for Food program, and the function did not include traditional customs inspection activities, such as price verification and quality inspection. In anticipation of an oil for food program, the UN had selected Cotecna in 1992 for a program that was never implemented. Under that proposal, Cotecna would have verified fair pricing and inspected the quality of the items to help ensure that they conformed to contract requirements. Finally, limited authority for contractors overseeing oil exports facilitated Iraq's ability to obtain illicit revenues from smuggling, estimated at $5.7 billion to $8.4 billion over the course of the Oil for Food program. In 1996, the Secretariat contracted with Saybolt to oversee the export of oil from Iraq through selected export points. The inspectors were to monitor the amount of oil leaving Iraq under the Oil for Food program at these locations and to stop shipments if they found irregularities.
The inspectors worked at two locations—the Ceyhan-Zakho pipeline between Iraq and Turkey and the Mina al-Bakr loading platform in southern Iraq. In 2005, a Saybolt official testified that the company's mandate did not include monitoring oil exports leaving Iraq from other locations or acting as a police force. As a result, the contractors did not monitor oil that was exported outside the Oil for Food program. Risk assessments can identify and manage the internal and external challenges affecting a program's outcomes and accountability, including those risks that emerge as conditions change. The Oil for Food program expanded rapidly as it evolved from an emergency 6-month measure to meet humanitarian needs into a 6-year program that delivered about $31 billion in commodities and services in 24 sectors. Beginning in 1998, when the international community was not satisfied with Iraq's compliance with weapons inspections, the Security Council continued the sanctions and expanded its initial emphasis on food and medicines to include infrastructure rehabilitation and activities in 14 sectors. These sectors included food, food handling, health, nutrition, electricity, agriculture and irrigation, education, transport and telecommunications, water and sanitation, housing, settlement rehabilitation for internally displaced persons, demining, a special allocation for vulnerable groups, and oil industry spare parts and equipment. In June 2002, the Iraqi government introduced another 10 sectors: construction, industry, labor and social affairs, youth and sports, information, culture, religious affairs, justice, finance, and the Central Bank of Iraq. The Security Council and UN Secretariat did not assess the risks posed by this expansion, particularly given that they had allowed the Iraqi government to tender and negotiate its contracts. The UN Office of Internal Oversight Services (OIOS) was the only entity that attempted to assess the enormous risks in the Oil for Food program, but OIP blocked that attempt. In August 2000, the Under Secretary General for OIOS proposed an overall risk assessment to the Deputy Secretary General to improve the program by identifying the factors that could prevent management from fulfilling the program's objectives. The proposal noted that this assessment could be a model for other UN departments and activities. OIOS considered the Oil for Food program a high-risk activity and decided to focus on an assessment of OIP's Program Management Division. This unit was responsible for providing policy and management advice to OIP's executive director and for supporting OIP's field implementation and observation duties. In May 2001, OIP's executive director refused to fund the risk assessment, citing financial reasons and uncertainty over the program's future. In July 2003, OIOS issued an assessment of OIP's Program Analysis, Monitoring, and Support Division—formerly the Program Management Division—that identified a number of organizational, management, and administrative problems, including poor communication and coordination, unclear reporting lines between OIP headquarters units and the field, and the lack of approved work plans. However, by this date, the UN was preparing for the November 2003 transfer of the program to the Coalition Provisional Authority in Iraq, and the report was of limited usefulness for addressing high-risk areas.
Comprehensive and timely risk assessments might have identified the internal control weaknesses—such as inadequate contract pricing reviews—that facilitated Iraq's ability to levy illicit contract revenues. These assessments also might have identified the structural management weaknesses that led to ineffective communication and coordination within the program. Ongoing monitoring and specific control activities should meet the management and oversight needs of the agency or program. However, during the Oil for Food program, the lack of functioning oil meters enabled the Iraqi government to smuggle oil undetected by inspectors. A Saybolt employee testified that the company notified UN officials of the problems posed by the lack of functioning meters at the beginning of the program. He also testified that the lack of metering equipment allowed the two "topping off" incidents involving the oil tanker Essex, in which the tanker loaded additional oil after the inspectors had certified the loading and left the vessel. In November 2001, a Saybolt representative noted that Iraq's distribution plans for that period provided for the installation of a meter at the Mina al-Bakr port. A U.S. official called for OIP to develop a plan to prevent unauthorized oil sales that would include installing a meter at the port. However, Iraq did not tender a contract for the meter. As of March 2006, the Iraqi government had not yet installed oil meters at Mina al-Bakr. In addition, the sanctions committee relied on the advice of independent oil overseers to approve oil sales contracts. The overseers reviewed Iraq's oil sales contracts to determine compliance with program requirements and whether the prices that Iraq negotiated for its oil were fair and reflected market pricing. However, the inadequate number of overseers monitoring Iraq's oil pricing over a 14-month period may have been a factor in Iraq's ability to levy illicit surcharges on oil contracts. From June 1999 to August 2000, only one oil overseer was responsible for monitoring billions of dollars in Iraq's oil transactions, contrary to the sanctions committee's requirement for at least four overseers. Four overseers were hired at the beginning of the program, but three had resigned by June 1999. Political disputes among sanctions committee members prevented the committee from agreeing on replacements. According to the Independent Inquiry Committee, the sanctions committee demonstrated weak program oversight in its inability to fill the vacant positions. In contrast, in October 2001, the Security Council's sanctions committee imposed a positive control activity—retroactive oil pricing—to prevent Iraqi officials from adding illegal oil surcharges to contracts. In November 2000, UN oil overseers reported that Iraq's oil prices were low and did not reflect fair market value. The overseers also reported in December 2000 that Iraq had asked oil purchasers to pay surcharges. In early 2001, the United States informed the sanctions committee about its concerns regarding allegations that Iraqi government officials were receiving illegal surcharges on oil contracts. The United States delayed approval of oil prices until after the Iraqi government had signed contracts with oil purchasers, so that purchasers did not know the price they would pay until delivery. Setting the price at the time the oil was delivered helped to ensure a fair market price. This practice, known as retroactive pricing, curbed the ability of the Iraqi government to levy illicit surcharges on its oil sales contracts.
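To see why retroactive pricing curbed surcharges, consider a minimal sketch, written here in Python. The per-barrel prices below are purely hypothetical assumptions for illustration, not figures from the program. When the official selling price was fixed before delivery, it could sit below the oil's market value on the day of delivery, leaving a per-barrel margin that purchasers could return to the regime as a surcharge; setting the official price at delivery drives that margin to zero.

# Illustrative sketch of the surcharge margin under prospective versus
# retroactive pricing. All dollar figures are hypothetical.

def surcharge_margin(official_price, market_price_at_delivery):
    # Per-barrel gap between what the oil is worth on delivery and the
    # official price paid into the UN escrow account; a positive gap is
    # room for an illicit side payment to the seller.
    return max(0.0, market_price_at_delivery - official_price)

market_at_delivery = 25.00          # hypothetical market value per barrel

prospective_official = 23.50        # price fixed weeks earlier, set low
print(surcharge_margin(prospective_official, market_at_delivery))   # 1.50

retroactive_official = market_at_delivery   # price set at delivery
print(surcharge_margin(retroactive_official, market_at_delivery))   # 0.0

Multiplied across hundreds of millions of barrels, even a small per-barrel margin accumulates quickly, which is consistent with the surcharge revenue estimates discussed next.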
Prior to retroactive pricing, estimates of Iraq's illicit revenues from surcharges on exported oil ranged from about $230 million to almost $900 million. Ongoing monitoring of internal control should include activities to help ensure that the findings of audits and other evaluations are promptly resolved. Although OIOS conducted dozens of audits of the Oil for Food program, the office did not review key aspects of the program and had insufficient staff. OIOS did not review whether OIP was adequately monitoring and coordinating the Oil for Food program, including OIP's role in assessing commodity pricing. OIOS also did not examine OIP's oversight of the commodity contracts for central and southern Iraq, which accounted for 59 percent of Oil for Food proceeds. According to the Independent Inquiry Committee, the internal auditors believed that they did not have the authority to audit humanitarian contracts because the sanctions committee was responsible for contract approval. OIP management mostly supported OIOS audits for program activities in northern Iraq managed by other UN agencies; however, these northern programs constituted only 13 percent of the Oil for Food program. Because OIOS did not review commodity contracts, it was difficult to quantify the extent to which the Iraqi people received the humanitarian assistance funded by their government's oil sales. The Independent Inquiry Committee noted that allowing the heads of programs to control the funding of internal audit activities led to the exclusion of high-risk areas from internal audit examination. We also found that UN funding arrangements constrain OIOS's ability to operate independently, as mandated by the General Assembly and as required by the international auditing standards to which OIOS subscribes. The UN must support budgetary independence for the internal auditors. In addition, the number of OIOS staff assigned to the Oil for Food program was low: OIOS had only 2 to 6 auditors assigned to cover the program. The UN Board of Auditors indicated that the UN needed 12 auditors for every $1 billion in expenditures, and the Independent Inquiry Committee concluded that the Oil for Food program should have had more than 160 auditors at its height in 2000. However, the committee found no instances in which OIOS communicated broad concerns about insufficient staff to UN management. OIOS also encountered problems in its efforts to widen the distribution of its reporting beyond the head of the agency audited. In August 2000, OIOS proposed sending its reports to the Security Council. However, the OIP executive director opposed this proposal, stating that it would compromise the division of responsibility between internal and external audit. The UN Deputy Secretary General denied the request, and OIOS subsequently abandoned its efforts to report directly to the Security Council. Timely reporting on audit findings would have assisted the Security Council in its oversight of Iraq sanctions and the Oil for Food program. Our findings on UN management of Iraq sanctions and the Oil for Food program reveal a number of lessons that can apply to future sanctions and should be considered during the ongoing debate on UN reform. These lessons demonstrate the importance of establishing a good control environment at the outset. In addition, fundamental internal control activities must be applied throughout the life of UN programs.
Specifically:

- When establishing the program, assess the roles and authorities of the sanctioned country. If political pressures and emergency conditions dictate significant authority and responsibilities for the sanctioned country, assess the risks posed by these authorities and take steps to mitigate potential problems. A comprehensive risk assessment following the decision to allow Iraqi control over contracting and monitoring might have revealed the need for more rigorous review of the prices the regime charged and the quality of the goods it contracted for, which could have prevented or lessened the opportunity for illicit charges.

- Consider the impact that the loss of trade might have on surrounding countries. For example, Jordan, a U.S. ally, was allowed to continue buying Iraqi oil outside the Oil for Food program, which added to the revenue that Iraq could obtain beyond UN control. Alternative provisions for obtaining discounted oil might have prevented this trade.

- Ensure that monitoring and oversight equally address all program goals. Although the UN focus on screening military and dual-use items was largely effective in constraining Iraq's ability to import these goods through the Oil for Food program, the UN's neglect of Iraq's illicit revenue streams from smuggling and kickbacks undermined the program's goal of using Iraqi oil revenues to benefit the Iraqi people.

- Establish clear authorities for key management, oversight, and monitoring activities. The Oil for Food program had unclear lines of authority for rejecting contracts based on price and value concerns and for inspecting imported goods and exported oil. These important structural weaknesses allowed the sanctioned Iraqi regime significant control over program activities.

- As programs and funding expand, continuously assess the risks posed by this expansion and take steps to ensure that resources are safeguarded. The UN did not assess risks as the Oil for Food program grew in size and complexity, particularly given that it had delegated responsibility for the contracting process to Iraq. Timely risk assessments might have identified the internal control weaknesses that facilitated Iraq's ability to levy illicit contract revenues and thereby undermine the UN's goal of using Iraq's oil proceeds for humanitarian assistance to the Iraqi people.

- Assess the role of internal audit and evaluation units and take steps to ensure that these entities have the resources and independence needed for effective oversight. Although the UN's internal audit office audited some aspects of the Oil for Food program and identified hundreds of irregularities, it lacked the resources and independence to provide effective oversight of this costly and complex UN effort.

In our report on the Oil for Food program's internal controls, we recommend that the Secretary of State and the Permanent Representative of the United States to the UN work with other member states to encourage the Secretary General to (1) ensure that UN programs with considerable financial risks establish, apply, and enforce the principles of internationally accepted internal control standards, with particular attention to comprehensive and timely risk assessments, and (2) strengthen internal controls throughout the UN system, based in part on the lessons learned from the Oil for Food program. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have.
For questions regarding this testimony, please call Joseph Christoff at (202) 512-8979. Other key contributors to this statement were Lynn Cothern, Jeanette Espinola, Tetsuo Miyabara, Valérie Nowak, and Audrey Solis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the 1960s, the federal government has provided resources to support the education of students with limited English proficiency. Federal funding has supported school districts, colleges and universities, and research centers to assist students in attaining English proficiency and in meeting academic standards. In addition to federal funding, state and local agencies provide significant funding to support the education of these students. The evolving educational standards movement and NCLBA have reshaped how the federal government views and supports programs for elementary and secondary school students whose native language is not English. Prior to Title III of NCLBA, federal funding provided under Title VII of the Improving America's Schools Act supported services for students with limited English proficiency. Both Title III and Title VII were designed to target students with limited English proficiency, including immigrant children and youth, supporting these students in attaining English proficiency and meeting the same academic content standards all students are expected to meet. However, Title III differs from Title VII in terms of funding methods and requirements for academic standards and English language proficiency standards and assessments. In particular, Title III provides for formula-based grants, whereas Title VII provided funds primarily through discretionary grants. Title III also requires states to have English language proficiency standards that are aligned with the state academic content standards, in addition to annually assessing the English language proficiency of students having limited English proficiency. GAO has reported on the academic achievement of these students and the validity and reliability of assessments used to measure their performance. We recommended that Education undertake a variety of activities to help states better measure the progress of these students under NCLBA. Title VII authorized various discretionary grants to eligible states, school districts, institutions of higher education, or community-based organizations to, among other things, assist with the development of instructional programs for students with limited English proficiency. Under Title VII, colleges and universities also could apply for grants to provide professional development programs on instructional and assessment methodologies and strategies, as well as resources specific to limited English proficient students, for teachers and other staff providing services to these students. Title VII also required that funds be set aside for the establishment and operation of a national clearinghouse for information on programs for students with limited English proficiency. In addition, Title VII offered a formula grant program to support enhanced instructional opportunities in school districts that experienced unexpectedly large increases in their immigrant student population. States with districts that had large numbers or percentages of immigrant students were eligible to receive funds under this program. In contrast to Title VII, Title III of NCLBA requires Education to allocate funds to all 50 states, the District of Columbia, and Puerto Rico based on a formula incorporating the population of children with limited English proficiency and the population of immigrant children and youth in each state (relative to national counts of these populations).
Specifically, funds are to be distributed to states as follows:

- 80 percent based on the population of children with limited English proficiency and
- 20 percent based on the population of recently immigrated children and youth,

in each case relative to national counts of these populations. NCLBA provides that Education is to determine the number of children with limited English proficiency and immigrant children and youth using the more accurate of two data sources: the number of students with limited English proficiency who are assessed under NCLBA for English proficiency, or data from ACS, which is based on responses to a series of relevant questions. Education allocates these funds after making certain reservations. For example, each fiscal year Education must reserve 0.5 percent or $5 million, whichever is greater, for providing grants to schools and other eligible entities that support language instruction educational projects for Native American children (including Alaska Native children) with limited English proficiency. Also, a reservation of 6.5 percent is made to support activities including the National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs and to provide grants for professional development to improve educational services for children with limited English proficiency. Institutions of higher education in consortia with school districts or state educational agencies may apply for these discretionary grants. Once states receive Title III funds from Education, they are allowed to set aside up to 5 percent of these funds for certain state-level activities, including administration. In addition, Title III requires each state to reserve up to 15 percent of its formula grant to award subgrants to its school districts with significant increases in school enrollment of immigrant children and youth, before distributing the remainder across school districts in proportion to the number of students with limited English proficiency. (A simplified sketch of these allocation mechanics appears at the end of this section.) School districts are required to use Title III funds to provide scientifically based language instruction programs for students with limited English proficiency and to provide professional development to teachers or other personnel. School districts may also use Title III funds for other purposes, including to develop and implement language instruction programs for such students; to upgrade program objectives and instruction strategies, curricula, educational software, and assessment procedures for such students; to provide tutorials or intensified instruction for these students; to provide community participation programs, family literacy services, and parent outreach for these students and their families; to acquire educational technology or instructional materials; and to provide access to electronic networks for materials, training, and communication. School districts that receive funds because they have experienced substantial increases in immigrant children and youth are to use these funds for activities that provide enhanced instructional opportunities for these students. Such activities may include family literacy programs designed to assist parents in becoming active participants in the education of their children; services such as tutoring, mentoring, and academic or career counseling for these students; support for teacher aides trained specifically for working with these students; the acquisition of instructional materials or software; and programs designed to introduce these students to the educational system.
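To make the allocation mechanics just described concrete, the following is a minimal sketch in Python. The appropriation figure and the three state counts are invented for illustration; the sketch also simplifies by taking both reservations off the top of the appropriation and by ignoring any minimum-grant or hold-harmless adjustments, details this discussion does not specify.

def title_iii_state_allocations(appropriation, lep_counts, imm_counts):
    # Reservations taken off the top (simplifying assumption about the base).
    native_american = max(0.005 * appropriation, 5_000_000)   # greater of 0.5 percent or $5 million
    national_activities = 0.065 * appropriation               # clearinghouse and professional development
    formula_pool = appropriation - native_american - national_activities

    # 80/20 split across states by each population's share of the national count.
    lep_total = sum(lep_counts.values())
    imm_total = sum(imm_counts.values())
    return {
        state: 0.80 * formula_pool * lep_counts[state] / lep_total
               + 0.20 * formula_pool * imm_counts[state] / imm_total
        for state in lep_counts
    }

# Invented counts for three hypothetical states.
lep = {"State A": 400_000, "State B": 150_000, "State C": 50_000}
imm = {"State A": 60_000, "State B": 30_000, "State C": 10_000}
grants = title_iii_state_allocations(650_000_000, lep, imm)

# Each state could then set aside up to 5 percent of its grant for state-level
# activities and reserve up to 15 percent for immigrant-increase subgrants.

With these invented inputs, roughly 93 percent of the appropriation flows to states through the formula, which is broadly in line with the fiscal year 2006 proportions reported later in this statement.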
An interagency committee sponsored by the Office of Management and Budget, whose membership includes Education, determines the questions to be included on the ACS and the decennial census. Education's National Center for Education Statistics represented the department in the determination of the questions used by Census. The current language questions were developed for the 1980 census to obtain information about current language use and limited English proficiency that was needed as a result of legislation such as the Civil Rights Act of 1964, the Bilingual Education Act, and the Voting Rights Act. These questions remain in their original form and have not been modified since the passage of NCLBA. The other data source specified by NCLBA as a potential basis for the distribution of Title III funding—the number of students with limited English proficiency who are assessed annually for proficiency in English—would generally come from the states. States report the number of students assessed to Education in their Consolidated State Performance Reports. States are to report the number of these students served by Title III who are assessed annually for proficiency in English in the state Biennial Evaluation Reports to Education. Education has responsibility for general oversight under Title III of NCLBA, including providing guidance and technical assistance, monitoring, and reporting information to Congress on students with limited English proficiency based on data collected in the Consolidated State Performance Reports and Biennial Evaluation Reports. Education reviews state plans, which all states have submitted. These plans, as required by Title III, outline the process that the state will use in making subgrants to eligible entities and provide evidence that districts conduct annual assessments for English proficiency that meet the law's requirements, along with other information. By June 2003, Education had reviewed and approved all state plans; Education has since reviewed and approved many plan amendments submitted by states. Education used ACS data to distribute Title III funds across states, although measurement issues with ACS and state-reported data could affect the amount of funding that each state receives. Education has not developed a methodology to determine the more accurate of the two allowable data sources once state data are complete. The two data sources differ in what they measure and how that measurement occurs. These differences have implications for funding levels—some states could receive more funding while others could receive less, depending on which data source Education uses. Education based the distribution of Title III funding across states on Census' ACS data for fiscal years 2005 and 2006. In both years, Education used these data to determine the number of children and youth with limited English proficiency as well as the number of children and youth who were recent immigrants. Prior to fiscal year 2005, Education used Census 2000 data for the number of children and youth with limited English proficiency and relied on state-reported data for the number of recent immigrants. Education officials determined that the ACS data were more accurate than state data—primarily because the state data provided in the Consolidated State Performance Reports on the number of students with limited English proficiency who were assessed for English proficiency across three dimensions (reading, writing, and oral) were incomplete.
Education officials explained that not all states provided these data for school year 2004-05, and some provided data that included only partial counts of students. For example, according to Education, some states, such as California and Texas, did not assess all students with limited English proficiency. Education officials told us that the lack of complete state data was due, in part, to the time needed to establish academic standards, align English language proficiency assessments to those standards, and collect the related data. Education officials also explained that some states provided inconsistent data on the number of students with limited English proficiency who were assessed for English proficiency in the Consolidated State Performance Reports because instructions for providing this information did not include definitions of the data to be collected. Similarly, we found that these instructions could be interpreted to ask for different data elements. For example, it was unclear whether states should provide the number of students screened for English proficiency, the number of students who were already identified as limited English proficient and were then assessed for their proficiency, or a combination of the two numbers. Further, it was not clear whether states were to provide an unduplicated count; because some states use more than one assessment to evaluate a student's mastery of the various dimensions of English proficiency (reading, writing, and oral), such students may be reported more than once. As a result, some states included duplicate counts of students (the sketch below illustrates the difference this can make), and in other states, these data included other student counts (based on screening of new students rather than assessments of already identified students, as specified in the law). In September 2006, Education officials told us that they plan to modify the instructions for providing these data in the Consolidated State Performance Report for school year 2006-07 data, which are to be submitted in December 2007. However, the officials did not have a copy of a plan or proposed modifications. During our engagement, Education was in the process of reviewing state data and providing feedback to the states based on both school year 2003-04 and 2004-05 Consolidated State Performance Report data. Education performed this effort in part to improve the quality of state data entered into Education's national data system. This effort included comparing recent data to data provided in previous years and incorporating data edits and checks to guide state officials as they entered relevant data electronically. Education officials told us that they expect this review, along with feedback to the states, to result in improved data for school year 2005-06 and beyond. They also told us that they believe their efforts to address state data quality, including clarifying Consolidated State Performance Report instructions and reviewing state-provided data, will result in improved information on the number of students with limited English proficiency who were assessed for English proficiency. While Education officials expected that their efforts would improve the quality of the data, they told us that they had not established criteria or a methodology to determine the relative accuracy of the two data sources. Education officials stated that as the state data improve and become complete, complex analysis will be needed to determine the relative accuracy of these data and the ACS data.
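The duplication problem just described can be seen in miniature in the following Python sketch. The student records are invented; the point is the difference between counting assessment administrations, which counts a student once for each dimension tested, and counting distinct students, which yields the unduplicated figure a funding formula would presumably require.

# Hypothetical assessment records: (student id, dimension assessed).
records = [
    ("s001", "reading"), ("s001", "writing"), ("s001", "oral"),
    ("s002", "reading"), ("s002", "oral"),
    ("s003", "reading"),
]

administrations = len(records)                        # 6: one per assessment given
distinct_students = len({sid for sid, _ in records})  # 3: unduplicated count

print(administrations, distinct_students)

A state reporting the first number would double- or triple-count students assessed on multiple dimensions, which is one way the inconsistencies described above could arise.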
The two allowable sources of data measure fundamentally different populations. The state data specified in NCLBA are to represent those students with limited English proficiency who are assessed annually for proficiency. In contrast, the ACS data that Education uses to represent students with limited English proficiency are based on self-reported survey responses to particular questions from a sample of the population. Table 1 compares different characteristics of these data, including what they measure and how. NCLBA requires that all students with limited English proficiency be assessed annually for proficiency in English. However, states have different methods of identifying which students have limited English proficiency (see fig. 1). These varied methods, along with any differences in interpreting student performance on such screenings, could result in a lack of uniformity in the population identified as having limited English proficiency. States generally employ home language surveys—questionnaires asking what languages are spoken at home—to determine which students should be screened for English proficiency. However, beyond the home language survey, methodologies for determining a student's English proficiency vary. States use different screening instruments, and even within a state, there could be variation in the instruments used. In addition, some states and school districts may implement other methods—such as subjective teacher observation reports—in determining a student's language proficiency. Regardless of how states determine which students have limited English proficiency and need language services, they are required to offer services and assess the progress of all such students. The ACS data used by Education to represent the number of students with limited English proficiency are based on a sample of the population. In particular, these data represent the number of persons ages 5 to 21 who speak a language other than English in the home and who report speaking English less than "very well" (see fig. 2). The responses to the question regarding how well members of the respondent's household speak English are subjective. The Census Bureau has found some inconsistency with these responses in its re-interview process, which is a data quality check. It is not known how accurately the ACS data reflect the population of students with limited English proficiency. According to Census officials, no research exists on the linkage between the responses to the ACS English ability questions and the identification of students with limited English proficiency. Because ACS data are used as the basis of Title III funding distribution, it is critical to understand how accurately these data represent the population and whether they do so uniformly across states. In addition, ACS data for 2003 and 2004 show some large fluctuations in the number of respondents who speak English less than "very well." In part, these fluctuations can be attributed to the partial implementation of the ACS in these 2 years. The full implementation of the ACS occurred in 2005, and the data on English ability were not yet available at the time of our review. Our analysis of the 2003 and 2004 ACS data that Education used as the basis of Title III funding showed that 13 states had increases of 10 percent or more in this population, while 20 states and the District of Columbia had decreases of 10 percent or more from the prior year.
Further, seven of the states that showed decreases of 10 percent or more in the ACS 2003-04 data representing students with limited English proficiency also showed an increase in the number of recent immigrants for this period. Many of these immigrants were likely to have limited English proficiency. For example, according to ACS data that Education uses to represent students with limited English proficiency, Rhode Island had a decrease of 33.5 percent in this population at the same time that it had an increase (about 33 percent) in the number of recent immigrants (ages 3 to 21). Education used the most current ACS data available to distribute Title III funding across the states; consequently, the fluctuations in the ACS data were reflected in fluctuations in funding. Insofar as these data reflect population changes, such fluctuations are to be expected. However, if the fluctuations were due to errors resulting from the sample size for the 2003 and 2004 ACS data, then they may have resulted in some states receiving a greater (or lesser) proportion of the funds than their population of students with limited English proficiency and recently immigrated children and youth would warrant. Table 3 shows Education's distribution of Title III funds across states for fiscal years 2005 and 2006. In our 12 study states, we found differences between the state-reported number of students identified as having limited English proficiency and the ACS data that Education uses to represent this population of students (see fig. 3). In 6 states, the 2004 ACS number was greater than the state's count (for school year 2004-05), while in the other 6 states the ACS number was less than the corresponding state count. For example, while California reported having about 1.6 million students with limited English proficiency in the 2004-05 school year, the ACS estimate of the population of persons ages 5 to 21 who speak a language other than English in the home and speak English less than "very well" was less than 1.1 million. This represents a difference of almost 50 percent. The difference in New York for that school year was also large: New York reported about 204,000 students with limited English proficiency, while the ACS number used by Education was about 332,000, a difference of almost 40 percent (see fig. 3). Education used ACS data for the number of immigrant children and youth for fiscal years 2005 and 2006; however, for fiscal years 2002-2004, Education relied on state-reported counts of the number of immigrant children and youth. With regard to the data states collect on the number of children and youth who are recent immigrants, state officials expressed a lack of confidence in these data. State officials in some of the 12 study states told us that these data were not very reliable because school and school district officials did not ask about immigration status directly. Some state and school district officials told us that in order to determine whether a student should be classified as a recent immigrant, they relied on information such as place of birth and the student's date of entry into the school system. Officials in one state told us that in the absence of prior school documentation, they made the assumption that if a student was born outside the U.S. and entered the state's school system within the last 3 years, then the student was a recent immigrant.
See table 4 for more information about the characteristics of state-collected data and ACS data pertaining to children and youth who are recent immigrants. The ACS data on the number of children and youth who are recent immigrants represent the number of foreign-born persons ages 3 to 21 who came to the United States within the 3 years prior to the survey. Like the ACS data that Education used to represent students with limited English proficiency, these data are also based on self-reports. However, the ACS responses are more objective (e.g., the date of entry into the United States) and therefore may be more consistent than the responses to the English ability questions. Education's choice to use one data set over the other has implications for the amount of funding states receive because the data sources specified in NCLBA measure different populations in different ways. We simulated the distribution of funds across our 12 study states, using ACS data and data representing the number of students with limited English proficiency reported to us by state officials. We used the number of students with limited English proficiency identified by states, rather than the number of these students assessed annually for their English proficiency, because state-reported data on the number of students assessed for school years 2003-04 or 2004-05 were not available for all 12 study states. Throughout the simulation, we used ACS data representing the number of immigrant children and youth. Based on our simulation, we found that in fiscal years 2005 and 2006, 5 of the 12 study states—Arizona, California, Colorado, Nevada, and Washington—would have received more funding, and the other 7 study states would have received less (see figs. 4 and 5; a simplified sketch of this comparison appears at the end of this discussion). Federal funds for students with limited English proficiency and immigrant children and youth increased significantly from fiscal year 2001—the year prior to the enactment of NCLBA—to fiscal year 2006. In addition to the increase in funding to the states, many more school districts received funds under the Title III formula grant program. Federal funding for students with limited English proficiency and immigrant children and youth increased significantly from fiscal year 2001 (the year prior to the enactment of NCLBA) to fiscal year 2002, when Congress first authorized Education to distribute funds to states under Title III. In fiscal year 2001, states, schools, school districts, and universities received almost all of the $446 million appropriated for Title VII to educate students with limited English proficiency, including immigrant students. Congress appropriated over $650 million for this purpose in fiscal year 2002. Annual appropriations remained between $650 million and $685 million in fiscal years 2003-06 (see fig. 6). Under NCLBA, 37 states received an increase in funding to support students with limited English proficiency and immigrant children and youth in fiscal year 2006, compared to funding in fiscal year 2001 under Title VII. In fiscal year 2006, Education provided about 93 percent (more than $600 million) of the funds supporting students with limited English proficiency and immigrant children and youth to states on the basis of the Title III formula for funding distribution. The remainder funded other Title III programs, including professional development grants (5.4 percent) and Native American and Alaskan Native grants (1.2 percent).
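As a rough illustration of the simulation just described, the Python sketch below allocates a formula pool under the 80/20 weighting using two alternative sets of counts of students with limited English proficiency, holding the immigrant counts constant. All counts and the pool figure are invented; the actual simulation used ACS figures and state-reported counts for the 12 study states.

# Hypothetical comparison of allocations under the two allowable data sources.
# All counts are invented; a real comparison would use actual ACS and state figures.

def shares(counts):
    # Each state's share of the national total.
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

def allocate(pool, lep_counts, imm_counts):
    # 80/20 weighting of limited-English-proficiency and immigrant shares.
    lep_share, imm_share = shares(lep_counts), shares(imm_counts)
    return {s: pool * (0.80 * lep_share[s] + 0.20 * imm_share[s]) for s in lep_counts}

imm = {"A": 60_000, "B": 30_000, "C": 10_000}            # immigrant counts (held constant)
acs_lep = {"A": 300_000, "B": 200_000, "C": 100_000}     # ACS-based estimates
state_lep = {"A": 420_000, "B": 180_000, "C": 90_000}    # state-identified counts

pool = 600_000_000
acs_alloc = allocate(pool, acs_lep, imm)
state_alloc = allocate(pool, state_lep, imm)
for s in acs_alloc:
    # Positive values: the state would gain if state-reported data were used.
    print(s, round(state_alloc[s] - acs_alloc[s]))

Because the allocation is zero-sum over a fixed pool, any state whose share rises under one data source is offset by losses elsewhere, which is why the simulation found some study states gaining and others losing.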
In fiscal year 2001, Education distributed 41.2 percent of the $432 million of Title VII funds provided to states in the form of discretionary grants to schools, school districts, and state education agencies to support the education of students with limited English proficiency, and 22.5 percent for professional development of teachers and others associated with the education of these students. Education allocated 34.4 percent to states to support the education of immigrant students under the Emergency Immigrant program and the remaining 1.9 percent to state educational agencies for program administration and to provide technical assistance to school districts. (See fig. 7 for the distribution of Title VII funds in total and Title III funds by program for fiscal years 2001-06.) The percentage of grant funding specified for professional development decreased from 22.5 percent under Title VII in fiscal year 2001 to about 5.4 percent under Title III in fiscal year 2006. However, Education officials told us that states and school districts are required to use a portion of the Title III formula grant funding they receive to provide professional development for teachers and other staff, even though the level of funds is not specified in the law. As a result, officials believe that more funds are being spent for professional development under Title III than under Title VII. The percentage of funding provided for programs specifically for immigrant students was higher under Title VII than under Title III. Under Title VII, Education distributed about 34 percent of fiscal year 2001 funding to states based on the number of immigrant students in the state. In contrast, 20 percent of the Title III formula grant funds is distributed to states on the basis of their relative number of immigrant students. Upon receiving Title III grants, states are to reserve up to 15 percent of their formula grants to award subgrants to school districts within the state with significant increases in school enrollment of immigrant children and youth. Officials in our study states told us that the percentage of funds they reserved specifically for providing enhanced instructional opportunities for immigrant children and youth ranged from 0 to 15 percent and varied in some states from year to year. For example, one state's officials noted that the percentage varied from 8 percent in fiscal year 2003 to none in fiscal year 2005. Officials in our study states generally explained that they distributed Title III funds reserved for this purpose to school districts with a significant increase in immigrant students over the previous 2 years. For example, another state official stated that to receive these funds, school districts must have an increase of either 3 percent or 50 students, whichever is less, from the average of the 2 previous years and must have a minimum of 10 immigrant students. The number of school districts receiving federal funding for students with limited English proficiency has increased under Title III compared with Title VII. For example, in three of our study states (California, Texas, and Illinois), more than 1,900 school districts received funding for students with limited English proficiency under Title III in school year 2003-04, compared with about 500 school districts (including districts in which schools were awarded Title VII grants directly) receiving such funding under Title VII. Further, within districts that received Title VII funds, fewer schools may have actually benefited from these funds.
For example, officials in two districts noted that under Title III, all schools in the districts received some funds to support their students with limited English proficiency. In contrast, these officials told us that prior to NCLBA, Title VII discretionary grants were targeted to some schools in their districts while other schools with students with limited English proficiency received no Title VII funds. Education officials estimated that Title III funds are now being used to support 80 percent of the students with limited English proficiency in schools. States and school districts reported using Title III funds to support a variety of programs and activities for students with limited English proficiency, ranging from various types of language instruction programs to professional development. With regard to challenges in implementing effective programs, officials we interviewed in 5 study states and 8 school districts reported difficulty recruiting qualified staff. Nationwide, states and school districts reported using Title III funds to support a variety of programs and activities, including language instruction, activities to support immigrant children and youth, professional development, and technical assistance. For example, all 50 states, the District of Columbia, and Puerto Rico reported that school districts receiving Title III funds implemented various types of language instruction programs, including bilingual and English as a second language (ESL) programs, according to the 2002-04 state Biennial Evaluation Reports to Education. Specifically, all states, the District of Columbia, and Puerto Rico reported using ESL programs, which typically involve little or no use of the native language, such as sheltered English instruction and pull-out ESL. In addition, all but 12 states also reported using bilingual programs, which may provide instruction in two languages, such as dual language programs that are designed to serve both English-proficient and limited English proficient students concurrently (see table 5). (See app. II for more information regarding language instruction programs that states, the District of Columbia, and Puerto Rico reported using.) Forty-six states, the District of Columbia, and Puerto Rico reported that school districts used Title III funds designated to support activities for immigrant children and youth for programs such as parent outreach, tutorials, mentoring, and identifying and acquiring instructional materials. For example, officials in one state noted that many school districts used these funds to expand activities designed for all students with limited English proficiency, while other districts used them to meet the unique needs of immigrant students not addressed through other programs, such as providing counseling for traumatized refugee students. Officials in another state noted that school districts commonly used these funds to operate newcomer centers that provided educational and other services to recent immigrants and their parents. Funds were also used to provide ESL classes before and after school for recent immigrant students, as well as ESL classes, literacy classes, and computer classes for their parents. States also reported that Title III funds supported professional development activities.
Specifically, all states, the District of Columbia, and Puerto Rico reported that school districts used Title III funds to conduct professional development activities for teachers or other personnel, such as workshops or seminars on the administration and interpretation of English language proficiency assessments or on various teaching strategies for students with limited English proficiency. In addition, 40 states reported reserving a portion of state-level funds to provide professional development to assist teachers and other personnel in meeting state and local certification, endorsement, and licensing requirements for teaching these students. For example, one state reported offering a seminar once per year that provided professional development hours that participants could use to meet state certification or endorsement requirements, and another state noted that it reimbursed teachers for tuition for courses that led to ESL endorsement. In addition, 49 states, the District of Columbia, and Puerto Rico reported reserving state-level funds for other activities, including providing technical assistance, planning, and administration (table 6). All 12 study states reported reserving state-level funds. While all study states reported reserving state-level funds for administration—including salaries for Title III staff—as well as for professional development and technical assistance, the majority of study states also reserved these funds for other activities, such as developing guidance on English language proficiency standards. Similarly, in interviews with officials in 11 school districts and schools we visited in 6 of our study states, we found that Title III funds were used to support a variety of programs and activities for these students. Most districts we visited reported using Title III funds for the instructional program and materials as well as for professional development and assessments. In addition, districts used these funds to provide services, such as after-school tutoring or summer school programs, and for parent outreach activities, such as adult ESL classes or workshops for parents on helping their children succeed in school. For example, in one school district, we visited a high school that used Title III funds for two English for Speakers of Other Languages (ESOL) teachers and one teacher aide who worked with all of the school's limited English proficient students. School officials also said that the county used Title III funds for a resource teacher who visited their school on a weekly basis to instruct teachers in ESOL strategies. The resource teacher also provided individualized pull-out instruction. This school also purchased computer-based learning software with Title III funds. NCLBA requires school districts to use a portion of Title III funds for language instruction programs for students with limited English proficiency and to provide professional development to teachers or other personnel. However, Education found issues related to these required uses during Title III monitoring visits to seven states. For example, Education found that one of two districts visited in one state used all its Title III funds for teacher salaries and benefits. Education found that this issue arose from a lack of familiarity with federal requirements and required the state to develop a corrective action plan. However, in the remaining 14 states monitored to date, Education did not find any issues related to the required uses.
Officials in 5 study states and in 8 school districts in the 6 states we visited reported that difficulty hiring qualified teachers or other personnel who meet NCLBA requirements presented challenges to implementing effective programs. NCLBA requires public school teachers to be highly qualified in every core academic subject they teach and increased the level of funding to help states and districts implement teacher qualification requirements, including activities to help states and districts recruit and retain highly qualified teachers. However, officials in one district we visited noted that teacher transience in high-needs schools presents challenges because schools must continually provide training to new staff on strategies for teaching students with limited English proficiency. In another district, officials noted a particular challenge in locating qualified substitute teachers to work with these students when necessary. Prior GAO work also found that states and school districts were experiencing challenges implementing NCLBA's teacher qualification requirements, including difficulties with teacher recruitment and retention. While we found that many of the hindrances reported by state and district officials could not be addressed by Education, Education had identified several steps it would take in its 2002-07 strategic plan related to these issues, including supporting professional development and encouraging innovative teacher compensation and accountability systems. Education's oversight included Title III monitoring visits; twice-yearly discussions with states on information they provide to Education, known as desk audits; and continuous informal monitoring in response to questions from states. As part of its oversight effort, Education implemented a monitoring program in 2005 to address each state's administration of the Title III program. This monitoring effort was designed to provide regular, systematic reviews and evaluations of how states meet Title III requirements to ensure that they implement and administer programs in accordance with the law. Monitoring is conducted on a 3-year cycle, and as of September 2006, Education officials had monitored and reported on 20 states and the District of Columbia. Education officials reported that they plan to visit 17 more states in fiscal year 2007. As part of the monitoring visits, Education reviews states' and districts' implementation of NCLBA requirements, such as data to be included in required reports and required district uses of Title III funds. Education has found issues relating to a number of these requirements. For example, for 4 of the 20 states monitored and the District of Columbia, Education had findings related to the data that these states submitted in their Consolidated State Performance Reports. According to Education, 20 of the 21 monitoring reports had findings, and most states have developed corrective action plans to address them. Education officials stated that they are reviewing these plans and working with states to determine which findings have been appropriately addressed and to develop a time frame for resolving remaining findings. In addition, Education's program officers perform semiannual reviews of states' responses to the sections of the Consolidated State Performance Report related to Title III and of the Biennial Evaluation Reports states submit to Education, along with phone calls to state officials to address identified issues.
For example, in October 2005 the program officers asked states how quickly they distributed funding to school districts because this was an area identified as a concern. Finally, Education officials explained that they provide informal, ongoing monitoring by addressing issues brought up by state officials throughout the year.

Education offered support in a variety of ways to help states meet Title III requirements. Education held on-site and phone meetings to provide technical assistance to states, such as on how to address the needs of those students with both limited English proficiency and disabilities. Education also held annual conferences focused on students with limited English proficiency that included sessions that provided information to state Title III directors and others on a variety of topics, such as NCLBA policies related to students with limited English proficiency and English language proficiency assessment issues. Education also held semiannual meetings and training sessions with state Title III directors, a nationwide Web cast on English language achievement objectives, and videoconference training sessions for some state officials on how to meet Title III requirements. The department issued guidance on issues related to students with limited English proficiency on its Web site and also distributed information through an electronic bulletin board and a weekly electronic newsletter focused on students with limited English proficiency and through the National Clearinghouse for English Language Acquisition and Language Instruction Educational Programs. In addition, Education plans to provide assistance to individual states in developing appropriate goals for student progress in learning English through at least 3 of the 16 regional comprehensive centers the agency has contracted with to build state capacity to help school districts that are not meeting their adequate yearly progress goals.

Officials from 5 of the 12 study states reported general satisfaction with the guidance, training, and technical assistance Education provided. However, one area that officials from seven of the study states identified as a challenge was addressing the needs of those students with both limited English proficiency and disabilities. Although Education issued guidance on including students with both limited English proficiency and disabilities in English language assessments and English proficiency goals, two states noted that the guidance does not specifically address how to serve those students with the most significant cognitive disabilities who also have limited English proficiency. Education estimates that nationwide about 1 percent of students have the most significant cognitive disabilities. An Education official stated that there is limited research on how to address this group of students, but Education is working with states and experts to explore the appropriate identification, assessment, placement, and interventions for such students. In addition, officials in 5 of the 12 study states thought more guidance was needed to develop English language proficiency assessments that meet NCLBA's requirements. In our July 2006 report we found that Education has issued little written guidance on how states are expected to assess and track the English proficiency of these students, leaving some state officials unclear about Education's expectations.
We recommended that Education identify and provide the technical support states need to ensure the validity of academic assessments and publish additional guidance on requirements for assessing English language proficiency. Education agreed with our recommendations and has begun to identify the additional technical assistance needs of states and ways to provide additional guidance in these areas.

NCLBA was enacted to ensure that all students have the opportunity to succeed in school, including meeting state academic content standards and language proficiency standards. However, if Education does not use the most accurate data as the basis of Title III funding distribution, funds may be misallocated across states. NCLBA specifies that Education is to distribute funds based on the more accurate data source—Census' ACS data or the number of students with limited English proficiency assessed annually. Because Education has not provided states with clear instructions on the portions of the Consolidated State Performance Report relevant to the collection of state data on the number of students with limited English proficiency assessed annually for English proficiency, it has been difficult for states to provide the data Education needs in order to consider the use of state data as the basis of distributing Title III funds. Until Education provides clear instructions, states may continue to provide inconsistent data. Once Education has provided such instructions and continues to work with states to improve data quality, the state data will be more reliable and complete. In addition, as Education completes its review of state-supplied school-year 2003-04 and 2004-05 data, it will be in a better position to consider the relative accuracy of the ACS and state data. However, without a methodology in place to assess the relative accuracy of these data sources, it is unclear how Education will determine which data to use as the basis of Title III funding distribution. This is of particular concern, since without such a methodology, it will remain unknown how well either of the two data sources captures the population of children with limited English proficiency. In addition, ACS data have shown volatility—large increases and decreases—in the numbers of students with limited English proficiency from 2003 to 2004. While some volatility may be related to population fluctuations, some is related to the ACS sample size. Consequently, states may experience excessive fluctuations in their funding amounts from year to year. Some states may continue to see large fluctuations in Title III funding when data based on the full ACS sample are introduced, when data based on new annual population estimates are incorporated, and when data based on the 2010 Decennial Census become available. As a result, states affected by this volatility may be unable to plan effectively.

To address the need for reliable and complete state data on the number of students with limited English proficiency assessed annually, we recommend that the Secretary of Education clarify the instructions on the portions of the Consolidated State Performance Report relevant to the collection of data on the number of students with limited English proficiency assessed annually for English proficiency.
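The smoothing effect of the multiyear averages recommended below is straightforward to illustrate. The following minimal sketch is in Python; the student counts, the 3-year window, and the purely proportional allocation are all illustrative assumptions rather than the statutory Title III formula.

```python
# Minimal sketch: how a multiyear moving average damps year-to-year
# volatility in ACS-style counts, and how counts drive a proportional
# allocation. All figures below are hypothetical.

def moving_average(counts, window=3):
    """Average each year's count with up to window - 1 prior years."""
    smoothed = []
    for i in range(len(counts)):
        span = counts[max(0, i - window + 1): i + 1]
        smoothed.append(sum(span) / len(span))
    return smoothed

def allocate(total_funds, counts_by_state):
    """Simplified formula-style allocation: each state's share is
    proportional to its count of students with limited English
    proficiency (the actual Title III formula has more provisions)."""
    total = sum(counts_by_state.values())
    return {s: total_funds * c / total for s, c in counts_by_state.items()}

if __name__ == "__main__":
    raw = [40_000, 70_000, 38_000, 65_000, 42_000]   # large year-to-year swings
    print([round(x) for x in moving_average(raw)])    # swings narrow once averaging applies
    print(allocate(650_000_000, {"State A": 48_000, "State B": 112_000}))
```

Running the sketch shows the averaged series varying over a much narrower range than the raw counts, which is the property that would damp year-to-year swings in any allocation driven by those counts.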
To strengthen the basis for Education's distribution of Title III funds, we recommend that the Secretary of Education develop and implement a transparent methodology for determining the relative accuracy of the two allowable sources of data, ACS or state data on the number of students with limited English proficiency assessed annually, for Title III allocations to states. To address volatility in annual ACS data, we recommend that, as part of NCLBA reauthorization, the Secretary seek authority to use statistical methodologies, such as multiyear averages.

We provided a draft of this report to Education for review and comment. In a letter, Education agreed with our recommendation regarding the need for reliable and complete data on the number of students with limited English proficiency assessed annually for English proficiency. The department stated that it has addressed this recommendation by revising the Consolidated State Performance Report (CSPR) data collection form for the 2005-06 school year and by proposing additional changes to the 2007 CSPR (Part I) form. However, as stated in our report, Education did not provide documentation of the proposed changes. Further, it is not clear that the changes the department describes would result in complete and reliable data on the number of students with limited English proficiency assessed annually for English proficiency. We still recommend that Education review and clarify instructions to allow for an unduplicated count of students that would meet NCLBA requirements for use as a potential data source for funding. Regarding our second recommendation, Education agreed that it should develop a methodology to compare the relative accuracy of the two data sources, but stated that it should wait until the quality of state data improves. However, we encourage Education to take steps now to develop a methodology, since the department has been taking multiple steps to improve the quality and completeness of state data. In this way, Education will be positioned to determine which data source is the more accurate when state data have sufficiently improved. Finally, Education seemed to agree with our recommendation concerning the volatility of ACS data, but commented that the department did not have the legal authority to use multiyear averages of ACS data as the basis for distributing Title III funds. The department suggested that Congress might want to address this issue in the NCLBA reauthorization. As a result, we changed the recommendation to state that, as part of NCLBA reauthorization, Education should seek authority to use statistical methodologies, such as multiyear averages, to address the volatility of ACS data. Education officials also provided technical comments that we incorporated into the report where appropriate. Education's written comments are reproduced in appendix III.

We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors are listed in appendix IV.

The following information was gathered from the National Clearinghouse of English Language Acquisition's (NCELA) web site.
NCELA identified various sources for the program descriptions. Harriet Ganson (Assistant Director), Julianne Hartman Cutts (Analyst-in-Charge), and Nagla'a El-Hodiri (Senior Economist) managed all aspects of this assignment. R. Jerry Aiken, Melinda L. Cordero, and Elisabeth Helmer made significant contributions to this report. Tovah Rom contributed to writing this report. Jean McSween, Robert Dinkelmeyer, and Robert Parker provided key technical support. James Rebbe provided legal support.

Title III of the No Child Left Behind Act of 2001 (NCLBA) designates federal funds to support the education of students with limited English proficiency and provides for formula-based grants to states. This report describes the data the Education Department used to distribute Title III funds and the implications of data measurement issues for the two allowable sources of data--American Community Survey (ACS) and state assessment data--for allocating funds across states. In addition, the report describes changes in federal funding to support these students under NCLBA and how states and school districts used these funds as well as Education's Title III oversight and support to states. To address these objectives, GAO reviewed documentation on ACS and state data, interviewed federal and state officials, and collected data from 12 states, 11 districts, and 6 schools.

Education used ACS data to distribute Title III funds, but measurement issues with both ACS and state data could result in funding differences. Education used ACS data primarily because state data were incomplete. In September, Education officials told us they were developing plans to clarify instructions for state data submissions to address identified inconsistencies. While Education officials expected their efforts to improve the quality of the data, they told us that they had not established criteria or a methodology to determine the relative accuracy of the two data sources. State data represent the number of students with limited English proficiency assessed annually for English proficiency, and ACS data are based in part on responses to subjective English ability questions from a sample of the population. ACS data showed large increases and decreases in numbers of these students from 2003 to 2004 in part due to sample size. ACS data and state counts of students with limited English proficiency for the 12 study states differed. GAO's simulation of the distribution of Title III funds for fiscal years 2005 and 2006 based on these numbers showed that there would be differences in how much funding states would receive.

In fiscal year 2006, Congress authorized over $650 million in Title III funding for students with limited English proficiency--an increase of over $200 million since fiscal year 2001 under NCLBA. This increase in funding as well as the change in how funds are distributed--from a primarily discretionary grant program to a formula grant program--contributed to more districts receiving federal funding to support students with limited English proficiency since the enactment of NCLBA. States and school districts used Title III funds to support programs and activities including language instruction and professional development. Education provided oversight and support to states. Officials from 5 of the 12 study states reported overall satisfaction with the support from Education.
However, some officials indicated that they needed more guidance in certain areas, such as developing English language proficiency assessments that meet NCLBA's requirements. Education is taking steps to address issues states identified.
SPAWAR is one of the Navy's three major acquisition commands. SPAWAR provides information technology systems to naval forces on land, at sea, and in space and integrates all information products, including those developed by other systems commands and agencies outside the Navy. The SPAWAR workforce comprises over 7,300 personnel, and the fiscal year 2000 budget was $3.7 billion—about $2.7 billion to develop and procure systems and about $1.0 billion to operate and maintain them. Specifically, SPAWAR develops, acquires, and manages battle management systems (for example, software applications) and undersea, terrestrial, and space sensors (for example, underwater sensors, navigation and weather systems, and satellites); information transfer systems (for example, communications systems, radios, antennas, and switches); and information management systems (for example, local area networks and routers).

As of October 2000, SPAWAR was managing 21 programs that involved low-rate initial production. These programs had cumulative low-rate initial procurements ranging from 5 to 100 percent of the total inventory objective—7 systems were above 50 percent. The estimated total production costs for the eight systems we analyzed in detail ranged from $31 million to $525 million.

As weapon system programs move through the phases of the acquisition process, they are subject to review at major decision points called milestones. Major defense acquisition programs, known as acquisition category I, and major systems programs, known as acquisition category II, as well as non-major systems programs, known as acquisition categories III and IV, follow the same general process. DOD and Navy acquisition policies state that program risks shall be assessed at each milestone decision point before approval is granted for the next phase. The policies add that test and evaluation shall be used to determine system maturity and identify areas of technical risk. Major milestones in DOD's systems acquisition process include the following:

Milestone 0, when the determination is made about whether an identified mission need warrants a study of alternative concepts to satisfy the need. If warranted, the program is approved to begin the concept exploration and definition phase.

Milestone I, when the determination is made about whether a new acquisition program is warranted. If so, initial cost, schedule, and performance goals are established for the program, and authorization is given to start the demonstration and validation phase.

Milestone II, when the determination is made about whether continuation of development, testing, and preparation for production is warranted. If so, authorization is given to start the engineering and manufacturing development phase. Approval of this phase will often involve a commitment to low-rate initial production, which is defined as the minimum quantity needed to (1) provide production-representative articles for operational testing and evaluation, (2) establish an initial production base, and (3) permit orderly ramp-up to full-rate production upon completion of operational testing and evaluation. Operational test and evaluation is a key internal control to ensure that decisionmakers have objective information available on a weapon system's performance to minimize risks of procuring costly and ineffective systems. Operational testing and evaluation uses field tests under realistic conditions to determine the operational effectiveness and suitability of a system for use in combat.
DOD acquisition regulations generally provide that programs successfully complete these tests before starting full-rate production.

Milestone III, when operational test and evaluation has been completed and a determination is made about whether to proceed to full-rate production and field the system.

Over the years, we have found instances in which DOD used the low-rate initial production decision phase to purchase significant numbers of major and non-major systems without successfully completing operational testing and evaluation. Often, these systems later experienced significant effectiveness and/or suitability problems. In 1994, we reported that DOD had made large buys of weapon systems during the low-rate initial production phase and prior to completion of operational test and evaluation, which resulted in operational problems. For example, we reported that the Navy procured 100 percent of a system's inventory objective during low-rate initial production and later found that the system lacked critical hardware and software capabilities. In another case, the Navy procured 100 percent of a system's inventory objective during low-rate initial production and later terminated the program when it failed operational testing and evaluation. A recent Defense Science Board report found that weapon systems are still being fielded without adequate testing to assure their effectiveness and utility to operating units.

We conducted this review because buying systems before completing operational testing has inherent risks, and SPAWAR's practice of buying high percentages of a system's total inventory objective during low-rate initial production raised these risk concerns.

SPAWAR officials cited three primary reasons for high-percentage buys during low-rate initial production. First, to meet Navy initiatives, SPAWAR must provide the fleet with large quantities of information technology systems as quickly as possible. Second, many information systems consist of evolving technology that quickly becomes obsolete. Third, additional low-rate initial production buys are approved due to delays in conducting the operational test and evaluation.

The main reason that SPAWAR officials cited for high-percentage buys during low-rate initial production is the need to provide as many information technology systems to the fleet as quickly as possible to meet several Navy initiatives. The Navy's current vision for the 21st century, Forward From The Sea, involves innovations in technology to rapidly transform the Navy into a 21st century force. SPAWAR provides or contributes to many of the operational capabilities that support the vision. Officials in the Chief of Naval Operations' Fleet and Allied Requirements Division stated that the fleets put pressure on SPAWAR to provide information systems faster. These officials, as well as SPAWAR officials, contend that if SPAWAR does not provide systems to the fleet quickly, then the fleet will bypass the Chief of Naval Operations and SPAWAR and procure some systems with fleet funding. If the fleet buys the systems, SPAWAR cannot control the configuration of these systems, which can eventually result in interoperability problems with systems that SPAWAR procures. An official in the Office of the Deputy Assistant Secretary of the Navy responsible for communications and space systems also agreed that there is pressure on SPAWAR to meet fleet demands.
He said that the Under Secretary of Defense for Acquisition, Technology, and Logistics stated about 3 years ago that the pace of developing systems was too slow and called for shortening the development cycle by incorporating evolutionary development and acquisition. Through evolutionary development and acquisition, systems are continuously improved as new technology becomes available.

According to the SPAWAR commander, another reason for quickly providing systems to the fleet is that information systems consist of rapidly advancing technology, which can become obsolete within 18 months. He said that procuring and fielding a large percentage of a system's inventory objective while still in low-rate initial production quickly provides the fleet with better information systems and provides important operational data so that any system performance problems can be quickly fixed. Officials in the Navy Fleet and Allied Requirements Division said that the 18-month obsolescence cycle is the main reason that the fleet would rather have a system now with 75 to 80 percent of its full capability as opposed to waiting until the system has all of its capability. However, officials in the Chief of Naval Operations' Office of Test, Evaluation, and Technology Requirements disputed that rapidly advancing technology is a legitimate reason for making high-percentage buys during low-rate initial production and before completing operational testing. These officials concluded that making high-percentage buys during low-rate initial production circumvents the operational testing and evaluation process and increases the risk that systems will not work as intended when fielded.

High-percentage buys during low-rate initial production also were the result of delays in conducting the operational test and evaluation. According to the SPAWAR commander, the pass or fail nature of operational testing contributed to delays. Rather than fail a test and risk program reduction or termination, operational tests were delayed until there was a good probability that the system would pass the tests. As tests were delayed, additional low-rate initial production buys were approved. Six of the eight systems we analyzed had additions to the original low-rate initial production quantities.

The SPAWAR commander said that making high-percentage buys of a system while still in low-rate initial production is low risk when proven commercial technology items are being procured and they are relatively low-cost items—when compared to the cost of ships and aircraft. Further, if problems arise after the low-rate initial production systems are fielded, the cost to fix them is not significant. However, SPAWAR officials agreed that none of the eight systems we analyzed are entirely commercial and that all of them have military requirements that must be tested in a realistic operational environment. The DOD Director of Operational Test and Evaluation agreed that low-rate initial production items can be used in an operational environment to learn about problems and fix them, but he said that procuring large quantities of a system before operational test and evaluation is a risky strategy.

We analyzed eight SPAWAR systems and found that seven of them had a combination of problems that adversely affected fleet operations—all seven had performance problems, all had interoperability problems, and six had suitability problems. A performance problem is the inability of a system to effectively and efficiently perform its assigned mission.
An interoperability problem is the inability of systems to work together effectively to provide services to and accept services from other systems. A suitability problem involves a system not satisfactorily meeting one or more requirements, including reliability, maintainability, logistics support, or training. These problems may delay progress in achieving the Navy's vision for using information technology to attain and maintain network-centric warfighting knowledge and decision-making superiority. The last of the eight systems had not been installed when we analyzed the eight systems. Table 1 illustrates the types of problems identified for seven of the eight systems.

Two of the systems—the Digital Wideband Transmission System and the Command and Control Processor—illustrate how interoperability, performance, and suitability problems impact fleet operations.

The Digital Wideband Transmission System is a radio transmission system supporting voice, video, and data communications. It is required on all aircraft carriers and amphibious ships and at training facilities. SPAWAR approved 100 percent of the total inventory objective to be bought under low-rate initial production; however, the system does not work due to a number of problems with the antenna, power amplifier, and radio frequency control. The system also created, and was affected by, electromagnetic interference, which caused severe interoperability problems. For example, the interference caused a complete loss of Global Positioning System navigation capability. Furthermore, in order for the digital system to work, an air-search radar had to be shut down. Consequently, the Navy turned off the system the first time it was used during an amphibious ready group deployment in the Pacific, replaced it with a legacy system, and placed the digital system in an inoperative status for 9 months. By November 2000, SPAWAR had installed 78 percent of the systems in the fleet, but it will cost $4.3 million to fix the problems for this $40-million program—$1.2 million for engineering work and $3.1 million for retrofit costs. SPAWAR is currently developing and testing improvements to system performance.

The Command and Control Processor acquires information from other communications systems, stores the information, and reformats it for use by the Aegis combat system on aircraft carriers and naval combat surface ships. Although SPAWAR originally approved a small purchase of the processor during low-rate initial production, three subsequent low-rate initial production increases brought the total to 41 percent of the inventory objective. From 1995 to 2000, there were 263 problems noted with the system, mostly involving software, during battle group system integration tests. The processor has severe suitability problems because it breaks down unpredictably for up to 12 hours at a time (due to software problems) and freezes up, which eliminates the system's capability to provide current situational awareness. Also, on an aircraft carrier that we visited, operators said that the processor would not integrate with other systems, even though it is designed to do so. To prevent the breakdowns, SPAWAR developed a workaround procedure, which involves resetting the system every 2 hours instead of every 24 hours. In addition, the breakdowns and workarounds put more pressure on operators and maintainers during combat or hostile situations.
Some of SPAWAR’s 13 other systems with a high percentage of low-rate initial production buys also had operational effectiveness and suitability problems. The Pacific Fleet experienced 46 problems with the Navy Extremely High Frequency Satellite Communications System from 1995 to 2000. The problems involved hardware and software, interoperability, and training. For example, in February 2000, system performance was degraded due to a part failure. In April 2000, SPAWAR reported that the problem had been solved. However, about a month later, the system had problems again, resulting in no response from the satellite and time- tracking errors being returned. The system was again placed in a degraded status and, as of August 2000, the problem was still unresolved. The Pacific Fleet also experienced problems with the High Frequency Radio Group, mainly due to system performance problems and training shortfalls. For example, on a ship visit in October 2000, ship communications personnel said that the system had broken down several times for a duration of 1 week to 1 month at a time. They said that, when it breaks down, the operators must tune in the radio frequency manually, but ship operators have not been trained to do this because they were used to relying on the system to tune into a particular frequency automatically. The Pacific Fleet also identified several other problems with the radio group, including a ship that experienced 15 failures of a system switch within 11 months. The SPAWAR commander said that the Command did not have adequate controls and oversight, at the time of most of these low-rate initial production decisions, to either mitigate or manage risks associated with procuring and fielding large percentages of systems during low-rate initial production. He said the need for more discipline in the acquisition process contributed to the interoperability, performance, and suitability deficiencies we identified. He further noted, however, that some problems are part of the cost of doing business with new systems and are worth the risk to provide systems to the fleet quickly. According to the SPAWAR commander, the most meaningful measure of success is whether the systems are meeting their operational requirements, and he said that SPAWAR’s systems are meeting theirs based on a performance parameter called operational availability. However, according to the DOD Director of Operational Test and Evaluation and the Chief of Naval Operations’ Office of Test, Evaluation, and Technology Requirements, operational availability is only one of a number of key performance measures, and an overall assessment of system performance should not be based solely on that parameter. The Navy and SPAWAR have taken or plan to take a number steps to mitigate the risks of large low-rate initial production procurements. To add more discipline and rigor to the low-rate initial production decision process, the Command now requires program managers to use a standardized checklist and report template as part of reviewing and approving low-rate initial production purchase requests. SPAWAR has also established an Acquisition Reform Office to serve as a focal point and command-wide disseminator of lessons learned and process improvements. The reform office is currently developing a “Rules of the Road” Acquisition Guidebook for SPAWAR program managers. 
Further, in discussions of our findings and observations during this review, the commander called for the development of risk management guidance for information systems and agreed to suggested improvements in documenting and justifying low-rate initial production decisions. The SPAWAR commander said better risk management guidance would improve low-rate initial production decisions on information systems, especially when milestone decision authorities and program managers rotate in and out over time. The commander stated that he and the program managers primarily use their acquisition knowledge, wisdom, and experience when making risk management decisions.

In discussions with Navy and SPAWAR officials, we noted that the Acquisition Decision Memorandums, used to document and support milestone decisions, did not always include the low-rate initial production quantity being approved, the cumulative number of low-rate initial production items that had been approved, or the cumulative low-rate initial production percentage. We also noted that, at SPAWAR, the justification for approving low-rate initial production purchases was not always documented. The Navy and SPAWAR officials agreed to document in the Acquisition Decision Memorandum the justification for all low-rate initial production approvals, as well as the current inventory objective and the cumulative number of units bought under low-rate initial production. SPAWAR also agreed to include in its quarterly program status report when cumulative low-rate initial production approvals reach 50 percent or more of the current inventory objective. Recognizing that other Navy activities can benefit from the low-rate initial production decision checklist and report template, we recommended that this guidance be distributed throughout the Navy. The Navy subsequently distributed the guidance DOD-wide. Finally, the Navy and SPAWAR agreed to supplement acquisition training for program managers and staff by incorporating risk management tools into existing courses.

In seeking to provide new information systems to the fleet as quickly as possible, SPAWAR officials procured and fielded relatively large quantities of systems during low-rate initial production and before completing operational testing. Our subsequent review of seven of these systems found that six had experienced operational problems that negatively impacted the fleet. The SPAWAR commander noted that controls and oversight, at the time of most of these decisions, were not adequate to either mitigate or manage risks associated with procuring and fielding large percentages of systems during low-rate initial production. He said the need for more discipline in the acquisition process contributed to the deficiencies we identified. Since that time, Navy and SPAWAR officials have taken or plan to take a number of steps to mitigate the risks of large low-rate initial production procurements. In addition, they have agreed to implement, and in one case have already implemented, process improvements we suggested during the course of this review. Given these actions, we are not making any recommendations in this report.

In commenting on our draft report, the Navy agreed and stated that the actions taken and planned by it and the SPAWAR Command are expected to improve the Navy's low-rate initial production decision process. The Navy's comments appear in appendix I.
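The quarterly status-report flag that SPAWAR agreed to adopt amounts to a simple cumulative-percentage threshold check. The sketch below, in Python, illustrates the arithmetic; the program names and quantities are hypothetical.

```python
# Minimal sketch of the quarterly status-report flag SPAWAR agreed to add:
# flag any program whose cumulative low-rate initial production (LRIP)
# approvals reach 50 percent or more of its current inventory objective.
# Program names and quantities below are hypothetical.

def lrip_percentage(cumulative_lrip_units: int, inventory_objective: int) -> float:
    """Cumulative LRIP approvals as a percentage of the inventory objective."""
    return 100.0 * cumulative_lrip_units / inventory_objective

programs = [
    # (program, cumulative LRIP units approved, current inventory objective)
    ("Program A", 45, 900),    # 5 percent
    ("Program B", 410, 1000),  # 41 percent
    ("Program C", 780, 1000),  # 78 percent
]

for name, units, objective in programs:
    pct = lrip_percentage(units, objective)
    flag = "FLAG: 50% or more of inventory objective" if pct >= 50.0 else ""
    print(f"{name}: {pct:.0f}% {flag}")
```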
To acquire information about the number and status of SPAWAR low-rate initial production programs, we interviewed officials and obtained documentation from the SPAWAR Acquisition Reform Office; selected SPAWAR program offices; the Office of the Assistant Secretary of the Navy (Research, Development, and Acquisition); the Office of the Deputy Assistant Secretary of the Navy (Command, Control, Communications, Computers, and Information/Electronic Warfare/Space); the Office of the Deputy Director Defense Procurement Strategies; the Office of the Under Secretary of Defense (Acquisition and Technology); and the Office of the Chief of Naval Operations.

To obtain detailed information about the impact of the high percentage of SPAWAR low-rate initial production procurements, we selected and reviewed 14 programs, representing 70 percent of all SPAWAR low-rate initial production programs. Of these 14 programs, we examined 8 programs in detail, looking at operational, logistic, interoperability, and training issues to determine how well these programs were performing.

To obtain information about the operational testing, evaluation, interoperability, and fielding of low-rate initial production systems, we interviewed officials and obtained documentation from the Office of the Director of Navy Test, Evaluation, and Technology Requirements; the Office of the Navy Commander Operational Testing and Evaluation Force; the Office of the Program Manager for Battle Group Systems Integration Testing; and the Office of the Director of Operational Test and Evaluation, Office of the Assistant Secretary of Defense.

To obtain information about the operation, performance, interoperability, maintenance, repair, retrofit, suitability, and training regarding low-rate initial production systems in the fleet, we visited the Naval Surface Force Command, Pacific; the Naval Air Command, Pacific; and the Naval Submarine Command, Pacific, Squadron Eleven. We also visited specific ships in each command, including the U.S.S. Benfold (DDG-65), the U.S.S. Pearl Harbor (LSD-52), the U.S.S. John C. Stennis (CVN-74), and the U.S.S. Salt Lake City (SSN-716). In addition, we held discussions with SPAWAR program officials and officials from SPAWAR's In-Service Engineering Activity, Fleet Support Engineering Team, and Installations and Logistics Directorate.

To obtain information about the laws, regulations, procedures, and guidance governing the procurement of information technology systems in low-rate initial production, we interviewed officials and obtained documentation from the Office of the Commander, Space and Naval Warfare Systems Command; the Office of the Chairman Deskbook Working Group (DOD 5000 Rewrite); the Assistant Secretary of the Navy (Research, Development and Acquisition); and the Office of the Assistant Deputy Under Secretary of Defense (Systems Acquisition). We also reviewed selected laws and regulations governing low-rate initial production, including title 10 of the U.S. Code, the DOD 5000 series acquisition regulations (the 1996 and revised 2000 versions), and Secretary of the Navy Instruction 5000-2B governing acquisition and procurement of low-rate initial production and commercial-off-the-shelf technology.

We conducted our review from June 2000 through May 2001 in accordance with generally accepted government auditing standards.

We are sending copies of this report to interested congressional committees; the Honorable Donald H. Rumsfeld, Secretary of Defense; and the Honorable Mitchell E. Daniels Jr., Director of the Office of Management and Budget.
Copies will be made available to others upon request. Please contact me at (202) 512-4821 if you have any questions regarding this report. Key contributors to this report were Cristina Chaplain, Joe Dewechter, Dorian Dunbar, Stephanie May, Gary Middleton, Sarah Prehoda, Richard Price, and William Woods.

During its review of the Navy's Space and Naval Warfare (SPAWAR) Systems Command's fiscal year 2001 budget request, GAO found that many information technology systems were being procured and fielded in relatively large quantities--sometimes exceeding 50 percent of the total--during low-rate initial production and before completion of operational testing. The primary purpose of low-rate initial production is to produce enough units for operational testing and evaluation and to establish production capabilities to prepare for full-rate production. Commercial and Department of Defense (DOD) best practices have shown that completing a system's testing before producing significant quantities substantially lowers the risk of costly fixes and retrofits. For major weapons systems, statutory provisions limit the quantities of systems produced during low-rate initial production to the minimum quantity necessary. These statutory provisions also require justification for quantities exceeding 10 percent of total production. Although these provisions do not apply to non-major systems, DOD and Navy acquisition regulations encourage these programs to make use of the low-rate initial production concept.

This report reviews (1) information systems being procured and fielded for SPAWAR in large numbers before operational testing, (2) what effects this practice was having on SPAWAR and the fleet, and (3) what the Navy is doing to mitigate the risks associated with this practice. GAO found that the main reason for the high percentage of low-rate initial production quantities is to more quickly respond to fleet demands for information systems improvements. Many information technology systems purchased and fielded during low-rate initial production and prior to completing operational testing experienced problems that negatively impacted fleet operations and capabilities. SPAWAR has taken several steps to mitigate the risks of high percentage low-rate initial production procurements, such as requiring program managers to use a standardized checklist and establishing an Acquisition Reform Office to serve as a focal point and command-wide disseminator of lessons learned and process improvements.
In November 2002, Congress passed and the President signed the Improper Payments Information Act of 2002 (IPIA), which was later amended by IPERA and the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA). IPIA, as amended, requires federal executive branch agencies to (1) review all programs and activities and identify those that may be susceptible to significant improper payments, (2) estimate the annual amount of improper payments for susceptible programs and activities, (3) implement actions to reduce improper payments and set reduction targets, and (4) report on the results of addressing the foregoing requirements.

Section 3 of IPERA also calls for executive agencies' IGs to annually determine and report on whether their respective agencies complied with the following six criteria:

publish a report in the form and content required by OMB—typically an AFR or a PAR—for the most recent fiscal year, and post that report on the agency website;

conduct a program-specific risk assessment for each program or activity that conforms with IPIA, as amended;

publish improper payment estimates for all programs and activities deemed susceptible to significant improper payments under the agency's risk assessment;

publish corrective action plans for those programs and activities assessed to be at risk for significant improper payments;

publish and meet annual reduction targets for all programs and activities assessed to be at risk for significant improper payments; and

report a gross improper payment rate of less than 10 percent for each program and activity for which an improper payment estimate was published.

OMB plays a key role in overseeing the implementation of improper payments legislation. OMB is directed by statute to provide guidance to federal agencies on estimating, reporting, reducing, and recovering improper payments, and has also issued guidance to agencies on improving improper payment estimates as required by IPERIA. In October 2014, OMB issued new guidance on improper payments that changed certain requirements for fiscal year 2014 reporting, such as extending the reporting period for IG IPERA reports and eliminating an additional criterion for IGs to assess whether agencies have reported on efforts to recapture improper payments. Per this guidance, an agency's IG is required to submit a report on its assessment of the agency's compliance with the criteria listed in IPERA, as applicable, to the head of the agency, the Senate Committee on Homeland Security and Governmental Affairs, the House Committee on Oversight and Government Reform, the Comptroller General, and the OMB Controller within 180 days of the publication of the agency's annual PAR or AFR.

IPERA states that if an IG reports that an agency is not in compliance with any of the IPERA criteria for 1 fiscal year, the agency head must submit a plan to appropriate congressional committees and OMB describing the actions that the agency will take to come into compliance. If an agency is found noncompliant with respect to the same program for 2 consecutive years, IPERA directs OMB to review the program and determine if additional funding would help bring the program into compliance and, if so, directs the agency to use any available reprogramming or transfer authority, or request further reprogramming or transfer authority from Congress, to aid in the program's remediation efforts.
For programs determined to be noncompliant for more than 3 consecutive years, the agency is required by IPERA to submit to Congress within 30 days of the IG's report either (1) a reauthorization proposal for the program or (2) proposed statutory changes necessary to bring the program or activity into compliance. In addition, OMB's guidance stipulates that OMB may require agencies that are noncompliant to complete additional requirements. For example, OMB could require that the agency re-evaluate or re-prioritize its corrective actions, intensify and expand existing corrective action plans, or implement or pilot new tools and methods to prevent improper payments.

OMB is required to annually identify a list of high-priority federal programs in need of greater oversight and review. In general, OMB has implemented this requirement by designating high-priority programs based on a threshold of $750 million in estimated improper payments for a given year. OMB guidance directs IGs at executive branch agencies with high-priority programs, as part of their annual compliance reviews, to (1) evaluate the agency's assessment of risk level and quality of the improper payment methodology for estimation; (2) determine the extent of oversight needed; and (3) provide the agency with recommendations, as necessary, for improving its methodology, internal controls, or level of program access and participation.

In addition to the laws and guidance noted above, the Disaster Relief Appropriations Act, 2013 requires that all programs receiving funds appropriated by that act be deemed susceptible to significant improper payments, which consequently requires the agencies responsible for these programs to estimate improper payments, implement corrective actions, and report on their results for these programs.

For fiscal year 2014, 15 of the 24 CFO Act agency IGs reported their agencies as noncompliant with one or more of the IPERA Section 3 criteria. This is the largest number of agencies deemed noncompliant under IPERA since IGs began reporting on their agencies' compliance with these criteria in fiscal year 2011. Further, this represents an increase of 4 agencies reported to be noncompliant compared to fiscal year 2013. A total of 38 programs accounting for a reported $100.6 billion in estimated improper payments were responsible for identified instances of noncompliance in fiscal year 2014. Figure 1 summarizes agencies' reported compliance by IPERA criterion for fiscal year 2014.

Based on our review of IG IPERA reports for fiscal year 2014, we found that the causes for noncompliance most commonly reported were related to the IPERA provisions regarding publishing and meeting planned improper payment reduction targets and reporting improper payment error rates below 10 percent. Specifically, 12 of the 24 CFO Act agencies did not meet one or both of these criteria. These results are similar to those reported by the IGs in fiscal year 2013. The reports also showed that most agencies complied with other IPERA criteria, such as publishing required information in a PAR or AFR and publishing corrective action plans for relevant programs. Table 1 displays each agency's compliance status for each IPERA criterion, as reported by its IG, for fiscal year 2014.

The most common reason for reported noncompliance under IPERA in fiscal year 2014, as in fiscal year 2013, was the failure to publish and meet annual improper payment reduction targets. Eleven agencies did not meet this criterion in fiscal year 2014.
Specifically, 10 agencies published reduction targets but failed to meet them, and 1 agency—the Department of Labor (DOL)—failed to comply because it did not publish an improper payment reduction target rate for its Unemployment Insurance benefit program. The DOL IG reported that the agency did not publish an annual reduction target for this program because DOL was awaiting additional guidance from, and consultation with, OMB regarding the estimation methodology.

Other IGs reported that factors affecting compliance with this criterion included challenges in maintaining adequate and complete supporting documentation and conducting adequate reviews. For example, IGs at the U.S. Department of Agriculture (USDA), the Department of Health and Human Services (HHS), and the General Services Administration (GSA) reported that administrative errors, documentation errors, or both prevented the agencies from meeting their reduction targets. USDA, HHS, the Department of Defense (DOD), and the Department of Transportation (DOT) IGs reported that factors such as mistakes in completing vouchers, inadequate reviews before payment, lack of grantee awareness of documentation requirements, and difficulty complying with new requirements in legislation contributed to these agencies not meeting their reduction targets. HHS reported that new legislative requirements for its Medicare Fee-for-Service and Medicaid programs contributed to the programs' noncompliance. The USDA IG also reported that a flawed sampling method for USDA's Federal Crop Insurance Corporation Program Fund resulted in its failure to meet its reduction target because one program component reported a 27 percent error rate. The Department of Veterans Affairs (VA) attributed the Veterans Health Administration's Civilian Health and Medical Program's failure to meet its reduction targets to anomalies in samples, because such sampling issues disproportionately skewed improper payment error rates upward.

IG reports also indicated that an agency's failure to meet reduction targets may not necessarily suggest that the agency was not adequately monitoring its programs' improper payments. For example, the VA, Small Business Administration (SBA), and USDA IGs reported increases in improper payment error rates because of factors such as improved sampling and emphasis on training, which enhanced their agencies' ability to detect improper payments. Six of the 11 agencies whose IGs reported noncompliance with the criterion to publish or meet reduction targets have no reported noncompliance with the other IPERA criteria.

The second most common reason for noncompliance under IPERA, as reported by IGs for fiscal year 2014, was agencies' inability to report improper payment error rates for all programs below 10 percent—a threshold that five CFO Act agencies did not meet for at least one of their programs or activities. Of the 119 programs at CFO Act agencies reporting $124.5 billion of estimated improper payments for fiscal year 2014, a total of 10 programs reported improper payment error rates of greater than 10 percent. Had these programs decreased their reported error rates to 10 percent, the government-wide improper payment estimate would have been $23.1 billion, or 18.6 percent, lower. This potential reduction is largely accounted for by 2 programs—the Department of the Treasury's (Treasury) Earned Income Tax Credit (EITC) and HHS's Medicare Fee-for-Service.
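The arithmetic behind this hypothetical reduction is simple: a program's implied outlays are its improper payment estimate divided by its error rate, so capping the rate at 10 percent removes the portion of the estimate above 10 percent of outlays. A minimal sketch in Python follows; the estimate and rate used are illustrative stand-ins at roughly EITC scale, not audited figures from this report.

```python
# Minimal sketch of the "cap at 10 percent" computation described above.
# For a program with reported error rate r and improper payment estimate e,
# implied outlays are e / r, so capping the rate at 10 percent would
# reduce the estimate by e - 0.10 * (e / r). The figures below are
# illustrative assumptions, not the audited program-level numbers.

def reduction_if_capped(estimate_billions: float, rate: float, cap: float = 0.10) -> float:
    """Reduction in the improper payment estimate if the error rate were capped."""
    if rate <= cap:
        return 0.0
    outlays = estimate_billions / rate
    return estimate_billions - cap * outlays

# Hypothetical program: $17.7 billion estimated improper payments at a
# 27.2 percent error rate (EITC-scale numbers, for illustration only).
print(round(reduction_if_capped(17.7, 0.272), 2))  # about 11.2 (billions)
```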
If EITC and Fee-for-Service reported error rates at the 10 percent threshold, the fiscal year 2014 government-wide improper payment estimate would be lowered by $11.18 billion and $9.74 billion, respectively.

Agencies' IGs reported that factors affecting compliance with this criterion included challenges in complying with documentation requirements and administrative and documentation errors. For example, USDA and HHS IGs reported that administrative and documentation errors in processing payments prevented these agencies from reporting improper payment error rates below the 10 percent threshold. The USDA IG also reported that additional payment testing criteria increased the error rate, and the HHS IG reported that the provider community experienced issues in complying with certain documentation requirements for some services. Table 2 summarizes agency programs that reported error rates above the 10 percent threshold for fiscal year 2014.

Although the 10 percent threshold was not achieved by 10 programs, some programs in this category still reported improvements in fiscal year 2014. For example, the SBA IG reported that even with the agency's Disaster Assistance Loans program's error rate exceeding the 10 percent threshold, the agency reduced the program's error rate from 18.4 percent in fiscal year 2013 to 12.0 percent in fiscal year 2014. Reported factors that helped improve SBA's error rate included multilayer payment reviews and improved staff training.

According to the fiscal year 2014 IG IPERA reports, overall agency compliance with IPERA criteria has reached its lowest point since IGs began annual reporting. Specifically, in fiscal year 2011—the first year of reporting—14 agencies did not comply with at least one of the IPERA criteria. While the number of agencies reported noncompliant actually improved in fiscal years 2012 and 2013, decreasing to 12 and 11 agencies, respectively, IGs reported an increase to 15 noncompliant agencies in fiscal year 2014, the greatest number since annual reporting began. Figure 2 summarizes the number of CFO Act agencies noncompliant under IPERA each year since fiscal year 2011, as reported by their IGs, and table 3 details individual agencies' compliance with IPERA criteria, as reported by their IGs, for fiscal years 2011 through 2014.

IG reports showed areas where agency compliance has remained a challenge throughout the years. For example, noncompliance with the criterion to publish and meet annual reduction targets has been at the same level over the 4 years since IPERA was implemented; 11 agencies did not comply with this criterion in each year. Some programs, such as USDA's School Breakfast and Special Supplemental Nutrition Program for Women, Infants, and Children programs, have not met their reduction targets in each of the last 4 years. Treasury's EITC program has also reported error rates among the highest in the government, ranging from 22.7 to 27.2 percent since fiscal year 2011. The number of agencies that have been noncompliant with the 10 percent criterion has been 5 or more for each fiscal year.

Another criterion with which agency noncompliance increased in fiscal year 2014 was the requirement to publish estimates for all programs deemed susceptible to significant improper payments. Specifically, 4 agencies' IGs determined that their agencies did not comply with this requirement—the Department of the Interior (DOI), VA, USDA, and HHS.
The first three IGs determined that their agencies did not have complete, accurate, or reliable improper payment estimates for at least one program, while HHS was unable to publish an estimate for its Temporary Assistance for Needy Families (TANF) program for the fourth year in a row, reportedly because of statutory limitations that prevent HHS from requiring states that administer the program to participate in improper payment measurement.

While the increase in agency noncompliance with some criteria contributed to the increasing number of noncompliant agencies reported, there were also improvements in compliance with other criteria. For example, in fiscal year 2011, five IGs reported that their agencies did not publish corrective action plans; in fiscal year 2014, only one IG—HHS—reported that its agency did not publish necessary corrective action plans. IGs also noted further improvements in the criterion for agencies to conduct risk assessments for all programs susceptible to significant improper payments. In fiscal year 2011, five IGs reported that their agencies did not fulfill the requirement to conduct program-specific risk assessments; in fiscal year 2014, only three IGs reported that their agencies were noncompliant. Figure 3 shows cumulative CFO Act agency compliance by IPERA criterion for fiscal years 2011 through 2014.

IGs at nine CFO Act agencies determined that 18 programs were noncompliant with IPERA criteria for at least 3 consecutive years as of fiscal year 2014. When a program is reported as noncompliant by its IG for 3 or more consecutive years, the responsible agency is required to submit proposals to Congress within 30 days to reauthorize the program or change the statute that established it. However, we found that only three of the nine agencies submitted the required information to Congress in response to 3 or more years of consecutive noncompliance in fiscal year 2014. As of fiscal year 2014, IGs reported a total of 38 programs determined to be noncompliant: 17 programs noncompliant for 1 year, 3 programs noncompliant for 2 consecutive years, and 18 programs noncompliant for 3 or more consecutive years. Table 4 lists the CFO Act agency programs determined by their agencies' IGs to be noncompliant with IPERA criteria for 3 or more consecutive years, as of fiscal year 2014.

As previously discussed, IPERA requires agencies that have been deemed noncompliant with IPERA criteria for consecutive years to take certain actions. If a program is found by an agency's IG to be noncompliant for more than 3 consecutive years, the agency must submit to Congress within 30 days of such determination a reauthorization proposal for each noncompliant program or any proposed statutory changes it deems necessary to bring the program into compliance. OMB guidance instructs agencies with "three or more" years of reported noncompliance to submit this information, thereby including those with exactly 3 years of reported consecutive noncompliance. Additionally, OMB guidance states that agencies should share these proposals or plans with their respective IGs. Overall, we found that some agencies complied with these requirements with varying degrees of detail, while others did not submit the required information to Congress. These are detailed below.
Three agencies fulfilled the requirement in IPERA and the related OMB guidance by submitting proposals for reauthorization or statutory change to Congress, although they did not always submit them within the required time frames. For example, when the Treasury IG reported in its fiscal year 2013 report that the Internal Revenue Service's EITC program had been noncompliant with IPERA criteria each year since fiscal year 2011 because it reported error rates exceeding 10 percent, the agency submitted proposals to Congress in response. However, this action was not taken until August 2014, exceeding the 30-day period for submission that began in April 2014 when the Treasury IG issued its compliance determination in its report. Although the agency fulfilled the requirement in IPERA to submit legislative proposals to Congress, the Treasury IG further recommended that the agency submit a more comprehensive plan to Congress, including corrective actions to be implemented to correct noncompliance in the EITC program. This plan was submitted to Congress in June 2015. DOL's Unemployment Insurance program first reached 3 consecutive years of noncompliance in fiscal year 2013 and was again found to be noncompliant in fiscal year 2014. After 3 consecutive years of noncompliance for the program as of fiscal year 2013, DOL officials stated that they began preparing a legislative package; however, this package was never transmitted to Congress. When the agency's Unemployment Insurance program was deemed noncompliant for 4 consecutive years as of fiscal year 2014, DOL produced legislative proposals and sent a letter to Congress, although its submissions also exceeded the 30-day deadline following the IG's report. Specifically, DOL submitted a letter to Congress regarding Unemployment Insurance improper payments on September 30, 2015, and submitted legislative proposals to OMB in November 2015, which were later included in the President's Budget for fiscal year 2017 that was transmitted to Congress. HHS accounted for two programs that were noncompliant with IPERA criteria for 3 or more consecutive years as of fiscal year 2014: the Medicare Fee-for-Service program and TANF. HHS submitted letters to Congress containing legislative proposals and information on corrective actions for both programs, and also stated that it is working with OMB to develop an alternative approach for devising an improper payment estimate for TANF, which has been unable to report an estimate in any year because of the statutory limitations described above.

Although IPERA and OMB guidance require agencies to submit proposals for reauthorization or statutory changes after 3 consecutive years of a program's reported noncompliance with the act's criteria, four agencies did not do so because they concluded that reauthorization or legislative changes were not necessary. Officials at certain agencies told us that implementing corrective actions at the agency level is the only way to bring their programs into compliance under IPERA because, in some cases, legislative provisions are not the reason that agencies fail to meet the IPERA criteria. Additionally, we were told that reauthorization is not always practical or necessary for every program.
For example, USDA did not submit proposals to Congress for some of its noncompliant programs because, according to agency officials, the time frame established by statute for reauthorization did not coincide with the requirement in IPERA to submit such proposals. For fiscal year 2014, the USDA IG reported five programs at USDA as noncompliant with IPERA criteria for 4 consecutive years. In its fiscal year 2014 report, the IG recommended that USDA's Food and Nutrition Service (FNS) submit proposals for legislative changes to Congress for three of these programs—the Child and Adult Care Food Program, the National School Lunch Program, and the School Breakfast Program. The IG had previously made this recommendation in its fiscal year 2013 report after these programs were noncompliant for 3 consecutive years. Despite USDA's response at the time that it would issue guidance to its agencies to comply with this recommendation, no proposals were submitted to Congress. In its fiscal year 2014 response, FNS noted that its opportunities to suggest reauthorization proposals for the National School Lunch and School Breakfast programs are limited to times of reauthorization, which occur every 5 to 6 years and may not coincide with the timing of the IPERA requirement. FNS stated that it submits budget proposals each year as part of USDA's annual budget process, but some of its fiscal year 2016 proposals related to funding aimed at reducing improper payments were not included in USDA's final budget. Further, FNS did not submit a reauthorization proposal for its Special Supplemental Nutrition Program for Women, Infants, and Children in response to 3 and 4 years of consecutive noncompliance in fiscal years 2013 and 2014, respectively, though the agency stated in its official response to the IG's fiscal year 2014 IPERA compliance report that it was involved in reauthorization discussions with USDA. USDA's fifth program reported as noncompliant for 3 consecutive years in fiscal year 2014—the Direct and Counter-Cyclical Payments program—was repealed by the February 2014 enactment of the Agricultural Act of 2014. Because this program is no longer authorized to receive appropriations, the need for a reauthorization proposal no longer exists.

SBA officials stated that the agency also did not submit proposals for reauthorization or statutory changes to Congress because its noncompliant programs—7(a) Guaranty Loan Approvals and Disaster Assistance Loans—are permanently authorized and thus do not require reauthorization. Because of this, SBA officials told us, submitting reauthorization proposals to Congress would be inapplicable and unnecessary. The agency also noted that statutory changes were not proposed for these programs because they would not reduce their improper payment error rates or address the root causes of improper payments. SBA officials stated that they consider corrective actions at the agency to be the most appropriate solution to achieving IPERA compliance in future years.
The SBA IG further indicated that corrective actions appeared to be effective at reducing improper payments for the Disaster Assistance Loans program; as a result, in fiscal year 2015 the IG changed to "implemented" the status of the corrective actions addressing its management challenge related to the program. Officials at DOT stated that the agency did not submit proposals for reauthorization or statutory changes because such actions would not help the agency achieve compliance with IPERA criteria for its Federal Transit Administration Formula Grants program, which failed to meet its improper payment reduction target for 3 consecutive years. The agency noted that in its fiscal year 2014 agency financial report, it did not identify any statutory or regulatory barriers that would prevent the agency from implementing corrective actions to reduce improper payments. Submitting a proposal for reauthorization or statutory changes to the Formula Grants program would appear to contradict this assessment and was unnecessary, according to DOT. DOD did not submit proposals for reauthorization or statutory changes to Congress in response to 3 consecutive years of noncompliance in its Travel Pay program as of fiscal year 2014. In DOD's response to the IG's fiscal year 2014 IPERA compliance report, officials stated that the root causes of the agency's noncompliance are covered by existing internal controls and regulations that management would ensure were implemented and enforced. In December 2015, DOD issued an internal memorandum addressing the need for internal controls and training intended to reduce improper payments within the Travel Pay program, rather than submitting proposals to Congress.

Although officials at some of the agencies maintain that reauthorization or statutory change will not achieve compliance under IPERA, when agencies do not report to Congress as required after 3 consecutive years of noncompliance and inform Congress of their challenges in achieving compliance, Congress is limited in its ability to monitor the law's implementation and ensure that its intent is being fulfilled. Standards for Internal Control in the Federal Government also states that management should ensure an adequate means of communicating to external stakeholders information that may have a significant impact on the agency. Two agencies—the Department of Homeland Security (DHS) and the Social Security Administration (SSA)—were not initially required to submit proposals for reauthorization or statutory changes to Congress in response to fiscal year 2014 noncompliance because their respective IGs initially did not report the agencies as noncompliant for 3 consecutive years. Because the legal requirement to report to Congress is triggered by the IG reporting noncompliance, rather than the noncompliance itself, these two agencies are not subject to the congressional reporting requirement until such time as each IG updates its determination. These instances are detailed later in this report. We found that in conducting their IPERA compliance reviews for fiscal year 2014, certain IGs did not consistently adhere to the requirements contained in IPERA and other applicable laws, such as the Disaster Relief Appropriations Act, 2013, as well as the guidance provided by OMB to clarify IPERA criteria and establish reporting requirements for OMB-designated high-priority programs.
Specifically, we found that for fiscal year 2014, the IGs did not always determine compliance as required by IPERA, summarize agency compliance as directed by OMB guidance, assess their agencies' high-priority programs in accordance with OMB guidance, or report determinations of compliance for disaster relief programs that reported improper payment estimates. Based on our review of IG IPERA reports, we noted that five IGs did not fully adhere to OMB's guidance for conducting their IPERA compliance reviews. As previously noted, OMB Circular A-123, Appendix C, contains guidance for IGs to use in carrying out their reviews of agencies' improper payment information as required by IPERA. Specifically, this guidance directs each agency IG's IPERA report to include a high-level summary toward the beginning of the report that (1) indicates which of the six specific criteria contained in IPERA the agency did and did not comply with and (2) clearly states the agency's compliance status overall. In accordance with IPERA, OMB's guidance states that if an agency does not meet one or more of the six IPERA criteria for any one or more of its programs, it is considered noncompliant overall under IPERA. IPERA does not support a finding of partial compliance by an IG. Table 5 lists the instances in which IGs did not adhere to OMB's implementing guidance for high-level summaries in their fiscal year 2014 reports. We found that 4 of the 24 IGs did not specify in their reports which of the six IPERA criteria their agencies did and did not comply with. Specifically, IGs at DOI, the Department of State (State), GSA, and SSA did not fulfill this requirement. For example, the GSA IG noted that GSA reported inaccuracies in its fiscal year 2014 AFR information and did not complete corrective actions from the previous year's review, but it was unclear whether these instances resulted in determinations of compliance or noncompliance with the IPERA criteria for publishing a PAR/AFR in accordance with OMB guidance or publishing corrective actions. We also found that IGs at 3 of the 24 CFO Act agencies—DOI, State, and DOT—did not clearly state whether their agencies were overall compliant or noncompliant with the IPERA criteria. Specifically, the DOI IG's report failed to include an explicit statement of overall agency-level compliance, while the language in the IG reports regarding agency-level compliance under IPERA at State and DOT was unclear. Although the State IG's report concluded that State "was in substantial compliance with improper payment requirements," we learned from OMB officials that the State IG determined that the agency was compliant under IPERA. The DOT IG's report was also inconclusive, stating that DOT's improper payment reporting "generally complies with IPERA requirements." However, this statement is misleading: the IG also reported that the agency did not comply with one of the IPERA criteria, as two DOT programs failed to meet their improper payment reduction targets for fiscal year 2014. As noted above, IPERA defines compliance as including all six of the listed criteria, and OMB guidance requires an IG to state the agency's overall compliance status in its report. OMB guidance states that IG compliance reviews are an important component of the accountability of improper payment efforts.
Additionally, Standards for Internal Control in the Federal Government provides that timely, relevant, and reliable communications and information are needed for an agency to achieve all of its objectives. When IGs fail to report concrete compliance determinations as the guidance for preparing IPERA compliance reports directs, the comparability and consistency of such reports, and therefore the reports' usefulness to the agency, are reduced. Further, when an IG does not adhere to OMB guidance and statutory requirements by making (1) unclear statements on overall compliance or (2) positive statements of compliance when an agency meets the majority, but not all, of the IPERA criteria, agency officials may reach incorrect conclusions about their agency's compliance status and therefore delay taking corrective actions. For example, as noted above, we discovered that because the DOI IG did not clearly state that its agency was noncompliant under IPERA, DOI officials were unaware of their agency's noncompliant status in fiscal year 2014 until later. Such instances could delay the implementation of corrective actions to remediate noncompliance, contributing to continued noncompliance the following year.

As noted above, two agencies—DHS and SSA—were not initially required to submit proposals for reauthorization or statutory changes to Congress in response to 3 or more consecutive years of noncompliance as of fiscal year 2014 because their respective IGs initially did not report the agencies as noncompliant for 3 consecutive years. However, in response to our audit findings that identified instances of noncompliance the IGs had not previously reported, one of the IGs subsequently determined that its agency had been noncompliant for 3 or more consecutive years. Until the IG revises its determination, the other agency is not subject to the congressional reporting requirement to submit proposals for reauthorization or statutory changes. Specifically, during our audit work we found that several DHS programs had not met their improper payment reduction targets in fiscal years 2011 through 2014. The DHS IG agreed and subsequently reissued its IPERA compliance reports for each of these fiscal years from February through April 2016 to reflect determinations of noncompliance. Upon the reissuance of the IG reports, OMB advised DHS to take the actions required under IPERA in response to 1 year of noncompliance for fiscal year 2014. OMB noted that requiring the agency to take the respective actions required for 1, 2, and 3 years of noncompliance in the same year would be challenging for the agency and unlikely to yield meaningful results. Similarly, the SSA IG reported in fiscal years 2011 and 2012 that the agency was compliant with the six IPERA criteria, but we noted that its Supplemental Security Income (SSI) program failed to meet its reported reduction target in both years. In its fiscal year 2013 and 2014 reports, the SSA IG reported the SSI program as noncompliant with IPERA criteria because the program did not meet its reduction targets. SSA IG officials told us that they have no plans to reissue fiscal year 2011 or 2012 IPERA compliance reports to state a conclusion of noncompliance; thus, the requirement for the agency to take action based on this noncompliance was not triggered by fiscal year 2014 results.
When IGs do not make compliance determinations in accordance with the IPERA criteria, agencies are unable to take the appropriate steps required by the law when programs have been noncompliant for consecutive years. As a result, Congress is not informed of consistently noncompliant programs. In fiscal year 2014, OMB designated 13 programs with total estimated improper payments of $115.3 billion as high priority. These high-priority programs account for 92.5 percent of the total government-wide improper payment estimate. IPERIA amended IPIA to direct OMB to annually identify a list of high-priority programs in need of greater levels of oversight and review. In general, OMB has implemented this requirement by designating a program as high priority when its estimated improper payments exceed $750 million in the most recent fiscal year. OMB requires agencies with high-priority programs to develop supplemental measures on an annual or more frequent basis, and to explain how they have tailored their corrective actions to better reflect the specific processes, procedures, and risks surrounding those programs. Furthermore, the agency IG is required to (1) evaluate the agency's assessment of the program risk level and the quality of the improper payment estimates and methodology, (2) determine the extent of oversight warranted, and (3) provide the agency head with recommendations. However, we found that the SSA IG did not report on its evaluation of SSA's improper payment rate for SSI and Old-Age, Survivors, and Disability Insurance, the agency's two high-priority programs. The report did, however, include recommendations to mitigate the main root cause of SSI overpayments. We also found that the HHS IG reported its evaluation of four of the five HHS high-priority programs, but it did not fulfill the requirement for the Children's Health Insurance Program. Table 6 shows all programs deemed high priority by OMB for fiscal year 2014. When IGs do not fully evaluate high-priority programs as directed by IPERIA and OMB guidance, their compliance reports are incomplete and therefore of less value in communicating to the agencies the deficiencies to be addressed. Specifically, by not reviewing the agencies' risk assessments for high-priority programs and the quality of the programs' improper payment estimates and methodologies, the IGs missed the opportunity to provide their agencies with any recommendations for improving internal controls and preventing and reducing improper payments in those programs.

We found that for fiscal year 2014, 2 of the 16 CFO Act agency IGs—those at SSA and the Department of Housing and Urban Development (HUD)—did not comply with the requirement to assess compliance for disaster relief programs reporting improper payment estimates. Specifically, the SSA IG's report did not assess its agency's Hurricane Sandy Disaster Relief program's compliance with IPERA criteria, although SSA as an agency reported an estimate of the program's improper payments in its fiscal year 2014 AFR. Similarly, the HUD IG's report did not include a determination of HUD's Community Development Block Grant - Disaster Relief program's compliance with IPERA criteria, even though HUD included improper payment estimates and related information for this program in its fiscal year 2014 AFR.
The Disaster Relief Appropriations Act, 2013 requires all programs or activities receiving disaster relief funding appropriated by that act to be considered susceptible to significant improper payments for purposes of IPIA until those funds are expended. Out of a total of 19 federal agencies that received disaster relief funding under the act, 16 are CFO Act agencies. Because programs funded by this act are automatically considered susceptible to significant improper payments under IPIA, those agencies' IGs are required to include their assessments of those programs' compliance under IPERA in their annual reports. Table 7 lists the 16 CFO Act agencies that received funds under the Disaster Relief Appropriations Act, 2013. When IGs do not fully comply with IPERA, the Disaster Relief Appropriations Act, and OMB guidance by failing to assess disaster relief programs' compliance under IPERA, there is an increased risk that agencies' improper payment estimate reporting is inaccurate or incomplete, thereby undermining the agencies' ability to effectively develop and implement corrective action plans for programs and increasing the risk of potential future improper payments. Additionally, IG monitoring and assessment of improper payments in federal disaster relief programs could help ensure that emergency disaster relief funding is distributed to the citizens who need it.

OMB issued the latest iteration of its Circular No. A-123, Appendix C, in October 2014. As stated previously, this guidance changed certain requirements for fiscal year 2014 reporting, extended the reporting period for IG IPERA reports, and eliminated an additional criterion under which IGs assessed whether agencies had reported on efforts to recapture improper payments. This guidance also attempted to make IG determinations of compliance and noncompliance clearer and more concise by requiring that high-level summaries of compliance be included in their reports, both overall and by IPERA criteria. As part of its annual monitoring of agency improper payment estimation and reporting, OMB reviews the IGs' compliance determinations for each criterion, IG recommendations and the agencies' responses, and areas where additional follow-up or guidance may be needed. In addition to these reviews, OMB responds to questions from the agencies and IGs throughout the year and refers them to the applicable guidance on an as-needed basis. In October 2015, upon observing that many of the IPERA compliance reports from fiscal year 2014 and prior years were still difficult to interpret and compare, OMB held a town hall for all federal IGs to clarify its implementation guidance contained in OMB Circular No. A-123, Appendix C. The town hall detailed the six IPERA criteria and how IGs can make clearer determinations of compliance and noncompliance in their upcoming fiscal year 2015 reviews, which were due in May 2016. OMB officials received feedback from IGs that this town hall was useful in tailoring their IPERA reviews and therefore stated that they plan to conduct another town hall for IGs in the summer or fall of 2016. To determine whether IGs corrected deficiencies identified in preliminary GAO findings shared with them and followed the guidance provided in the October 2015 OMB town hall meeting, we conducted a review of the fiscal year 2015 IPERA reports issued in May 2016 by the seven IGs in whose fiscal year 2014 reports we identified deficiencies.
We verified that these deficiencies had been corrected for fiscal year 2015 reporting and therefore determined that no recommendations to these IGs are warranted. OMB officials also stated that they plan to review the compliance status of all agencies contained in the IGs' fiscal year 2015 IPERA compliance reports and identify areas where additional guidance is needed.

Estimated improper payments across the federal government have increased by over $30 billion in the last 2 fiscal years. During the same time period, agency noncompliance with the criteria listed in IPERA, as determined by IGs, also increased, with the highest number of CFO Act agency IGs reporting noncompliance in fiscal year 2014. IPERA compliance reviews serve a key function in helping to ensure that federal dollars are not misspent and that estimates of improper payments are accurate and complete. To allow Congress to effectively monitor compliance with IPERA criteria, it is important for agencies to keep relevant committees notified of the noncompliant status of their programs. In the past year, OMB has made efforts to clarify its IPERA implementation guidance to IGs and federal agency chief financial officers and to address shortfalls in the accuracy and completeness of the IGs' reports, primarily through its communications with the IGs; officials told us that the next planned town hall meeting will occur later in the year.

To help fulfill the IPERA and OMB requirements to submit proposals to Congress when agencies reach 3 or more consecutive years of noncompliance with IPERA criteria, we recommend the following four actions.

We recommend that the Secretary of Agriculture or a designee submit a letter to Congress detailing proposals for reauthorization or statutory changes in response to 3 consecutive years of noncompliance as of fiscal year 2014 for its (1) Child and Adult Care Food Program; (2) School Breakfast Program; (3) National School Lunch Program; and (4) Special Supplemental Nutrition Program for Women, Infants, and Children. To the extent that reauthorization or statutory changes are not considered necessary to bring a program into compliance, the Secretary or designee should state so in the letter.

We recommend that the Administrator of the Small Business Administration or a designee submit a letter to Congress detailing proposals for reauthorization or statutory changes in response to 3 consecutive years of noncompliance as of fiscal year 2014 for the agency's (1) 7(a) Guaranty Loans program and (2) Disaster Assistance Loans program. To the extent that reauthorization or statutory changes are not considered necessary to bring the programs into compliance, the Administrator or designee should state so in the letter.

We recommend that the Secretary of Transportation or a designee submit a letter to Congress detailing proposals for reauthorization or statutory changes in response to 3 consecutive years of noncompliance as of fiscal year 2014 for the agency's Federal Transit Administration's Formula Grants program. To the extent that reauthorization or statutory changes are not considered necessary to bring the program into compliance, the Secretary or designee should state so in the letter.

We recommend that the Secretary of Defense or a designee submit a letter to Congress detailing proposals for reauthorization or statutory changes in response to 3 consecutive years of noncompliance as of fiscal year 2014 for its DOD Travel Pay program.
To the extent that reauthorization or statutory changes are not considered necessary to bring the program into compliance, the Secretary or designee should state so in the letter.

We provided a draft of this report to OMB, the IG offices of the 24 CFO Act agencies, and the CFO offices of those agencies with programs that were determined to be noncompliant with IPERA criteria for 3 consecutive years as of fiscal year 2014: DOD, DOL, DOT, HHS, SBA, SSA, Treasury, and USDA. These eight agencies included all four of the agencies to which we made recommendations. We received responses from all organizations that were provided the draft report. Table 8 summarizes the responses received from the 24 CFO Act agencies and their IG offices. DOD and the DOD IG provided a combined response, which is reprinted in appendix IV. OMB's written comments are reprinted in appendix XIII. As noted in the table, some of the agencies' CFO and IG offices also provided technical comments, which we incorporated as appropriate. We also received e-mailed responses from officials at the following offices, stating that the organization had no comments on the draft report: the CFO offices of SBA and SSA; and the IG offices of USDA, the Department of Commerce, the Department of Education, the Department of Energy, the Department of Justice, DOL, the Environmental Protection Agency, NSF, the Nuclear Regulatory Commission, SBA, and the U.S. Agency for International Development. In their written comments, the CFO offices of DOD and DOT concurred with our recommendations. The DOD CFO office noted that it does not consider reauthorization or statutory change necessary for its Travel Pay program, but will submit a letter to Congress containing planned actions for improvement by August 30, 2016. The DOT CFO office stated that DOT establishes aggressive reduction targets for its programs and noted improvement in its Formula Grants program, which met its reduction target in fiscal year 2015 after being noncompliant under IPERA for 3 consecutive years. In their e-mailed responses, officials from the CFO offices at SBA and USDA neither concurred nor disagreed with our recommendations. In their written comments, IGs at DOD, DHS, DOI, State, and VA concurred with our findings. OMB provided written comments that reiterated its commitment to improving payment accuracy across the federal government. The HHS IG concurred with our finding in an e-mailed response. In its written response, the GSA IG office stated that it partially agreed with the findings in our report. While the GSA IG office agreed that its fiscal year 2014 high-level summary of IPERA compliance could be improved, it noted that its summary paragraph specifically identified the criteria that GSA did not meet. However, as we noted in this report, the GSA IG reported inaccuracies in GSA's fiscal year 2014 AFR information and a failure to complete corrective actions from the previous year's review, but it was unclear whether these instances resulted in determinations of compliance or noncompliance with IPERA criteria. OMB guidance directs IGs to report not only on the IPERA criteria that the IG has determined the agency did not meet but also on those criteria the agency met. The GSA IG office further stated that it has taken actions to clarify its high-level summary in its fiscal year 2015 IPERA compliance report.
In an e-mailed response dated June 1, 2016, the DOT Assistant IG for Information Technology and Audits disagreed with our finding that the IG's determination that DOT "generally complied" with IPERA criteria was misleading and also stated that the magnitude of DOT's instances of noncompliance was not conveyed in our report. However, as stated in our report, IPERA defines compliance as including all six of the listed criteria, and thus if an IG reports that any of the IPERA criteria are not met by its agency, the agency is overall noncompliant. Our report also identifies the one criterion and program responsible for DOT's noncompliance in fiscal year 2014. Despite its disagreement with our findings, the DOT IG office reported a clearer determination of agency compliance in its fiscal year 2015 report, stating that the agency "(did) not comply with IPERA requirements."

As discussed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the Office of Management and Budget, all CFO Act agencies' inspectors general, and select CFO Act agencies' chief financial officers. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix XIV.

Our objectives were to review (1) the number of agencies, among those listed in the Chief Financial Officers Act of 1990, as amended (CFO Act), that complied with the criteria listed in the Improper Payments Elimination and Recovery Act of 2010 (IPERA), as reported by their inspectors general (IG), for fiscal years 2011 through 2014, and what criteria and programs the IGs concluded were primarily responsible for instances of agency noncompliance; (2) the number of programs at the 24 CFO Act agencies that were determined noncompliant with IPERA criteria by their IGs for 3 or more consecutive years, as of fiscal year 2014, and the extent to which the responsible agencies submitted the required information to Congress; and (3) the extent to which CFO Act agency IGs have adhered to certain IPERA requirements and the related Office of Management and Budget (OMB) guidance contained in OMB Circular No. A-123, Appendix C, in their fiscal year 2014 improper payment compliance reviews, including reporting on special disaster relief appropriations and OMB-designated high-priority programs. Although IPERA requirements apply to the head of each executive agency, we reviewed only the reports of those agencies designated as CFO Act agencies because these agencies represented over 99 percent of the total government-wide improper payments reported in fiscal year 2014. To address our first objective, we identified the requirements that agencies must meet by reviewing the Improper Payments Information Act of 2002, as amended; IPERA; and OMB guidance.
We analyzed CFO Act agency IGs' fiscal year 2014 IPERA reports, which were the most current reports available at the beginning of our review; summarized information related to agency compliance with IPERA criteria; and identified common findings and related causes of improper payments, as reported by the IGs. We also relied on and reviewed prior year supporting documentation and analyses of CFO Act agencies' IG IPERA reports for fiscal years 2011, 2012, and 2013, as reported in GAO's December 2014 report (GAO-15-87R), and compared agencies' compliance with each IPERA criterion over fiscal years 2011 through 2013, as reported by the IGs. To summarize the CFO Act agencies' noncompliant programs for fiscal years 2011 through 2014, we compared data from the IG IPERA reports and agencies' performance and accountability reports (PAR) and agency financial reports (AFR) for those years. We also determined the programs responsible for noncompliance over this period by analyzing and summarizing the determinations made in the IG reports. Our work did not include validating or retesting the data or methodologies used by the IGs in coming to their conclusions. We confirmed our findings with the relevant CFO Act agency IGs and OMB. We also obtained and summarized OMB and agencies' data on improper payment estimates by agency program (see app. III). To address our second objective, we summarized IG determinations made in the IGs' annual IPERA compliance reports from fiscal year 2011 through fiscal year 2014. We corroborated our findings with OMB and the relevant CFO Act agency IGs. To determine whether agencies responsible for these noncompliant programs had submitted either proposals for reauthorization or statutory changes to Congress, we interviewed and requested information from the relevant agency offices of the chief financial officer in coordination with the agency IGs. We did not make conclusions as to the sufficiency or completeness of the information contained in proposals for reauthorization or statutory changes submitted to Congress. To address our third objective, we identified requirements that agencies' IGs must meet by reviewing IPERA, the Improper Payments Elimination and Recovery Improvement Act of 2012, and OMB guidance for IG IPERA reports, which is contained in OMB Circular No. A-123, Appendix C (OMB Memorandum M-15-02). We compared CFO Act agency IGs' improper payment reporting for fiscal year 2014 to statutory requirements and OMB guidance, including reporting on high-priority programs and disaster relief funds. To determine the population of OMB's high-priority programs, we obtained the list for fiscal year 2014 from www.paymentaccuracy.gov. To ensure that this list was reported correctly on the website, we interviewed OMB officials and corroborated the information. For each agency responsible for a high-priority program, we reviewed the related IG's IPERA compliance report for fiscal year 2014 to ensure that the IG's review of the high-priority program met all elements prescribed by OMB Memorandum M-15-02. For agencies reporting improper payment estimates for disaster relief funding, we reviewed the Disaster Relief Appropriations Act, 2013, and determined whether the agencies listed therein reported improper payment estimates and whether their IGs reported compliance determinations for those programs in their fiscal year 2014 IPERA reports. We also compared the structure and content of the fiscal year 2014 IG reports to those required by IPERA and OMB Memorandum M-15-02.
Further, we reviewed the improper payments reporting contained in the AFRs and PARs of the CFO Act agencies for fiscal year 2014 to ensure that certain IG compliance determinations for some criteria agreed with our observations. Specifically, in each fiscal year 2014 CFO Act agency AFR or PAR, we determined whether the following IPERA Section 3 criteria were met for each program assessed to be at risk for significant improper payments: (1) corrective action plans were reported, (2) annual reduction targets were published and met, and (3) a gross improper payment estimate of less than 10 percent was reported for each program. For the remaining criteria related to publishing the required information in the PAR or AFR, conducting risk assessments, and publishing improper payment estimates for programs deemed susceptible to significant improper payments, we did not make conclusions but relied on the IGs' judgments of compliance and noncompliance. IGs gave their respective agencies the opportunity to comment on their fiscal year 2014 IPERA compliance reports, and we reviewed all agency and IG responses. To determine whether certain IGs corrected deficiencies identified in preliminary GAO findings shared with them and followed OMB guidance for fiscal year 2015 reporting, we reviewed the fiscal year 2015 IPERA reports issued in May 2016 by the seven IGs in whose fiscal year 2014 reports we identified deficiencies. We determined that the conclusions in the IGs' reports were sufficiently reliable for our reporting purposes. We conducted this performance audit from September 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 9 lists the Chief Financial Officers Act of 1990 agencies and the programs that their inspectors general reported in fiscal year 2014 as noncompliant with the Improper Payments Elimination and Recovery Act of 2010. In addition to the contact named above, Philip McIntyre (Assistant Director), Laura Bednar (Auditor-in-Charge), Maria Belaval, Wilfred Holloway, Jason Kelly, Jason Kirwan, and Ricky A. Perry, Jr., made key contributions to this report.

IPERA calls for executive branch agencies' IGs to annually determine whether their agencies complied with six criteria related to the estimation of improper payments, including conducting risk assessments, publishing corrective action plans, and meeting annual reduction targets. In the last 2 fiscal years, total estimated improper payments reported by federal agencies have increased considerably. Specifically, improper payment estimates across the government for fiscal year 2015 totaled $136.7 billion, over $30 billion higher than the estimated total for fiscal year 2013. GAO was asked to review compliance under IPERA as reported by IGs for fiscal year 2014.
This report examines to what extent the 24 CFO Act agency IGs (1) reported that agencies complied with the IPERA criteria for fiscal years 2011 through 2014, and what criteria and programs were responsible for agency noncompliance; (2) reported programs to be noncompliant for 3 consecutive years as of fiscal year 2014, and whether agencies submitted the required information to Congress; and (3) adhered to statutory requirements and OMB guidance for reporting on fiscal year 2014 IPERA compliance reviews. For fiscal year 2014, 15 of the 24 Chief Financial Officers Act (CFO Act) agency inspectors general (IG) determined that their agencies did not comply with criteria in the Improper Payments Elimination and Recovery Act of 2010 (IPERA). This is the largest number of CFO Act agencies reported as noncompliant under IPERA since the requirement for IGs to report on their agencies' compliance was implemented in fiscal year 2011, and represents an increase of 4 agencies from fiscal year 2013. In fiscal year 2014, IGs reported 38 programs accounting for $100.6 billion in estimated improper payments as responsible for instances of noncompliance. Agency noncompliance for fiscal year 2014 was largely due to agencies failing to meet improper payment reduction targets or to report improper payment error rates at less than 10 percent for all programs. If the 5 agencies with programs exceeding 10 percent error rates had reported error rates under the threshold set in IPERA, the government-wide improper payment estimate would have been $23.1 billion, or 18.6 percent, lower. In addition, 18 programs at 9 agencies were reported as noncompliant with IPERA criteria by their agencies' IGs for at least 3 consecutive years as of fiscal year 2014. Agencies with programs reported as noncompliant for 3 consecutive years are required to submit proposals to Congress to reauthorize the programs or change the statutes that established them. However, GAO found that only 3 agencies submitted such information to Congress. When agencies do not report to Congress as required, Congress is limited in its ability to monitor the implementation of IPERA and ensure that its intent is being fulfilled. Certain IGs also did not fully adhere to Office of Management and Budget (OMB) guidance or statutory requirements for IPERA reporting for fiscal year 2014 by failing to (1) clearly state the agency's compliance status overall and with each of the six criteria, (2) report on programs designated high priority by OMB as necessary, or (3) report compliance determinations for disaster relief programs. In the past year, OMB has made efforts to clarify its guidance to IGs. To determine whether IGs made changes in response to OMB's efforts and deficiencies identified in GAO's preliminary findings shared with them, GAO reviewed select fiscal year 2015 IPERA reports issued by IGs in May 2016. GAO concluded that the IGs corrected the issues identified during this review in their fiscal year 2015 IPERA reports, and no recommendations to IGs are warranted. GAO recommends that four agencies submit the required proposals to Congress in response to 3 years of noncompliance with IPERA criteria. The Departments of Defense and Transportation concurred with GAO's recommendations, and the Department of Agriculture and the Small Business Administration stated that they had no comments on the draft report.
DOD has not effectively managed important aspects of the requirements for DIMHRS (Personnel/Pay) to ensure that they are complete, correct, and unambiguous. Requirements are the foundation for designing, developing, and testing a system. Incorrect or incomplete requirements have been commonly identified as a cause of systems that do not meet their cost, schedule, or performance goals. Disciplined processes and controls for the definition and management of requirements are described in published models and guides, such as the Capability Maturity Models developed by Carnegie Mellon University's Software Engineering Institute and standards developed by the Institute of Electrical and Electronics Engineers. DOD's management of DIMHRS (Personnel/Pay) requirements had several shortcomings.

First, DOD did not initially ensure that the system requirements and system design were aligned. DOD required the contractor to base the system design only on high-level (more general) requirements, providing detailed requirements to the contractor for information only. However, according to the program office, the system design should be based on the detailed requirements, and following our inquiries, the program office began tracing backward and forward between the detailed requirements and the system design to ensure consistency. Among other things, DOD is analyzing financial system standards to ensure that all applicable standards are included in the requirements and design for DIMHRS (Personnel/Pay). Without consistency between requirements and design, the risk is increased that the developed and deployed system will not fully satisfy financial system standards and users' needs.

Second, DOD did not ensure that the detailed requirements include important content and that they are clear and unambiguous. The requirements for the interfaces between DIMHRS (Personnel/Pay) and existing systems are not yet complete because DOD has not yet determined the extent to which legacy systems will be replaced and thus which systems will require modification in order to interact with the new system. Furthermore, DOD is still determining whether the data requirements provided to the contractor for system design are complete. Finally, about 77 percent of the detailed requirements are difficult to understand, based on our review of a random sample of the requirements documentation. Our review showed that this documentation did not consistently provide a clear explanation of the relationships among the parts of each requirement (business rules; information requirements; and references to regulations, laws, standards, and so on) or adequately identify the sources of data required for computations. If requirements are not complete and clear, their implementation in the system is not likely to meet users' needs.

Third, DOD has not obtained user acceptance of the detailed requirements. As we have pointed out, when business process changes are planned, users' needs and expectations must be addressed, or users may not accept the change, which can jeopardize the effort. One way to ensure and demonstrate user acceptance of requirements is to obtain sign-off on the requirements by end-user representatives. However, although the DIMHRS (Personnel/Pay) program obtained the user organizations' formal acceptance of the high-level requirements, the process used to define the detailed requirements has not resulted in such acknowledgment of agreement on the requirements.
Program officials stated that gaining formal agreement from some of the user organizations would delay the program and be impractical because of end users' reluctance to accept a set of joint requirements that requires them to make major changes in their current ways of processing military personnel and pay actions. We have previously observed this challenge and have stated that DOD's organizational structure and embedded culture work against efforts to modernize business processes and implement corporate information systems such as DIMHRS (Personnel/Pay) across component lines. Nevertheless, not attempting to obtain agreement on DIMHRS (Personnel/Pay) requirements increases the risk that users will not accept and use the developed and deployed system, and that later system rework will be required to make it function as intended DOD-wide and achieve stated military human capital management outcomes. According to DIMHRS (Personnel/Pay) officials, a number of actions have been taken to reduce the risk that users will not accept the system, including conducting numerous focus groups, workshops, demonstrations, and presentations explaining how the DIMHRS (Personnel/Pay) software product could address DOD's existing personnel/pay problems. However, DIMHRS (Personnel/Pay) officials stated that support for the system by the services' executives is mixed. For example, the officials said that (1) Army executives are committed to implementing and using the DIMHRS (Personnel/Pay) system because they believe it will address many problems that the Army currently faces; (2) Air Force officials generally support the system but say they do not yet know whether the system will meet all their needs; and (3) Navy and Marine Corps executives are not as supportive because they are not fully convinced that DIMHRS (Personnel/Pay) will be an improvement over their existing systems. The shortcomings in DOD's efforts to effectively manage DIMHRS (Personnel/Pay) requirements are attributable to a number of causes, including the program's overly schedule-driven approach and the difficulty of overcoming DOD's long-standing cultural resistance to departmentwide solutions. These shortcomings leave DOD without adequate assurance that the requirements will accurately reflect the end users' needs and that the resulting system design reflects validated requirements that will fully meet DOD's needs.

DOD does not have a well-integrated structure for managing DIMHRS (Personnel/Pay), which DOD has described as an integrated program, and it is not following some key supporting processes for acquiring COTS-based business systems. Program responsibility, accountability, and authority are diffused. Leading organizations structure programs so that a single entity has clear authority, responsibility, and accountability for the program. For DIMHRS (Personnel/Pay), these are spread among three key stakeholder groups whose respective chains of command do not meet at any point below the Secretary and Deputy Secretary of Defense levels. Responsibility for requirements definition rests with a joint requirements development office, which is accountable through one chain of command. Responsibility for system acquisition rests with the program office, which is accountable through another chain of command. Responsibility for preparing for transition to the new system rests with the end-user organizations—11 major DOD components reporting through five different chains of command.
This is consistent with our earlier observation that DOD's organizational structure and embedded culture have not adequately accommodated an integrated, departmentwide approach to joint systems. Without a DOD-wide integrated governance structure for a joint, integrated program like DIMHRS (Personnel/Pay), the risk is increased that the program will not produce an integrated set of outcomes.

The system has not been defined and designed according to a DOD-wide integrated enterprise architecture. In accordance with the National Defense Authorization Act for Fiscal Year 2003, DOD has been developing a departmentwide Business Enterprise Architecture (BEA), and it has been reviewing programs with proposed obligations of funds greater than $1 million, such as DIMHRS (Personnel/Pay), for consistency with the BEA. In April 2003, the DOD Comptroller certified DIMHRS (Personnel/Pay) to be consistent with the BEA on the basis of the program manager's commitment that the yet-to-be-developed system would be designed to be consistent with the yet-to-be-developed architecture. To follow through on this commitment, DOD included a requirement in the DIMHRS (Personnel/Pay) contract that the systems specification be compatible with the emerging BEA. DIMHRS (Personnel/Pay) officials recognize that the April 2003 architectural certification is preliminary and stated that DIMHRS (Personnel/Pay) will undergo another certification before the system deployment decision. By that time, however, lengthy and costly design and development work will have been completed. The real value in having and using an architecture is knowing during system definition, design, and development what the larger blueprint for the enterprise is, so that these activities can be guided and constrained by this frame of reference. Aligning the system to the architecture after it is designed would require expensive rework to address any inconsistencies.

Program stakeholders' activities have not been managed according to a DIMHRS (Personnel/Pay)-integrated master plan/schedule. An effective master plan/schedule should allow for the proper scheduling and sequencing of activities and tasks, the allocation of resources, the preparation of budgets, and the assignment of personnel, and should establish criteria for measuring progress. However, the DIMHRS (Personnel/Pay) program plan/schedule is based on the contractor's and program office's activities and does not include all the activities that end-user organizations must perform to prepare for DIMHRS (Personnel/Pay), such as the redesign of legacy systems and interfaces, business process reengineering, and workforce change management. Without a true master plan/schedule of activities that includes all DOD program stakeholders, the risk increases that key and dependent events, activities, and tasks will not be performed as needed, which in turn increases the risk of schedule slippage and program goal shortfalls.

Some, but not all, best practices associated with acquiring COTS-based business systems are being followed. An example of a best practice that DOD is following is to discourage the modification of commercial software components without thorough justification; DOD's contract includes award fees that give the contractor incentives to, among other things, minimize the customization of the COTS software. An example of a best practice that DOD is not following is to ensure that plans and schedules explicitly provide for preparing users for the new business processes associated with the commercial components.
DOD does not have an integrated program plan/schedule that provides for end-user organization activities associated with preparing users for the changes that the system will introduce. Because it is not following all best practices associated with acquiring COTS-based systems, DOD is increasing the risk that DIMHRS (Personnel/Pay) will not be successfully implemented and effectively used. DOD's efforts to employ an integrated program management approach have not been effective for a number of reasons, including DOD's long-standing cultural resistance to departmentwide solutions. Without an integrated approach and effective processes for managing a program that is intended to be an integrated solution that maximizes the use of commercially available software products, DOD increases the risk that the program will not meet cost, schedule, capability, and outcome goals.

The importance of DIMHRS (Personnel/Pay) to DOD's ability to manage military personnel and pay services demands that the department employ effective processes and governance structures in defining, designing, developing, and deploying the system to maximize its chances of success. For DIMHRS (Personnel/Pay), however, DOD did not initially perform important requirements-development steps, and the detailed system requirements are missing important content. DOD has begun to remedy these omissions by taking actions such as tracing among requirements documents and system design documents to ensure alignment, but user organizations' acceptance of requirements has not occurred. Moreover, although DIMHRS (Personnel/Pay) is to be an integrated system, it is not being governed by integrated tools and approaches, such as an integrated program management structure, an integrated DOD business enterprise architecture, and an integrated master plan/schedule. Furthermore, while DOD is appropriately attempting to maximize the use of COTS products in building DIMHRS (Personnel/Pay) and is following some best practices for developing COTS-based systems, it is not following others. The absence of the full complement of effective processes and structures related to each of these areas can be attributed to a number of causes, including the program's overly schedule-driven approach and the difficulty of overcoming DOD's long-standing cultural resistance to departmentwide solutions. Effectively addressing these shortcomings is essential because they introduce unnecessary risks that reduce the chances of accomplishing DIMHRS (Personnel/Pay) goals on time and within budget. It is critical that DOD carefully consider the risks caused by each of these areas of concern and that it appropriately strengthen its management processes, structures, and plans to effectively minimize these risks. To do less undermines the chances of timely and successful completion of the program.

To assist DOD in strengthening its program management processes, structures, and plans and thereby increase its chances of successfully delivering DIMHRS (Personnel/Pay), we recommend that you direct the Assistant Secretary (Networks and Information Integration), the Under Secretary (Personnel and Readiness), and the Under Secretary (Comptroller), in collaboration with the leadership of the military services and DFAS, to take the following six actions to jointly ensure an integrated, coordinated, and risk-based approach to all DIMHRS (Personnel/Pay) definition, design, development, and deployment activities.
At a minimum, this should include

- ensuring that joint system requirements are complete and correct, and that they are acceptable to user organizations;
- establishing a DOD-wide integrated governance structure for DIMHRS (Personnel/Pay) (1) that vests an executive-level organization or entity representing the interests of all program stakeholders—including the Joint Requirements and Integration Office, the Joint Program Management Office, the services, and DFAS—with responsibility, accountability, and authority for the entire DIMHRS (Personnel/Pay) program and (2) that ensures that all stakeholder interests and positions are appropriately heard and considered during program reviews and before key program decisions;
- ensuring that the degree of consistency between DIMHRS (Personnel/Pay) and the evolving DOD-wide business enterprise architecture is continuously analyzed and that material inconsistencies between the two, both potential and actual, are disclosed at all program reviews and decision points and in program budget submissions, along with any associated system risks and steps to mitigate these risks;
- developing and implementing a DOD-wide, integrated master plan/schedule of activities that extends to all DOD program stakeholders;
- ensuring that all relevant acquisition management best practices associated with COTS-based systems are appropriately followed; and
- ensuring that an event-driven, risk-based approach that adequately considers factors other than the contract schedule continues to be used in managing DIMHRS (Personnel/Pay).

In written comments on a draft of this report (reprinted in app. II), the Under Secretary of Defense for Personnel and Readiness stated that DOD largely agrees with the thrust of our recommendations, and that it is already following, to the extent practicable, the kind of acquisition best practices embodied in them. The department also made two overall comments about the report and provided a number of detailed comments pertaining to five of our six recommendations. The first overall comment was that our espousal of certain system acquisition management best practices resulted in incongruity among our recommendations. In particular, DOD indicated that our recognition that DOD is appropriately limiting modification of COTS products (a best practice) is incongruous with our recommendation that requirements be acceptable to user organizations (another best practice). It further stated that if it acted on all comments that it received on requirements from all sources, as it suggested we were recommending, then this would result in excessive modification to the COTS product. We do not agree with DOD’s points; a careful reading of our recommendations shows that the department has not correctly interpreted and characterized those recommendations that pertain to this overall comment. Specifically, our report does not recommend that DOD act on all comments obtained from all sources, regardless of the impact and consequences of doing so. Rather, the report contains complementary recommendations for ensuring that the system requirements are acceptable to user organizations and discouraging changes to the COTS product unless the life-cycle costs and benefits justify making them.
In short, our recommendations concerning system requirements are intended to provide DOD with the principles and rules that it should apply in executing a requirements-acceptance process that permits all stakeholder interests and positions to be heard, considered, and resolved in the context of what makes economic sense. While DOD’s comments note that a process was followed to screen out user inputs that, for example, necessitated changes to the COTS product, this process did not provide for the effective resolution of such inputs, as shown in our report by certain user organizations’ comments: specifically, that their involvement in defining detailed requirements was limited, that their comments on these requirements were not fully resolved, and that they were not willing to sign off on the requirements as sufficient to meet their needs. This lack of resolution is important because not attempting to obtain some level of stakeholder acceptance of requirements increases the risk that the system will not adequately meet users’ needs, that users will not adopt the system, and that later system rework will be required to rectify this situation. The second overall comment was that the department was already employing acquisition management best practices, to the extent practicable, and that the management process for the program is innovative and groundbreaking for DOD, going far beyond what is required by the department’s regulations. For example, the department commented that the system-requirements documentation far exceeds that which has been available for any other system effort. We do not dispute DOD’s comment about efforts on this system relative to other system acquisitions, because our review’s objectives and approach did not extend to comparing DIMHRS (Personnel/Pay) with other DOD acquisitions. However, our review did address DOD’s use of key acquisition management best practices on DIMHRS (Personnel/Pay), and in this regard we support the department’s recognition of the importance of these practices. In our report, we have provided a balanced message by recognizing instances where best practices were being followed, such as when DOD began tracing detailed system requirements to the system design following inquiries that we made during the course of our review. However, we do not agree that at the time we concluded our work DOD was following all relevant and practicable best practices; examples of these practices are cited in our report and discussed below in our response to DOD’s detailed comments on our individual recommendations. In its comments specific to our six recommendations, the department agreed without further comment with one recommendation (to develop and implement a DOD-wide, integrated master plan/schedule of activities that extends to all DOD program stakeholders). In addition, it either partially agreed or partially disagreed with our other five recommendations, and it provided detailed comments on each. Generally, DOD’s areas of disagreement relate to its view that it is already performing the activities that we recommend. DOD partially concurred with our recommendation to ensure that joint system requirements are complete and correct and acceptable to user organizations. 
In this regard, DOD stated that it has already taken great pains to ensure that the requirements are complete and correct, although its comments stated that this assurance has occurred “to the extent that any documentation this massive can be correct.” It also stated that the requirements are fully traceable to the system design, and that the high-level requirements were validated in accordance with DOD regulations. It added that it has taken various steps to gain users’ acceptance of the system, including a change management process, briefings, and prototype demonstrations. We do not disagree that DOD has taken important steps to meet the goals of requirements completeness and correctness. Likewise, we do not disagree that since receiving our draft report for comment, the department might have completed the important requirements-to-design traceability steps that it began in response to our inquiries, which we describe in our report. However, DOD’s comments contain no evidence to show that it has addressed the limitations in the requirements’ completeness and correctness that we cite in the report, such as those relating to the interface and data requirements, and they do not address the understandability issues we found with 77 percent of the detailed requirements. Moreover, DOD stated in its comments that its latest program review revealed 606 business process comments and 17 interface comments that it deemed noncritical, although it noted that they were still being analyzed. We also do not disagree that DOD has taken steps to gain user acceptance of the system. However, these steps did not gain users’ acceptance of the detailed requirements that the system is to be designed to meet, which is the focus of our recommendation. As we point out in the report, not attempting to obtain agreement on the detailed requirements increases the risk that users will not adopt the system as developed and deployed, and that later system rework will be needed to address this. DOD partially concurred with our recommendation that it continuously analyze the degree of consistency between DIMHRS (Personnel/Pay) and the evolving DOD-wide BEA so that the risks of material inconsistencies are understood and addressed. In doing so, DOD stated that the DIMHRS (Personnel/Pay) requirements comprise the military personnel and pay portion of the architecture and that as one of the first major systems developed using all the principles of this architecture, DIMHRS (Personnel/Pay) is and will remain fully consistent with it. We do not agree with DOD’s comments that the system is consistent with the BEA. As we state in our report, DOD could not provide us with documented, verifiable analysis demonstrating this consistency and forming the basis for the DOD Comptroller’s April 2003 certification of this consistency. Rather, we were told that this certification was based on the DIMHRS (Personnel/Pay) program manager’s stated commitment to be consistent at some future point. However, as we note in our report, the real value of an architecture is that it provides the necessary context for guiding and constraining system investments in a way that promotes interoperability and minimizes overlap and duplication. Without it, expensive system rework is likely to be needed to achieve these outcomes. As we also note in our report, the absence of verifiable analysis of DIMHRS (Personnel/Pay) architectural compliance was in part due to the state of the BEA, which we have reported as not being well-defined and missing important content.
Recognizing this, as well as the pressing need for the promised DIMHRS (Personnel/Pay) capabilities, our recommendation calls for ongoing analysis of DIMHRS (Personnel/Pay) and the BEA to understand the risks of designing and developing the system outside the context of a well-defined architecture. DOD partially disagreed with our recommendation that it establish a DOD-wide governance structure in which responsibility, accountability, and authority for the entire program are vested in an executive-level organization or entity representing the interests of all program stakeholders. In doing so, the department described the roles, responsibilities, and authorities for various program stakeholders; however, it did not explain its reason for not agreeing with the recommendation, and only one of its comments bears relevance to our recommendation. Specifically, it commented that the Under Secretary of Defense (Personnel and Readiness) has full responsibility and accountability for the program. We do not agree. As we state in our report, DIMHRS (Personnel/Pay) is a DOD-wide program involving three distinct stakeholder groups whose respective chains of command do not meet at any point below the Secretary and Deputy Secretary of Defense levels. Thus, we concluded that responsibility, accountability, and authority for the program are diffused, with responsibility for developing functional requirements resting with the Joint Requirements and Integration Office, responsibility for system acquisition resting with the Joint Program Management Office, and responsibility for preparing for the transition to DIMHRS (Personnel/Pay) resting with 11 major end-user organizations. Under this structure, only the Joint Requirements and Integration Office is accountable to the Under Secretary; the other two stakeholder groups are not. This means, as we state in the report, that no single DOD entity is positioned to exercise continuous leadership and direction over the entire program. The department also partially disagreed with our recommendation to follow all relevant acquisition management best practices associated with COTS-based systems. According to DOD’s comments, all of these best practices are currently being followed, including the three that we cite in our report as not being followed: (1) ensuring that plans explicitly provide for preparing users for the impact that the business processes embedded in the commercial components will have on their respective roles and responsibilities, (2) proactively managing the introduction and adoption of changes to how users will be expected to use the system to execute their jobs, and (3) ensuring that project plans explicitly provide for the necessary time and resources for integrating commercial components with legacy systems. In this regard, the department stated that the DIMHRS (Personnel/Pay) program had documented every change in current practices and policies that will be required for the military services, as well as future practices and policies, and that these were fully vetted through the functional user community. It also described a number of activities that it has undertaken to prepare and train users in the COTS product and other aspects of DIMHRS (Personnel/Pay). We do not dispute that DOD has performed activities intended to facilitate the implementation of DIMHRS (Personnel/Pay).
However, the best practices that we identified as not being followed, which form the basis of our recommendation, are focused on effectively planning for the full complement of activities that are needed to prepare an organization for the institutional and individual changes that COTS-based system solutions introduce. Such planning is intended to ensure, among other things, that key change management activities, including the dependencies among these activities, are defined and agreed to by stakeholders, including ensuring that adequate resources and realistic time frames are established to accomplish them. In this regard, DOD agreed in its comments that it does not have an integrated master plan/schedule for the program, which is an essential tool for capturing the results of the proactive change management planning that the best practices and our recommendation advocate. Both published research and our experience in evaluating the acquisition and implementation of COTS-based system solutions show that the absence of well-planned, proactive organizational and individual change management efforts can cause these system efforts to fail. The department partially disagreed with our last recommendation to adopt a more event-driven, risk-based approach to managing DIMHRS (Personnel/Pay) that adequately considers factors other than the contract schedule, stating that it is currently using an event-driven, risk-based approach and revising the schedule when necessary. We support DOD’s comment, as it indicates that DOD has decided to begin following such an approach. However, during the course of our work this was not the case. For example, we observed at that time that the DIMHRS (Personnel/Pay) program intended to accelerate its deployment schedule to meet an externally imposed deadline; the department changed its plans only after we raised concerns, in an earlier draft of the briefing included in this report, about the associated risks of doing so and the absence of effective strategies to mitigate those risks. Also during the course of our work, we observed that program activities were truncated or performed concurrently in order to meet established deadlines. For example, as we describe in our report, data requirements (which are derived from higher-level information needs) were provided to the contractor before information needs were fully defined because the contractor needed these data requirements to complete the system design on schedule. It was this kind of focus on schedule that led to our recommendation to adopt a more event-driven, risk-based approach. However, in light of DOD’s comment that it intends to do so, we have slightly modified our recommendation to recognize this decision. All our recommendations are aimed at reducing the risk of failure on this important program, which we and DOD agree is critical to the department’s ability to effectively manage military personnel and pay. Furthermore, DOD’s comments show that it agrees with us on the importance of taking an approach to the program that is based on the kinds of management processes and structures that we recommend, and the department appears committed to following such an approach. Following our recommendations will help the department to do so and thereby avoid unnecessary risks. As we state in the report, careful consideration of the areas of concern that we raise is critical to improving the chances of timely and successful completion of the program.
We are sending copies of this report to the House and Senate Armed Services and Appropriations Committees; the House Committee on Government Reform; the Senate Committee on Governmental Affairs; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have any questions on the matters discussed in this report, please contact Randolph Hite at (202) 512-3439 or Gregory C. Wilshusen at (202) 512-3317; we can also be reached by e-mail at [email protected] or [email protected]. Other contacts and key contributors to this report are listed in appendix III. [DOD’s current personnel and pay environment is characterized by] hundreds of supporting information technology (IT) systems, many of which perform the same tasks and store duplicate data; the need for manual data reconciliation, correction, and entry across these systems; and the large number of data translations and system interfaces. GAO, Financial Management: Defense’s System for Army Military Payroll Is Unreliable, GAO/AIMD-93-32 (Washington, D.C.: Sept. 30, 1993). GAO, Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems, GAO-04-911 (Washington, D.C.: Aug. 20, 2004), and Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems, GAO-04-89 (Washington, D.C.: Nov. 13, 2003). [Goals for the new system include] providing joint-theater commanders with accurate and timely human capital information; providing active service members, reservists, and National Guard members with timely and accurate pay and benefits, especially when they are performing in theaters of operation or combat; and providing an integrated military personnel and payroll system that uses standard data definitions across all services and service components, thereby reducing multiple data entries, system maintenance, pay discrepancies, and reconciliations of personnel and pay information. Among other things, the new system is also intended to support DOD’s efforts to produce accurate and complete financial statements. DOD plans to acquire and deploy DIMHRS in three phases: DIMHRS (Personnel/Pay)—military personnel hiring, promotion, retirement, etc.; DIMHRS (Manpower)—workforce planning, analysis, utilization, etc.; and DIMHRS (Training). DOD accepted the design of the first system phase—DIMHRS (Personnel/Pay)—in November 2004 and is now proceeding with development of this system phase. Deployment to the Army and the Defense Finance and Accounting Service (DFAS) is to begin in the second quarter of fiscal year 2006, followed by deployment to the Air Force, Navy, and Marine Corps. Our objectives were to determine 1. whether DOD has effective processes in place for managing the definition of the requirements for DIMHRS (Personnel/Pay) and 2. whether DOD has established an integrated program management structure for DIMHRS (Personnel/Pay) and is following effective processes for acquiring a system based on commercial software components. To accomplish our objectives, we interviewed officials from relevant organizations, analyzed program management documentation and activities, and reviewed relevant DOD analyses. Further details on our scope and methodology are given in attachment I to this appendix. Our work was performed from January through November 2004, in accordance with generally accepted government auditing standards.
Results in Brief: Objective 1 Requirements Management DOD’s management of the DIMHRS (Personnel/Pay) requirements definition has recently improved, but key aspects of requirements definition remain a challenge. In particular, DOD has begun taking steps to ensure that the system requirements and the system design are consistent with each other. However, DOD has not ensured that the detailed requirements are complete and has not obtained user acceptance of the detailed requirements. The requirements definition challenges are attributable to a number of causes, including the program’s overly schedule-driven approach and the difficulty of overcoming DOD’s long-standing cultural resistance to departmentwide solutions. These challenges increase the risk that the delivered system’s capabilities will not fully meet DOD’s needs. DOD does not have a well-integrated management structure for DIMHRS (Personnel/Pay) and is not following all relevant supporting acquisition management processes. In particular, program responsibility, accountability, and authority are diffused; the system has not been defined and designed according to a DOD-wide integrated enterprise architecture; program stakeholders’ activities have not been managed according to a master plan/schedule that integrates all stakeholder activities; and the program is following some, but not all, best practices associated with acquiring business systems based on commercially available software. Without an integrated approach and effective processes for managing a program that is intended to be an integrated solution, DOD has increased the risk that the program will not meet cost, schedule, capability, and outcome goals. Results in Brief: Recommendations To assist DOD in effectively managing DIMHRS (Personnel/Pay), we are making six recommendations to the Secretary of Defense aimed at ensuring that DOD follows an integrated, coordinated, and risk-based program approach and thereby increases its chances of successfully delivering DIMHRS (Personnel/Pay). Roles and responsibilities:

- Joint Requirements and Integration Office (JR&IO): developing DOD integrated user requirements; providing expertise related to the personnel/pay functional area.
- Joint Program Management Office (JPMO): acquiring the system, including managing the system development contractor, accepting the design, and testing and deploying the system.
- Service and DFAS DIMHRS offices: assisting JR&IO in developing integrated requirements; assisting JPMO in addressing technical issues; taking other actions necessary for transition.
- End-user organizations (the services and DFAS): assisting JR&IO and JPMO in developing DIMHRS (Personnel/Pay); managing transition activities for their own chains of command, including modifying existing systems and interfaces.
- Under Secretary of Defense (P&R): overseeing the personnel/pay functional area; resolving issues that cannot be resolved by the Executive Steering Committee.
- Executive Steering Committee: monitoring the program, resolving issues, and advising the Under Secretary (a stakeholder executive-level committee).
- Monitoring the program and resolving issues (a group of user representatives).
- Designated Milestone Decision Authority.
- Establishing acquisition policies and procedures in accordance with DOD directives and guidelines and for chartering the Program Executive Office, Information Technology (PEO-IT).
- Providing programmatic and technical direction to JPMO.

Program funding: $244.7 million obligated during fiscal years 1998 through 2003 and $356.6 million required for fiscal years 2004 through 2009, for a total of about $601 million.
However, JR&IO and JPMO officials stated that these amounts do not include user organization costs; JPMO originally estimated these costs to be about $350 million, but it is now reevaluating these and other cost estimates as part of its efforts to update the program’s economic analysis. Additionally, the officials stated that the $601 million does not include JR&IO’s actual and estimated costs of $153 million through fiscal year 2009 for requirements definition activities, business process reengineering planning, enterprise architecture development, and other activities pertaining to management and analysis of the human resources domain. According to JR&IO officials, this $153 million consists of $72.5 million obligated during fiscal years 1998 through 2003 and $80.4 million required for fiscal years 2004 through 2009. Objective 1: Requirements Management DOD’s management of the DIMHRS (Personnel/Pay) requirements definition has recently improved, but key aspects of requirements definition remain a challenge. Requirements are the foundation for designing, developing, and testing a system. Our experience indicates that incorrect or incomplete requirements are a common cause of systems not meeting their cost, schedule, or performance goals. Disciplined processes and controls for defining and managing requirements are defined in published models and guides, such as the Capability Maturity Models developed by Carnegie Mellon University’s Software Engineering Institute (SEI), and standards developed by the Institute of Electrical and Electronics Engineers (IEEE). In managing DIMHRS (Personnel/Pay) requirements, DOD has begun taking steps to ensure that the system requirements and the system design are consistent with each other. However, DOD has not ensured that the detailed requirements are complete and has not obtained user acceptance of the detailed requirements. These challenges increase the risk that the delivered system’s capabilities will not fully meet DOD’s needs. See, for example, Karl E. Wiegers, Software Requirements (1999), p. 15, and GAO, DOD Business Systems Modernization: Billions Continue to Be Invested with Inadequate Management Oversight and Accountability, GAO-04-615 (Washington, D.C.: May 27, 2004). Objective 1: Requirements Management Traceability An accepted way of ensuring the complete and accurate incorporation of requirements is to trace between levels of requirements and design documentation. [Figure omitted: the documentation levels run from the Operational Requirements Document (ORD) through the detailed requirements to the system design.] In system development, traceability is the degree to which a relationship is established between two or more products of the system development process. Traceability allows the user to follow the life of the requirement both forward and backward through system documentation, from origin through implementation. Traceability is critical to understanding the parentage, interconnections, and dependencies among the individual requirements. This information in turn is critical to understanding the impact when a requirement is changed or deleted. Objective 1: Requirements Management Traceability The DIMHRS (Personnel/Pay) ORD defines the high-level capabilities for satisfying DOD’s mission needs. The ORD lists the functional processes, information needs, and performance parameters that the system is to support; an example of a functional process is “promote enlisted personnel.” The detailed requirements include “use cases,” which are detailed descriptions of the activities that the system and the end users must perform and the data needed to accomplish these activities.
For example, the functional process “promote enlisted personnel” includes multiple use cases, such as “record enlisted member’s eligibility for promotion.” Each use case includes (1) business rules describing the processing steps for accomplishing the use case, such as steps for determining which members meet the time-in-grade/time-in-service requirement for promotion; (2) references to the applicable statutes, policies, guidance, or regulations that govern the use case; and (3) a list of the information needed to perform the use case, such as each person’s rank, occupation code, and promotion recommendation. In addition, the use cases incorporate process improvements that are to introduce efficiencies and to standardize personnel and pay processing DOD-wide. Objective 1: Requirements Management Traceability However, when DOD accepted the first two parts of the system design, it had not traced between the detailed requirements and the design. Rather, DOD required the contractor to base the system design only on the high-level requirements defined in the ORD, and DOD provided the detailed requirements for information-only purposes. According to JPMO officials, the contract was written in this way to provide the contractor with maximum flexibility to design the system according to the capabilities of the COTS product and thereby reduce system development and maintenance costs. Nonetheless, JR&IO officials stated that the detailed requirements for DIMHRS (Personnel/Pay) should be the basis of the system design.
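To illustrate the kind of forward and backward tracing described in the preceding slides, the following minimal sketch (in Python) shows how untraced items can be surfaced mechanically at each level of documentation. All requirement, use case, and design identifiers here are hypothetical illustrations, not items from the actual DIMHRS (Personnel/Pay) documentation:

```python
# Minimal sketch of forward/backward requirements tracing across three
# documentation levels: ORD requirements -> use cases -> design elements.
# All identifiers are hypothetical; this is not the DIMHRS trace data.

# Trace links: each parent item maps to its child items one level down.
ord_to_use_cases = {
    "ORD-12 Promote enlisted personnel": [
        "UC-201 Record promotion eligibility",
        "UC-202 Update member grade",
    ],
    "ORD-13 Compute basic pay": ["UC-310 Apply pay table"],
    "ORD-14 Produce payroll data for financial statements": [],  # gap
}
use_case_to_design = {
    "UC-201 Record promotion eligibility": ["DES-55 Eligibility screen"],
    "UC-202 Update member grade": [],                        # gap
    "UC-310 Apply pay table": ["DES-71 Pay calculation module"],
    "UC-999 Validate bonus codes": ["DES-80 Bonus module"],  # no ORD parent
}

def forward_gaps(links):
    """Parent items never carried into the next level (forward tracing)."""
    return [parent for parent, children in links.items() if not children]

def backward_gaps(links, lower_level_items):
    """Lower-level items with no parent (backward tracing); such orphans
    signal content whose requirements parentage is unknown."""
    traced = {child for children in links.values() for child in children}
    return [item for item in lower_level_items if item not in traced]

print("ORD requirements with no use case:", forward_gaps(ord_to_use_cases))
print("Use cases with no design element:", forward_gaps(use_case_to_design))
print("Use cases with no ORD parent:",
      backward_gaps(ord_to_use_cases, use_case_to_design))
```

Forward gaps of this kind correspond to the sort of missing content discussed in this briefing (for example, financial requirements present in the ORD but absent from the detailed requirements), while backward gaps flag lower-level content with no requirements parentage; both are the kind of discrepancy that tracing is intended to surface.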
Objective 1: Requirements Management Traceability After we told DOD of the missing requirements, JR&IO officials undertook an analysis of the 196 JFMIP PMO human resources and payroll requirements9 and have stated that this analysis has allowed them to ensure that DIMHRS (Personnel/Pay) will meet 170 of the 196 requirements and that the remaining requirements are not applicable to military human resource and payroll systems. They also stated that all applicable requirements are now documented in the DIMHRS (Personnel/Pay) detailed requirements database. Objective 1: Requirements Management Traceability According to JPMO officials, tracing the detailed financial requirements to the design was not done sooner because the COTS software product being used is certified as JFMIP PMO compliant. However, JFMIP PMO certification extends only to the core financial module of this software (i.e., general ledger, funds control, accounts receivable, accounts payable, cost management, and reporting); it does not include the two modules used for DIMHRS (Personnel/Pay)—human resources and payroll. Objective 1: Requirements Management Traceability In addition, as we have reported, even if JFMIP PMO had certified the human resources and payroll modules of the COTS software product, certification by itself does not ensure that systems based on this software will be compliant with the goals of the Federal Financial Management Improvement Act, as JFMIP has made clear, and does not ensure that systems based on this software will provide reliable, useful, and timely data for day-to-day management.10 Other important factors affecting compliance with federal financial management system requirements and the effectiveness of an implemented COTS system include how the software package has been configured to work in the agency’s environment, whether any customization is made to the software, the success of converting data from legacy systems to new systems, and the quality of transaction data in the feeder systems. GAO, Financial Management: Improved Financial Systems Are Key to FFMIA Compliance, GAO-05-20 (Washington, D.C.: Oct. 1, 2004), and Business Modernization: NASA’s Integrated Financial Management Program Does Not Fully Address Agency’s External Reporting Issues, GAO-04-151 (Washington, D.C.: Nov. 21, 2003). Objective 1: Requirements Management Traceability To their credit, JR&IO and JPMO officials have begun tracing between the detailed requirements and the design, including the financial standards. As of late November 2004, they told us that this tracing had identified about 630 discrepancies that may require modification to the detailed requirements or the design. They stated that they plan to complete this tracing by the end of February 2005. Until DOD completes tracing both backward (from the design back to the detailed requirements and the ORD) and forward (from the ORD forward to the detailed requirements and the design), the risk is increased that the requirements and design are not complete and correct. Objective 1: Requirements Management Content of Requirements Detailed requirements are missing important content and are difficult to understand. According to SEI, requirements should be complete, correct, clear, and understandable; IEEE standards state that requirements should be communicated in a structured manner to ensure that the customers (i.e., end users) and the system’s developers reach a common understanding of them. 
For DIMHRS (Personnel/Pay), certain requirements are missing from the detailed requirements. Specifically, the interface requirements remain incomplete, and questions exist as to the completeness of the data requirements. Finally, some of the use cases that provide the detailed requirements are unclear and ambiguous, making them difficult to understand. Each of these three areas is discussed in greater detail below. If requirements are not complete and clear, their implementation in the system design may not meet users’ needs, and it will be unnecessarily difficult for DOD to test the system effectively and determine whether system requirements have been met. Objective 1: Requirements Management First, the requirements for the interfaces between DIMHRS (Personnel/Pay) and existing systems are not yet complete. According to SEI, requirements for internal and external interfaces should be sufficiently defined to permit these interfaces to be designed and interfacing systems to be modified. For example, DIMHRS (Personnel/Pay) will be required to interface with DOD’s accounting systems and other systems, such as DOD’s travel system, either by providing DIMHRS (Personnel/Pay) data for these systems or by receiving accounting data from them. DIMHRS (Personnel/Pay) interfaces must also be designed to ensure compliance with applicable JFMIP PMO financial system requirements and applicable federal accounting standards. These interface requirements must be completed before the DIMHRS (Personnel/Pay) system can be fully deployed. To complete the interface requirements, officials representing JPMO and the user organizations’ DIMHRS offices told us that they must identify which of the legacy systems will be partially replaced, and thus will require modification in order to interface with the new system. JPMO officials stated that although DOD accepted the system design in November 2004, a significant amount of work remained for the user organizations to fully address DIMHRS (Personnel/Pay) interface issues. Objective 1: Requirements Management Content of Requirements Second, the data requirements initially provided to the contractor for the system design had not been aligned with the users’ information needs that were included in the detailed requirements. According to SEI, the data required to meet users’ information needs must be defined so that the system can be properly designed and developed. However, JR&IO officials told us that they had not fully defined information needs when users were asked to identify the data requirements, along with the legacy systems that are the best sources of the required data. The contractor needed these data requirements to complete the system design on schedule. DOD recently began comparing the data requirements provided to the contractor with the users’ information needs developed by JR&IO. JR&IO officials stated that they expect to complete this work in February 2005. Until this task is completed, DOD will not know whether revisions will be needed to the system design to ensure that users’ information needs are met and that the correct data are later migrated to the new system. JPMO officials also stated that when DOD accepted the system design in November 2004, a significant amount of work remained for the user organizations to fully address DIMHRS (Personnel/Pay) data issues. Objective 1: Requirements Management Content of Requirements Third, some of the detailed requirements are unclear and ambiguous, making them difficult to understand.
According to SEI, requirements should be clear and understandable. When requirements are ambiguous, their meaning is open to varying interpretations, which increases the risk that the implementation of the requirements will not meet users’ needs. We reviewed a random sample of the documentation for 40 of 424 use cases (detailed requirements): 20 of 284 pay use cases and 20 of 140 personnel use cases. Our review showed that this documentation did not consistently provide a clear explanation of the relationships among the parts of each requirement (business rules, information requirements, and references) or adequately identify the sources of data required for computations. Based on our sample, an estimated 22 percent of use cases cite “P&R guidance” (that is, guidance from the Office of the Under Secretary for P&R) as a reference to support the need for a business rule, either alone or along with references to DOD policies.11 According to JR&IO officials, this citation indicates that the business rule includes steps not currently required by DOD’s policies, which have been added either to take advantage of “out-of-the-box” COTS capabilities or to implement a best practice. However, when P&R guidance is cited, the use cases do not explain whether an out-of-the-box capability or best practice is intended. This estimate is a weighted average of the sample results for the two categories of use cases shown in the table on slide 34. A weighted average is used because the population of pay use cases was sampled at a rate different from the population of personnel use cases (a worked form of the computation appears below). Objective 1: Requirements Management Content of Requirements In addition, when both P&R guidance and existing policies are cited, the use case does not explain which rules are based on P&R guidance and which are based on existing policies. These ambiguities make it difficult for stakeholders to understand the business rule and its rationale. (This point is further discussed in the following section on end users’ acceptance of requirements.) According to JR&IO officials, such ambiguity was resolved via communication between JR&IO and JPMO officials, followed by JPMO officials’ communicating with the contractor. Estimates of the extent of use case problems and associated confidence intervals are summarized on the next slide. Objective 1: Requirements Management Content of Requirements JR&IO officials agreed that the clarity of the use cases could be improved but stated that the use cases provide a greater level of detail than DOD normally provides for a COTS-based system. JR&IO officials added that they developed the use cases to support the design and development of the system rather than to communicate the detailed requirements to the user organizations. In this regard, officials representing the DIMHRS (Personnel/Pay) development and integration contractor stated that the use cases are providing useful information for designing the DIMHRS (Personnel/Pay) system. Objective 1: Requirements Management DOD has not obtained user acceptance of detailed requirements. According to SEI, users’ needs and expectations must be used in defining requirements. Furthermore, according to our guidance, when business process changes are planned, users’ needs and expectations must be addressed, or users may not accept the change, which can jeopardize the effort.12 One way to ensure and demonstrate user acceptance of requirements is to obtain sign-off on the requirements by authorized end-user representatives.
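The weighted-average computation referenced above takes the following form, where 284 and 140 are the stated populations of pay and personnel use cases. The per-stratum rates are not reported separately, so the symbols below stand for quantities this report does not give:

\[
\hat{p} \;=\; \frac{284\,\hat{p}_{\text{pay}} + 140\,\hat{p}_{\text{pers}}}{284 + 140} \;=\; \frac{284\,\hat{p}_{\text{pay}} + 140\,\hat{p}_{\text{pers}}}{424}
\]

If, for illustration only, the pay sample showed a 25 percent rate and the personnel sample a 15 percent rate, the weighted estimate would be (284 × 0.25 + 140 × 0.15) / 424 ≈ 0.22, or about 22 percent. Weighting by population size corrects for the personnel use cases having been sampled at roughly twice the rate of the pay use cases (20 of 140 versus 20 of 284).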
To its credit, JR&IO obtained the user organizations’ formal acceptance of the ORD. However, the process used to define the detailed requirements (specifically, the use cases) has not resulted in user acceptance. End-user representatives stated that their involvement in the definition of the use cases was limited. End users’ comments on the use cases were not fully resolved. End users are not being asked to approve the detailed requirements. Each of these three issues is discussed in greater detail below. GAO, Business Process Reengineering Assessment Guide Version 3, GAO/AIMD-10.1.15 (Washington, D.C.: May 1997). Objective 1: Requirements Management End User Acceptance First, JR&IO developed the use cases with the help of contractors and representative end users (personnel and payroll specialists) from the end-users’ stakeholder organizations. However, according to these representatives, their role in defining the use cases was limited. They stated that they principally performed research, identified references, and explained how their legacy environments currently process personnel and pay transactions and that they had limited influence in deciding the content of the use cases. In response, JR&IO officials stated that many of the user representatives were lower-level personnel who were not empowered to represent their components in making decisions about requirements. Objective 1: Requirements Management End User Acceptance Second, to obtain users’ comments on the use cases, JR&IO provided the end- users’ organizations with the use cases and other documentation for comment, but this process did not resolve all of the comments. An initial set of use cases was reviewed by hundreds of individual end users (personnel and payroll specialists), resulting in thousands of comments. Around October 2003, JR&IO provided a second set of use cases to the end users (as well as to the development and integration contractor), including modifications to reflect changes suggested by users based on their first set of comments. The end users provided about 7,000 comments on the second set of use cases. Objective 1: Requirements Management End User Acceptance In March 2004, JR&IO established a baseline version of the use cases and provided this version to the development and integration contractor in order to meet the contractor’s schedule. At that time, JR&IO had concurred with about 400 of the 7,000 comments and modified the use cases in response, but it had not completed its analysis of all 7,000 comments. JR&IO then established a change control process for making further modifications to the baseline use cases. According to JR&IO officials, as of the end of October 2004, 703 change requests had been submitted: 163 had been approved, 48 had been disapproved, and the remaining 492 were still under review. Objective 1: Requirements Management End User Acceptance According to end-user representatives from each of the services, the use cases were difficult to understand because they were shared in a piecemeal fashion and did not include sufficient detail. Furthermore, they said that JR&IO responses to comments were generally brief and often did not provide sufficient explanation—for example, “The Business Rule captures this requirement. Action: No change required…” and “The comment is out of scope. Action: No action required.” As a result, the end-user representatives stated that they often did not understand the reasoning behind the decisions. 
JR&IO officials stated that users might have had difficulty with understanding the use cases because they were defined in terms of what processes the system would perform as opposed to how the processes would be performed. This approach is consistent with best practices (as discussed later in this briefing) and with JR&IO’s stated intention to discourage the definition of processes in terms of existing systems and processes, as well as to allow the development of reengineered joint processes using the native capabilities of the COTS software to the maximum extent. However, best practices also require that explanations of business rules and their rationale be complete and understandable by end users. Objective 1: Requirements Management End User Acceptance JR&IO officials further stated that many of the end users’ comments were not substantive (e.g., a minor change needed in the citation of a regulation); many were duplicative, and some addressed issues outside of personnel/pay functionality, such as training and manpower. In addition, most were not prioritized. Objective 1: Requirements Management End User Acceptance Third, JR&IO does not intend to gain formal agreement on the detailed requirements from the end-users’ organizations, although it did obtain such agreement on the ORD. JR&IO officials stated that gaining formal agreement from some of the users’ organizations would delay the program and be impractical because of some user organizations’ allegiance to their legacy systems and processes. GAO, Defense IRM: Poor Implementation of Management Controls Has Put Migration Strategy at Risk, GAO/AIMD-98-5 (Washington, D.C.: Oct. 20, 1997). GAO, Information Technology: DOD’s Acquisition Policies and Guidance Need to Incorporate Additional Best Practices and Controls, GAO-04-722 (Washington, D.C.: July 30, 2004). Nevertheless, not attempting to obtain agreement on DIMHRS (Personnel/Pay) requirements increases the risk that users will not accept and use the developed and deployed system, and that later system rework will be required to make it function as intended and achieve stated military human capital management outcomes. Officials representing the DIMHRS offices are not in full agreement with JR&IO officials on the state of the requirements, as the following slides show. Objective 1: Requirements Management End User Acceptance Officials representing each of the DIMHRS management offices (Army, Air Force, Navy, Marine Corps, and DFAS) stated that their organizations are not currently willing to sign off on the DIMHRS (Personnel/Pay) detailed requirements as being sufficient to meet their organizations’ military personnel and pay needs. These officials stated that they do not yet know what the gaps are between the functionality provided by their current systems and the functionality to be provided by DIMHRS (Personnel/Pay). Officials representing the Army’s DIMHRS office stated that they do not yet know whether the requirements are adequate to enable the Army to replace a number of its existing systems with DIMHRS (Personnel/Pay). Officials representing the Air Force’s DIMHRS office stated that the requirements are defined at a very high level and are subject to interpretation, and as a result, they are unable to determine whether DIMHRS (Personnel/Pay) will meet all the Air Force’s requirements.
Objective 1: Requirements Management End User Acceptance Officials representing the Navy’s DIMHRS office stated that their concerns about the adequacy of the detailed requirements relate principally to the issue of not knowing which of the functions provided by the Navy’s legacy systems will not be provided by DIMHRS (Personnel/Pay). Officials representing the Marine Corps’ DIMHRS office stated that they do not believe that the requirements adequately address service specificity or the automation of manual processes. These officials stated that the Marine Corps has not been able to determine what functionality the system contains versus the functionality contained in its legacy systems. They stated that this information is needed to enable them to modify legacy systems to ensure that needed functions continue to be provided to service members until these functions are incorporated into DIMHRS (Personnel/Pay). Officials representing DFAS’s DIMHRS office stated that they do not believe the requirements adequately address a number of pay, accounting, and personnel issues. According to JR&IO officials:

- DIMHRS (Personnel/Pay) will provide all the functionality provided by the services’ and DFAS’s legacy personnel and pay systems.
- The defined requirements provide enough information to determine what the system will do, although JR&IO acknowledged that understanding exactly how the system would perform its functions was not possible until the system was fully designed.
- Owners of end users’ legacy systems are generally not supportive of DIMHRS (Personnel/Pay) because they want to preserve their autonomy in the development and control of their own systems.
- Support for the system by the services’ executives is mixed. For example, JR&IO said that (1) Army executives are committed to implementing and using the DIMHRS (Personnel/Pay) system because they believe it will address many problems that the Army currently faces, (2) Air Force officials are generally supportive of the system but say they do not yet know whether the system will meet all their needs, and (3) Navy and Marine Corps executives are not as supportive because they are not fully convinced that DIMHRS (Personnel/Pay) will be an improvement over their existing systems.

Objective 1: Requirements Management End User Acceptance A number of actions have been taken to reduce the risk that users will not accept the system, including conducting numerous focus groups, workshops, demonstrations, and presentations explaining how the DIMHRS (Personnel/Pay) system could address DOD’s existing personnel/pay problems. Although JPMO officials told us that a consensus of the users’ organizations agreed with the decision to accept the contractor’s design of DIMHRS (Personnel/Pay) in November 2004, the user organizations submitted 391 comments and issues on the design. JPMO officials stated that they expect to resolve these comments and issues by the end of February 2005. Objective 2: Program Management DOD does not have a well-integrated management structure for DIMHRS (Personnel/Pay) and is not following all relevant supporting acquisition management processes.
In particular, program responsibility, accountability, and authority are diffused; the system has not been defined and designed according to a DOD-wide integrated enterprise architecture;15 program stakeholders’ activities have not been managed according to a master plan/schedule that integrates all stakeholder activities; and the program is following some, but not all, best practices associated with acquiring business systems based on commercially available software. Without a well-integrated approach and effective processes for managing a program that is intended to be an integrated solution, DOD has increased the risk that the program will not meet cost, schedule, capability, and outcome goals. JPMO officials stated that the system has not been defined and designed according to a DOD-wide integrated enterprise architecture because the enterprise architecture is not complete. Program responsibility, accountability, and authority are diffused. Research shows that leading organizations structure their programs so that assigned authorities, responsibilities, and accountabilities are clear and aligned under the continuous leadership and direction of a single entity. For DIMHRS (Personnel/Pay), these areas are spread among three key stakeholder groups whose respective chains of command do not meet at any point below the Secretary and Deputy Secretary of Defense level. Responsibility for requirements definition rests with JR&IO, which is accountable through one chain of command. Responsibility for system acquisition rests with JPMO, which is accountable through another chain of command. Responsibility for preparing for transition to DIMHRS (Personnel/Pay) rests with the end users’ organizations—11 major DOD components reporting through five different chains of command. The organization chart on the next slide shows the chain of command and the coordination relationships among the primary DIMHRS (Personnel/Pay) stakeholder groups. [Organization chart omitted.] As the chart also shows, the services and DFAS have DIMHRS (Personnel/Pay) management offices to assist JR&IO and JPMO and to represent their respective end-user communities (the pay and personnel specialists). In addition, various coordination and advisory bodies have been established. The three primary stakeholder groups (JR&IO, JPMO, and the end users) are accountable to three different groups of executives. JR&IO is ultimately accountable to the Under Secretary for P&R, who is the department’s Principal Staff Assistant (PSA)16 for personnel and compensation and is responsible for oversight of the DIMHRS (Personnel/Pay) program from a functional perspective. The PSA is the executive-level manager responsible for the management of defined functions within DOD. According to JR&IO officials, the separation of the functional and acquisition lines of authority is a normal DOD practice. The end users are ultimately accountable to the Offices of the Secretaries of the Army, Air Force, and Navy (in coordination with the Commandant of the Marine Corps) and the DOD Comptroller, who have ultimate responsibility for implementing and using DIMHRS (Personnel/Pay). See, for example, GAO/AIMD-98-5. Although no stakeholder organization has continuous programwide oversight purview and visibility, the DIMHRS (Personnel/Pay) Executive Steering Committee is made up of representatives of each of the entities that have ultimate responsibility for the program.
According to DOD, the committee monitors the program, resolves issues that are brought before it, and advises the Under Secretary (P&R).19 It meets quarterly or when assembled by the chair—the Deputy Under Secretary of Defense for Program Integration—who reports to the Under Secretary (P&R). DIMHRS (Pers/Pay) Report to Congress (June 2002). According to JR&IO officials, the diffusion of program accountability, responsibility, and authority will be reduced in fiscal year 2005, when funding for both JR&IO and JPMO will be consolidated and centrally managed by JR&IO.20 However, the end-user organizations will continue to separately control their respective funds. For example, officials at the Army’s DIMHRS (Personnel/Pay) management office estimated that the Army’s DIMHRS (Personnel/Pay) funding needs will range from $27 million to $43 million a year, but they said that the Army is unlikely to fund the program at that level because of other priorities. Without a DOD-wide integrated governance structure that vests an executive-level organization or entity representing the interests of all program stakeholders with responsibility, accountability, and authority for a joint or integrated program like DIMHRS (Personnel/Pay), DOD runs the risk that the program will not produce an integrated set of outcomes. According to JR&IO officials, DOD requested consolidated JR&IO and JPMO funding for DIMHRS (Personnel/Pay) for fiscal year 2005. GAO, Information Technology: A Framework for Assessing and Improving Enterprise Architecture Management (Version 1.1), Executive Guide, GAO-03-584G (Washington, D.C.: April 2003). Bob Stump National Defense Authorization Act for Fiscal Year 2003, Pub. L. No. 107-314, section 1004, 116 Stat. 2458, 2629–2631 (Dec. 2, 2002). Acquiring and implementing DIMHRS (Personnel/Pay) without an enterprise architecture increases the risk that DOD will make a substantial investment in system solutions that will not be consistent with its eventual blueprint for business operational and technological change. Recognizing this, the Deputy Secretary of Defense issued a memorandum in March 2004 requiring the development and implementation of architectures for each of DOD’s six business domains, including the human resources domain. These business domains, according to the department’s modernization program, are delegated the “authority, responsibility, and accountability … for their respective business areas” for implementing business transformation,23 including the following:

- “Leading the business transformation within the Domain.”
- “Managing its respective portfolio to ensure implementation of and compliance with the Business Enterprise Architecture (BEA) and transition plan.”
- “Assisting in the extension of the BEA” for the domain.

DOD Business Management Modernization Program, Governance Approach. http://www.dod.mil/comptroller/bmmp/pages/govern_dod.html. The Under Secretary (P&R), who is the domain owner for the human resources domain, assigned JR&IO the responsibility for extending the BEA for the human resources domain. According to JR&IO officials, the development of the human resources portion of the BEA is being done concurrently with the acquisition and deployment of DIMHRS (Personnel/Pay).
Recognizing the importance of managing the concurrency of such activities and ensuring that DOD's ongoing investments are pursued within the context of its evolving BEA, the National Defense Authorization Act for Fiscal Year 2003 also required that system improvements with proposed obligations of funds greater than $1 million be reviewed to determine if they are consistent with the BEA. To satisfy this requirement, JPMO officials presented the DOD Office of the Comptroller, which is developing the BEA, with information on DIMHRS (Personnel/Pay) compliance with version 1.0 of the BEA in April 2003. However, according to our review of the information used by JPMO in April 2003 to obtain an architectural compliance determination, this information did not include a documented, verifiable analysis demonstrating such compliance. In the absence of such analysis, the JPMO program manager instead made a commitment that DIMHRS (Personnel/Pay) would be consistent with the architecture. On the basis of this commitment, the DOD Comptroller certified in April 2003 that DIMHRS (Personnel/Pay) is consistent with the BEA. Later, JPMO included in the DIMHRS (Personnel/Pay) contract a requirement that the systems specification be compatible with the emerging BEA. According to JR&IO officials, the April 2003 architectural certification is preliminary, and further certification is needed. They stated that DIMHRS (Personnel/Pay) will undergo another certification before the system deployment decision. By that time, however, lengthy and costly DIMHRS (Personnel/Pay) design and development work will have been completed. The real value in having and using an architecture is knowing, at the time that extensive system definition, design, and development are occurring, what the larger blueprint for the enterprise is, so that definition, design, and development can be guided and constrained by this frame of reference. Aligning to the architecture after the system is designed could also require expensive system rework to address any inconsistencies with the architecture. The absence of verifiable analysis of compliance was due in part to the incompleteness of BEA version 1.0. As we reported in September 2003, this version was missing key content, including sufficient depth and detail to effectively guide and constrain system investments. Since then, DOD has issued other versions of the BEA. However, we reported in May 2004 that version 2.0 of the BEA still did not include many of the key elements of a well-defined architecture. For example, the "to be" environment did not provide sufficient descriptive content related to future business operations and supporting technology to permit effective acquisition and implementation of system solutions and associated operational change. DOD has since issued versions 2.2 and 2.2.1 in July and August 2004, respectively. GAO, DOD Business Systems Modernization: Important Progress Made to Develop Business Enterprise Architecture, but Much Work Remains, GAO-03-1018 (Washington, D.C.: Sept. 19, 2003). GAO, DOD Business Systems Modernization: Limited Progress in Development of Business Enterprise Architecture and Oversight of Information Technology Investments, GAO-04-731R (Washington, D.C.: May 17, 2004). DIMHRS (Personnel/Pay) program stakeholder activities are not being managed according to an integrated master plan/schedule.
IEEE standards state that a master plan/schedule should be prepared and updated throughout the system's life cycle to establish key events, activities, and tasks across the program, including dependencies and relationships among them. A properly designed master plan/schedule should allow for the proper scheduling and sequencing of activities and tasks, allocation of resources, preparation of budgets, assignment of personnel, and criteria for measuring progress. The DIMHRS (Personnel/Pay) plan/schedule, however, does not capture stakeholder activities such as the end-user organizations' making revisions to their regulations to ensure consistency with the reengineered business rules designed into DIMHRS (Personnel/Pay) that differ from existing DOD or service rules and policies (e.g., those noted earlier in this briefing as being in accordance with "P&R guidance" in the use cases). With a plan/schedule that focuses on the contractor's and JPMO's activities and does not extend to all DOD program stakeholders' activities, the risk increases that key and dependent events, activities, and tasks will not be performed as needed, which in turn increases the risk of schedule slippage and program goal shortfalls. Relevant best practices for acquiring systems based on commercial components (see GAO-04-722) include ensuring that plans explicitly provide for preparing users for the impact that the business processes embedded in the commercial components will have on their respective roles and responsibilities, proactively managing the introduction and adoption of changes to how users will be expected to use the system to execute their jobs, and ensuring that project plans explicitly provide for the necessary time and resources for integrating commercial components with legacy systems. To its credit, DOD is following three other such practices for DIMHRS (Personnel/Pay), but it is not following the three listed above. For example, program officials told us that they expected the contractor to base the system design on the high-level requirements defined in the ORD as a way to maximize the contractor's ability to leverage the COTS product. Furthermore, the contract includes award fees that give the contractor incentives to, among other things, minimize customization of the COTS software. However, DOD does not have an integrated program plan/schedule that provides for end-user organization activities that are associated with preparing users for the operational and role-based changes that the system will introduce, such as the need to revise the duties that are now performed by pay specialists and personnel specialists. Furthermore, DOD's program plans do not recognize the end-user organizations' time and resource needs associated with integrating DIMHRS with their respective legacy systems, and JPMO is not actively managing these end-user operational changes. Although JR&IO officials told us that some planning has occurred to position end users for DIMHRS (Personnel/Pay) changes, officials representing the DIMHRS offices in the services and DFAS stated that these plans do not adequately address the above areas. By not following all relevant best practices associated with acquiring COTS-based systems, DOD is increasing the risk that DIMHRS (Personnel/Pay) will not be successfully implemented and effectively used. Accordingly, we recommended, among other things, ensuring that all relevant acquisition management best practices associated with COTS-based systems are appropriately followed and adopting a more event-driven, risk-based approach to managing DIMHRS that adequately considers factors other than the contract schedule. Our objectives were to determine 1.
whether the Department of Defense (DOD) has effective management processes in place for managing the definition of the requirements for the Defense Integrated Military Human Resources System (DIMHRS (Personnel/Pay)) and 2. whether DOD has established an integrated program management structure for DIMHRS (Personnel/Pay) and is following effective processes for acquiring a system based on commercial software components. To determine industry and government best practices and regulations for effective requirements definition and management, we evaluated criteria from the Capability Maturity Models (CMM) developed by Carnegie Mellon University's Software Engineering Institute and standards developed by the Institute of Electrical and Electronics Engineers (IEEE), as well as the DOD 5000 series and other applicable DOD policies and regulations, federal accounting standards, and prior GAO reports and best practices guidance. To address our first objective, we discussed the use cases with selected personnel and pay specialists and reviewed end users' written comments to JR&IO on the use cases; estimated the extent of several problems by evaluating the clarity and understandability of use cases in a probability sample of 20 of 284 pay use cases and 20 of 140 personnel use cases; and compared requirements management activities with relevant industry and government guidance and requirements, including CMM and IEEE, and the DOD 5000 series and Joint Chiefs of Staff regulations. A different probability sample of use cases could produce different estimates. In this briefing, we present estimates along with the 95 percent confidence intervals for these estimates. This means that there is a 95 percent probability that the actual value for the entire population is within the range defined by the confidence interval. In other words, if 100 different samples were taken, in 95 of those 100 samples, the actual value for the entire population would be within the range defined by the confidence interval, and in 5 of those 100 samples, the value would be either higher or lower than the range defined by the confidence interval. To evaluate program management structures and processes, we interviewed officials from JR&IO, JPMO, and the DIMHRS offices for each of the services and DFAS; analyzed DOD and DIMHRS (Personnel/Pay) program management and process management documentation and activities, including charters, process descriptions, budgets, and program plans; and reviewed relevant analysis supporting program decisions, such as economic justification and architectural alignment. We determined that the data used in this report are generally reliable for the purposes for which we used them. For DOD-provided data, we have made appropriate attribution to indicate the data's source. We performed our work at DOD headquarters; JR&IO, in the Washington, D.C., area; JPMO in New Orleans, Louisiana; the Army's, Navy's, Air Force's, and Marine Corps' DIMHRS offices; and DFAS's offices in the Washington, D.C., area. This work was performed from January through November 2004 in accordance with generally accepted government auditing standards. The following are GAO's comments on the Department of Defense's letter dated January 25, 2005. 1. The Department of Defense's (DOD) characterization of our objectives is not correct.
As stated in our report, our objectives were to determine whether DOD had effective processes in place for managing the definition of requirements for the Defense Integrated Military Human Resources System (DIMHRS) (Personnel/Pay) and whether it established an integrated program management structure and followed effective processes for acquiring a system based on commercial software components. Accordingly, we assessed the processes used to manage DIMHRS (Personnel/Pay), and the content of the requirements, against relevant best practices (many of which are embodied in DOD and federal policies and guidance), as well as against federal accounting standards and prior GAO reports. 2. We do not believe that our finding that DOD is appropriately limiting modification of commercial, off-the-shelf (COTS) products (a best practice) is incongruous with our recommendation that requirements be acceptable to user organizations (another best practice). Furthermore, our report does not recommend that DOD act on all comments regardless of impact. Our recommendations concerning system requirements are intended to provide DOD with the principles and rules that it should apply in executing a requirements-acceptance process that permits all stakeholder interests and positions to be heard, considered, and resolved in the context of what makes economic sense. Furthermore, our report makes complementary recommendations that discourage changes to COTS products unless fully justified on the basis of life-cycle costs, benefits, and risks. Finally, while we do not dispute that DOD has followed a process to screen out comments that would have necessitated COTS modification, DIMHRS (Personnel/Pay) users said that this process did not allow for effective resolution of the comments, which is the basis for our recommendation aimed at gaining user acceptance of requirements. 3. We have not concluded that DOD had not done enough to ensure that all stakeholders have had full input to the requirements. Our conclusion was that DOD had not obtained user acceptance of the detailed requirements, and that this choice entails risks. 4. We disagree. Our report neither states nor suggests that DOD act on all comments that it receives on requirements from all sources. Also, see comment 2. 5. See comment 10. 6. We acknowledge DOD's implementation of certain best practices as noted in our report. However, at the time we concluded our work, DOD was not following all relevant and practicable best practices, as we discuss in our report. 7. We do not dispute DOD's comment about efforts on DIMHRS (Personnel/Pay) relative to other system acquisitions because our review's objectives and approach did not extend to comparing the two. See comment 1 for a description of our objectives. Furthermore, while it is correct that DOD's regulations only require stakeholder agreement with the Operational Requirements Document, our evaluation was not limited to whether DOD was meeting its own policy; we also evaluated whether DOD's processes were consistent with industry and government best practices. 8. We do not disagree that DOD has taken important steps to meet the goals of requirements completeness and correctness, and we do not have a basis for commenting on whether the department might have completed important requirements-to-design traceability steps since we completed our work. However, as we state in our report, these tracing steps began in response to the inquiries we made during the course of our review.
Furthermore, DOD’s comments contain no evidence to show that it has addressed the limitations in the requirements’ completeness and correctness that we cite in the report, such as those relating to the interface and data requirements, and they do not address the understandability issues we found relative to an estimated 77 percent of the detailed requirements. Moreover, DOD even stated in its comments that its latest program review revealed 606 business process comments and 17 interface comments that it deemed noncritical, although it noted that they were still being analyzed. 9. We do not dispute DOD’s position that the Joint Requirements Oversight Council’s validation of the Operational Requirements Document is all that is required by DOD regulation, and we do not have a basis for commenting on whether its documentation of requirements for DIMHRS (Personnel/Pay) was innovative and unprecedented. Our review objective relating to requirements, as stated in our report, was to address whether DOD has effective processes in place for managing the definition of the requirements. To accomplish this objective, and as also stated in our report, we analyzed DOD’s requirements management efforts against recognized best practices. 10. We do not disagree that DOD has taken numerous steps to gain user acceptance of the system. However, as we point out in the report, user organizations still had questions and reservations concerning the requirements. Not adequately resolving these issues, and thereby gaining user acceptance of requirements, increases the risk that a system will be developed that does not meet users’ needs, that users will not adopt the developed and deployed system, and that later system rework will be required to rectify this situation. 11. We do not question that DOD has reviewed the detailed requirements since we completed our review. However, we challenge DOD’s comment that all questions have been resolved for two reasons. First, DOD’s comments contain no evidence to show that it has addressed the limitations in the requirements’ completeness and correctness that we cite in the report, such as those relating to the interface and data requirements, or the understandability issues we found relative to an estimated 77 percent of the detailed requirements. Second, in its comments, DOD acknowledges that 606 questions remain regarding requirements and design issues. 12. We agree that the detailed requirements are not the sole vehicle for gaining user acceptance of the system. Rather, they are one vehicle to be used in a continuous process to ensure acceptability of a system to end users. Industry and government best practices advocate users’ understanding and acceptance of requirements, and these practices are not limited to high-level requirements descriptions, but rather apply to more detailed requirements descriptions as well. 13. We do not dispute that the roles and responsibilities of the Executive Steering Committee are defined and documented. In fact, we cite the committee’s responsibilities in our report. 14. We agree that DOD has forums and processes for communicating stakeholder interests. However, we do not agree that these have provided for effective resolution of concerns and comments, as we describe in our report. Also, see comment 10. 15. We disagree. In our view, any entity, whether it is an individual, office, or committee, can have responsibility, accountability, and authority for managing a program. 
Moreover, we intentionally worded the recommendation so as not to prescribe what entity should fulfill this role for DIMHRS (Personnel/Pay). Rather, our intent was to ensure that such an entity was designated and empowered. 16. We disagree. DIMHRS (Personnel/Pay) is a DOD-wide program involving three distinct stakeholder groups whose respective chains of command do not meet at any point below the Secretary and Deputy Secretary of Defense levels. As we state in our report, responsibility, authority, and accountability for DIMHRS (Personnel/Pay) are in fact diffused among three stakeholder groups: responsibility for requirements rests with the Joint Requirements and Integration Office, responsibility for acquisition with the Joint Program Management Office, and responsibility for transition to DIMHRS (Personnel/Pay) with the 11 end users' organizations. Furthermore, under the current structure, only one of the three stakeholder groups, the Joint Requirements and Integration Office (JR&IO), is accountable to the Under Secretary, and authority over DIMHRS (Personnel/Pay) is spread across the three groups. Accordingly, the intent of our recommendation is for DOD to create an accountability structure that can set expectations for stakeholders and hold them accountable. 17. See comment 2. Furthermore, we agree that the department should not replicate "as is" processes. However, as our report points out, the users' comment process did not provide for effective resolution; therefore, users stated that they were not willing to sign off on the requirements as sufficient to meet their needs. The intent of our recommendation is to ensure that the system's functional acceptability to users is reasonably ensured before the system is developed, thereby minimizing the risk of more expensive system rework to meet users' needs. 18. We do not believe, and nowhere in our report do we state or suggest, that stakeholders should be granted veto power. Also, see comments 2 and 3. 19. We disagree. As stated in our report, DIMHRS (Personnel/Pay) had a preliminary architectural certification with the Business Enterprise Architecture (BEA) in April 2003. However, DOD could not provide us with documented, verifiable analysis demonstrating this consistency and forming the basis for the certification, in part because the BEA was incomplete. Rather, we were told that this certification was based on the DIMHRS (Personnel/Pay) program manager's stated commitment to be consistent at some future point, and the system is scheduled to undergo another certification before the system deployment decision. Moreover, we had previously reported that the BEA, including the military personnel and pay portions of the architecture, was not complete, and thus not in place to effectively guide and constrain system investments. As we state in our report, the real value in having an architecture is knowing, at the time when system definition, design, and development are occurring, what the larger blueprint for the enterprise is in order to guide and constrain these activities. 20. We disagree. See comment 21. 21. We disagree. As we state in our report, DOD is not following three relevant best practices. These practices are focused on effectively planning for the full complement of activities that are needed to prepare an organization for the institutional and individual changes that COTS-based system solutions introduce.
Such planning is intended to ensure, among other things, that key change management activities, including the dependencies among these activities, are defined and agreed to by stakeholders, and that adequate resources and realistic time frames are established to accomplish them. In this regard, DOD agreed in its comments that it does not have an integrated master plan/schedule for the program, which is an essential tool for capturing the results of the proactive change management planning that the best practices and our recommendation advocate. Moreover, available plans did not include all of the activities that end-user organizations will need to undertake regarding organizational changes and business process improvements associated with the system, such as revising the duties that are now performed by pay specialists and personnel specialists. This concern was raised by representatives of the DIMHRS (Personnel/Pay) offices in the services and DFAS, who stated that current plans do not adequately address the activities, time frames, and resources they will need to complete the transition to DIMHRS (Personnel/Pay). Furthermore, at the time that we completed our review, DOD had yet to identify all the legacy systems that would interface with DIMHRS (Personnel/Pay), and so DOD could not estimate the time and resources that will be needed to develop and implement legacy system interfaces with DIMHRS (Personnel/Pay). Both published research and our experience in evaluating the acquisition and implementation of COTS-based system solutions show that the absence of well-planned, proactive organizational and individual change management efforts can cause these system efforts to fail. 22. See comment 21. Furthermore, among the ambiguities in the detailed requirements that we cite in our report are references that do not clearly state the associated practice or policy. 23. See comments 8 and 22. In addition, and as stated in our report, the process used to define detailed requirements has yet to result in user acceptance. Specifically, according to end-user representatives from each of the services, the detailed requirements were difficult to understand because they were shared in a piecemeal fashion and did not include sufficient detail. Furthermore, these representatives stated that they were not willing to sign off on the requirements. 24. We do not dispute that DOD has provided demonstrations of and training on the COTS product. However, as we point out in our report, users still have questions on DIMHRS (Personnel/Pay), which will be based on the COTS product, including how it will be used to perform personnel and pay functions and how it will change the roles and responsibilities of end users. Moreover, proactive management of the organizational and individual change associated with COTS-based system solutions requires careful planning for the full range of activities needed to facilitate the introduction and adoption of the system, and as we state in the report and DOD agreed in its comments, the department does not have the kind of integrated master plan that would reflect such planning. 25. We do not dispute that existing plans have been reviewed and approved by the contractor and an independent reviewer. However, we disagree that these plans sufficiently incorporate all the change management activities that are needed to position DOD for adoption and use of DIMHRS (Personnel/Pay).
In the absence of an integrated master schedule, which the department acknowledges in its comments has yet to be developed, DOD cannot adequately ensure that the full range of organizational and individual change management activities will be effectively performed. 26. We support DOD's stated commitment to follow a more event-driven, risk-based approach, and we have slightly modified our recommendation to recognize this commitment. Nevertheless, it is important to note that the approach that we found the department taking during the course of our review was schedule driven, meaning that program activities were truncated or performed concurrently in order to meet established deadlines. For example, as we describe in our report, data requirements (which are derived from higher-level information needs) were provided to the contractor before information needs were fully defined because the contractor needed these data requirements to complete the system design on schedule. Also during our review, the program had developed plans for accelerating system deployment in order to meet an externally imposed deadline. After we raised concerns about the risks of accelerating the schedule and the lack of adequate risk-mitigation strategies, DOD changed its plans. 27. At the time of our review, we observed that the contract was a driver for the schedule. For example, and as our report states, in March 2004, JR&IO provided a version of the detailed requirements to the development and integration contractor in order to meet the contractor's schedule, even though it had received thousands of comments on the requirements from users that it had yet to examine and resolve. 28. We support DOD's comment that it will revise the schedule if events do not occur as anticipated because it is consistent with our recommendation. In addition to the person named above, the following persons made key contributions to this report: Nabajyoti Barkakati, Harold J. Brumm, Barbara S. Collier, Nicole L. Collier, George L. Jones, John C. Martin, Kenneth E. Patton, B. Scott Pettis, Mark F. Ramage, Karl W. D. Seifert, Robert W. Wagner, Joseph J. Watkins, and Daniel K. Wexler. The Department of Defense (DOD) has long-standing problems with its information technology (IT) systems supporting military personnel and pay. To address these problems, DOD initiated the Defense Integrated Military Human Resources System (DIMHRS) program, which is to provide a joint, integrated, standardized military personnel and pay system across all military components.
In November 2004, DOD accepted the design for the first of three phases, DIMHRS (Personnel/Pay). GAO reviewed DOD's management of the requirements definition for the system as well as the program's management structure. DOD faces significant management challenges with DIMHRS, a major system acquisition program that is expected to lead to major changes in the processing of military personnel and pay. To its credit, DOD has begun taking steps to ensure that the requirements and the design for the first phase of the program are consistent with each other by tracing backward and forward between the detailed requirements and the system design, and it did obtain formal user acceptance of the DIMHRS (Personnel/Pay) high-level requirements. However, it has not obtained user acceptance of the detailed requirements. Furthermore, it has not ensured that the detailed requirements are complete and understandable. For example, requirements for the interfaces between DIMHRS (Personnel/Pay) and existing systems have not yet been fully defined because DOD has not yet determined how many legacy systems will be partially replaced and thus require modification. Furthermore, DOD is still determining whether the data requirements provided to the contractor for system design are complete. Finally, an estimated 77 percent of the detailed requirements are difficult to understand, based on GAO's review of a random sample of the requirements documentation. These challenges increase the risk that the delivered system capabilities will not fully meet the users' needs. Moreover, although DIMHRS (Personnel/Pay) is to be an integrated system, its development is not being governed by integrated tools and approaches, such as an integrated program management structure, enterprise architecture, and master schedule. Furthermore, while DOD is appropriately attempting to maximize the use of commercial, off-the-shelf (COTS) products in building the new system, it has not adequately followed some important best practices associated with COTS-based system acquisitions. For example, DOD's program plan/schedule does not adequately recognize the needs of end-user organizations for the time and resources to integrate DIMHRS (Personnel/Pay) with their respective legacy systems and to prepare their workforces for the organizational changes that the system will introduce. DOD's requirements definition challenges and shortcomings in program governance can be attributed to a number of causes, including the program's overly schedule-driven approach and DOD's difficulty in overcoming its long-standing cultural resistance to departmentwide solutions. Unless these challenges are addressed, the risk is increased that the system will not provide expected capabilities and benefits on time and within budget. Given the limitations in some DOD components' ability to accurately pay military personnel, it is vital that these risks be addressed swiftly and effectively.
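The 77 percent figure above is an estimate from a probability sample of the requirements documentation, reported with a 95 percent confidence interval. The following is a minimal sketch, in Python, of how such a proportion and interval can be computed. The counts are hypothetical, not GAO's actual tallies; the actual analysis sampled two strata (20 of 284 pay use cases and 20 of 140 personnel use cases) and may have used a different estimator.

import math

def proportion_ci(successes, sample_size, population_size, z=1.96):
    """Point estimate and approximate 95 percent confidence interval for a
    population proportion, with a finite population correction because the
    sample is drawn without replacement from a small population."""
    p = successes / sample_size
    fpc = (population_size - sample_size) / (population_size - 1)
    standard_error = math.sqrt(p * (1 - p) / sample_size * fpc)
    return p, max(0.0, p - z * standard_error), min(1.0, p + z * standard_error)

# Hypothetical: suppose 15 of the 20 sampled pay use cases were judged
# difficult to understand.
estimate, low, high = proportion_ci(successes=15, sample_size=20, population_size=284)
print(f"estimated proportion {estimate:.0%}, 95% CI roughly {low:.0%} to {high:.0%}")

With samples this small, the interval is wide, which is exactly why a briefing of this kind reports confidence intervals alongside the point estimates.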
Most VA major construction projects are for VHA medical facilities. To determine potential new major construction projects, VHA officials identify gaps in health service during their strategic planning process, and VHA officials in field offices develop capital needs plans to fill these service gaps. These capital plans are then reviewed by a Capital Investment Panel that gives each proposed project a score based on a number of factors, including the plan's effect on health care, safety, and energy use. The Capital Investment Panel then produces a priority list of projects, and the Secretary of VA determines how many projects to request for funding each year and works with the Office of Management and Budget (OMB) to produce VA's part of the President's budget. Some large projects, such as the construction of a new medical center, can be divided into distinct phases and funded over several years. When the President submits VA's budget to Congress, the budget includes a prospectus for each proposed major construction project. This prospectus includes, among other things, a cost estimate for the project that VA staff has assembled. In addition, some prospectuses include an estimated month and year when the project will be completed, although this is not required by law. This prospectus is the initial estimate that VA sends to Congress. Congress uses this information to authorize and appropriate funds for the project. In 1999, we reported that with better management of its large, aged capital assets, VA could significantly reduce the funding used to operate and maintain underused, unneeded, or inefficient properties. We further noted that the savings could be used to enhance health care services for veterans. Thus, we recommended that VA develop market-based plans for realigning its capital assets. In response, VA initiated a process known as the Capital Asset Realignment for Enhanced Services (CARES)—a comprehensive, long-range assessment of its health care system's capital asset requirements. As a result of CARES, VA requested funding for about 30 new major construction projects in fiscal years 2004 and 2005. While 8 of these projects have been completed, many are among the 32 ongoing projects. This effort required VA to prepare initial estimates for each project over the course of a few months. In the 2 years prior to CARES, VA proposed fewer than five major construction projects each fiscal year. According to VA, the CARES process was a onetime major initiative. However, its lasting result was to provide a set of tools and processes that allow VA to continually determine the future resources needed to provide health care to our nation's veterans. VA's Office of Construction and Facilities Management (CFM) is responsible for administering major construction projects. Once a project has been authorized by law and Congress has appropriated funds for it, CFM staff contracts with an architect/engineering (A/E) firm to design the project. The A/E firm develops an architectural design for the project and also produces a cost estimate for the entire project. This cost estimate is generally more detailed and accurate than the initial cost estimate. After the project has been designed, CFM then solicits bids for project construction and awards a construction contract. The construction contractor is responsible for developing a detailed construction schedule.
CFM reviews the construction schedule and also assigns CFM engineers to work on-site as project managers to monitor the construction process until the facility is ready to be turned over to local VA staff. Once construction begins, the construction company is generally responsible for cost increases and schedule overruns under the terms of the fixed-price contract, unless VA and the contractor agree to a change order to the construction contract to modify scope, account for unforeseen conditions, or remedy a design error. We have reported that cost estimates that are completed when a project is in the conceptual stage have a high degree of uncertainty. As a project progresses, this degree of uncertainty decreases because risks are mitigated or realized. However, we have also found that cost estimates tend to be lower than the final project costs because program managers and decision-makers do not always consider all of the potential risks to a project and tend to be optimistic when planning a project. Cost estimating requires both science and judgment. Since answers are seldom—if ever—precise, the goal is to find a reasonable “answer.” Cost estimates are based on many assumptions, including the rate of inflation and when construction will begin. Generally, the more information that is known about a project and is used in the development of the estimate, the more accurate the estimate is expected to be. OMB’s guidance for preparing budget documents identifies many types and methods of estimating project costs. The expected accuracy of the resulting project cost estimates varies, depending on the estimating method used. While about half of VHA’s ongoing major construction projects are within budget, 18 projects have experienced cost increases and 11 have experienced schedule delays. The cost for one project has decreased since the original estimate for it was submitted to Congress. Eighteen of the 32 ongoing VHA major construction projects have experienced cost increases. When a project’s cost increases, VA can receive a new authorization and an additional appropriation from Congress. Without additional funds from Congress, VA must alter the scope of the project to ensure that the project does not exceed the amount Congress has appropriated for the project by more than 10 percent. The cost increases that these 18 projects have experienced since the estimates were initially submitted to Congress range from 2 to 285 percent. In addition to those 18 projects, the costs of 13 projects have not changed, and 1 project has experienced a cost decrease. Figure 1 shows the range of cost changes in ongoing VHA major construction projects. Five projects have experienced a cost increase of more than 100 percent. These projects include new construction and seismic corrections (which are improvements to a structure to make it less susceptible to earthquakes). For example, in its fiscal year 2006 budget submission, VA submitted a $286 million estimate to Congress for a new medical center in Las Vegas, Nevada. However, VA estimated in 2007 that the project would cost just over $600 million (an increase of 110 percent) and in 2008 the project’s authorization was modified and the project received an additional appropriation from Congress. However, VA now estimates that the project will cost about $100 million less than it anticipated. More information about the new medical facility in Las Vegas is in appendix V. 
Seven projects experienced a cost increase between 51 and 100 percent, and six projects experienced a cost increase between 0 and 50 percent. These projects vary in size and type, from a modernization of patient wards in Georgia that is estimated to cost about $24.5 million to a new medical center in Louisiana that is estimated to cost $925 million. All projects that experienced a cost increase are listed in table 1. As of August 2009, the costs of 13 projects have not changed from their initial estimated cost. We found that VA reduced the scope of some projects so that the projects would not exceed their budgets. For example, one project we visited in Cleveland, Ohio, is designed to consolidate two medical centers and construct a new facility at one of the medical centers. According to VA officials in Cleveland, VA reduced the original scope of the project by excluding room for 30 new patient beds in the new facility so that the project could stay within its budget. However, VA will make space for the 30 beds by expanding part of its existing facility using separate facility funds. VA staff made other changes to the original plan for the new facility, such as deleting balconies from patients' rooms and using more concrete and less steel in the structure, so that the facility could be completed within budget. More information about the medical center consolidation in Cleveland is in appendix III. In addition to those projects that did not experience a cost increase, one project experienced a cost decrease. Specifically, the cost to construct a data center in West Virginia decreased from $35 million to $33.7 million, or about 4 percent. Eleven of the 32 ongoing projects are projected to be completed later than originally estimated. Even if the cost of a project has not increased, a schedule delay can lead to an increased cost to VA because CFM project managers must stay on to monitor the project as it is being built. A schedule delay can also affect veterans' access to medical care, since VA constructs facilities where they are needed to serve the local veteran population and a schedule delay results in veterans waiting longer for the services to be available. Of the 11 projects that have experienced a schedule delay, 2 are scheduled to be completed within 2 months of their originally scheduled end date, 5 are scheduled to be completed between 12 and 24 months after their originally scheduled end date, and 4 are scheduled to be completed more than 24 months after their originally scheduled end date. These projects range from an electrical upgrade in Florida that is estimated to end less than a month after its initial estimated completion date to seismic corrections at a facility in Puerto Rico that are estimated to end about 7 years after their initial estimated completion date. The original estimated completion dates, the latest estimated completion dates, and the change in dates for those projects are in table 2. Information on the number of projects that experienced both a schedule delay and a cost increase is in appendix VI. Cost increases and schedule delays have been caused by factors that have generally occurred before construction of the project begins. These factors include initial estimates that were not thorough because they were completed quickly, scope changes that occurred after the initial estimate, and unforeseen events and market conditions such as a rise in construction costs.
The CARES process required VA to quickly provide initial cost estimates for about 30 major construction projects. Specifically, in 2004 VA had about 3 months to provide initial cost estimates to Congress so that Congress could consider authorizing these projects and appropriating funds for them in fiscal years 2004 and 2005. According to VA, a number of VA staff worked to produce these initial estimates, including staff that had limited cost estimating expertise. The 30 projects included three new large medical centers in Las Vegas, Nevada; Denver, Colorado; and Orlando, Florida. Estimates for these 30 projects were prepared quickly and sometimes based on rudimentary designs. For example, VHA officials in Syracuse told us that they had about 6 weeks to prepare their initial estimate for a new spinal cord injury center, which they did by using analogous estimating techniques such as the cost per square foot of new construction in Syracuse. As a result, the initial estimate was only a rough order-of-magnitude estimate. We have reported that, while it is possible to develop a rough order-of-magnitude estimate in days, a first-time budget-quality estimate would likely require many months. VA officials in Syracuse who worked to prepare this estimate told us that they were surprised when the project was included in VA's fiscal year 2005 budget request because they knew that the estimate was only a rough order-of-magnitude estimate. In two of our case studies, the scope of the project changed substantially after VA submitted its estimate to Congress. VA officials also told us that scope changes have occurred in other projects. In Las Vegas, the initial estimate to Congress was based on plans for a large VA clinic. However, VA later determined that a much larger medical center was needed in Las Vegas after it became clear that an inpatient medical facility it shares with the Department of Defense would not be adequate to serve the medical needs of local veterans. This decision greatly increased the cost, delayed the completion date of the project, and required a modified authorization and an additional appropriation from Congress. Since the estimate for the Las Vegas medical center was based on a preliminary design for an expanded clinic, additional functions had to be added to the clinic design to provide the services necessary for the medical center. This expansion of the scope of the project resulted in both a cost increase and a schedule delay for the project. In Syracuse, New York, the original design of a new Spinal Cord Injury/Disease (SCI/D) center that is being built on the campus of the VA medical center did not include money for additional parking. However, after the project had been authorized by Congress and was in design, VA officials in Syracuse commissioned a study to examine future parking needs at the medical center. The study concluded that, based on the new SCI/D center and projected demand from patients and staff, there should be an additional 429 to 528 parking spaces at the medical center. As a result of this study, VA officials in Syracuse decided to add two floors to the existing parking garage at an estimated cost of $10 million. Based on the parking garage addition and other changes to the project, VA received a modified authorization in 2006 and an appropriation of $23.8 million in fiscal year 2008 for the SCI/D center. More information about the new SCI/D center in Syracuse is in appendix IV.
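The analogous estimating technique that the Syracuse officials described can be sketched in a few lines of Python. The square footage, comparable cost per square foot, and adjustment factors below are hypothetical, not the figures used for the SCI/D center; the point is only to show why such an estimate is a rough order of magnitude rather than a budget-quality figure.

def rom_estimate(area_sqft, comparable_cost_per_sqft, location_factor=1.0, contingency=0.20):
    """Rough order-of-magnitude estimate by analogy: scale the cost per
    square foot of comparable recent construction by size and location,
    then add a broad contingency reflecting the estimate's low maturity."""
    point_estimate = area_sqft * comparable_cost_per_sqft * location_factor
    return point_estimate * (1 + contingency)

# Hypothetical: a 120,000 sq ft facility priced against $450/sq ft comparables.
estimate = rom_estimate(120_000, 450, location_factor=1.05)
print(f"rough order-of-magnitude estimate: ${estimate / 1e6:.1f}M")

Because everything about the comparable project, including its scope and the market conditions under which it was built, is carried over wholesale, the result is only as good as the analogy, and it says nothing about project-specific additions such as the parking garage that later changed the Syracuse estimate.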
Failure to involve stakeholders early in the process can also lead to changes in scope. In Syracuse, the Paralyzed Veterans of America (PVA) objected to some aspects of the design of the SCI/D center. For example, PVA advocated for a dedicated entrance from the parking garage to the SCI/D center, which is being built on the fourth floor of the medical center. This dedicated entrance would allow veterans with spinal cord injuries to enter the center directly from the parking garage, without requiring the veterans to go down to the street from the parking garage, outside to the main entrance of the medical center, and then up to the fourth floor of the medical center for treatment. According to VA staff in Syracuse, VA agreed to make changes that would improve access to the facility, and this increased the cost of the project. Changes in construction market conditions can escalate the costs of VA construction projects. The cost of many materials used in construction—from concrete to electrical equipment—increased more than the consumer price index (indicating that construction costs increased more than other costs) from 2003 through 2007. Specifically, the cost of these construction materials increased over 28 percent between 2003 and 2007, whereas the consumer price index increased about 13 percent over the same period. Hurricane Katrina drove up the cost of construction materials nationwide because the high demand for construction in the New Orleans region strained supplies of material and labor. In Las Vegas, several large billion-dollar projects created competition for construction services, and this area experienced an even greater cost increase as the demand for new construction exceeded the supply of materials and labor. The schedule for one of our case studies was delayed by land acquisition issues. In Cleveland, while the project remains within budget, the project schedule was delayed 9 months because a property acquisition took longer than expected. Part of the land that the bed tower is being built on had been donated to the City of Cleveland for use as parkland. The city could not give the land to VA until the city was able to change the designated use of the donated land from parkland to a more general use. More information about the construction project in Cleveland is in appendix III. VA has developed a new process for determining its initial estimates that allows for more time between VA approving a project and submitting a cost and schedule estimate to Congress. However, VA does not analyze cost risks to examine the effect of changing assumptions on the cost estimate. VA also does not have an integrated master schedule, which includes both VA and contractor effort for all phases of the entire project, and does not conduct a schedule risk analysis to help determine when projects will be completed. While VA is not required to develop an integrated master schedule and cost and schedule risk analyses, we have identified these steps as best practices in project scheduling and cost estimating. VA has developed a new process to improve its initial estimates for major construction projects. This new process allows VA to increase the time between VA approving a project and submitting that project, and its initial estimate, to Congress. According to VA officials, with this additional time, VHA will be able to gather more information about a project and begin preliminary design work.
These officials noted that VA will ideally have as much as 35 percent of the design work completed before the project's first estimate is submitted to Congress. Cost estimators can then use these designs to develop the initial cost estimate that VA sends to Congress. According to VA officials, the initial estimate should be more precise than estimates provided to Congress in the past because the scope of the project will be more developed. Until the fiscal year 2010 budget cycle, field staff in VHA produced the first estimate for a project. Beginning with the fiscal year 2010 budget cycle, for any project in the top 10 of the priority list, CFM will work with VHA staff in the field to produce the first estimate of the project's cost. CFM staff includes professionals with estimating and construction engineering skills, whereas VHA staff in the field generally does not possess these skills. These new requirements were not in effect when the projects we studied were developed. Therefore, we were not able to evaluate the process. While it is unclear how much design work will actually occur before VA submits a project and its estimate to Congress, the new process holds promise to improve VA's initial estimates, particularly if the new process requires early stakeholder input on a proposed project so that any resulting changes in the project scope can be incorporated into the estimate before it is submitted to Congress. After a project has been authorized and funded based on VA's initial estimate, VA hires an architect/engineering firm to design the major construction project. The firm hires a contractor to develop a cost estimate for the project. We visited three major construction sites—Cleveland, Ohio; Las Vegas, Nevada; and Syracuse, New York. At these sites, we found that these cost estimates were generally comprehensive and well documented. Specifically, each estimate included an estimating plan, structure, purpose, and documentation. However, we also found that the cost estimates for projects in Cleveland and Las Vegas were not adequately maintained during construction because they did not include updated information based on actual costs as the project progressed. We also found that the estimates for projects in Syracuse and Las Vegas did not include a cost risk analysis to examine the effect of changing assumptions on the cost estimate. Conducting a cost risk analysis is particularly important because only by quantifying cost risk can management make informed decisions about risk mitigation strategies. Quantifying cost risk also provides a benchmark for measuring future progress. We identified best practices for estimating and managing program costs in a cost assessment guide we issued in 2009. As we note in our cost assessment guide, agencies should begin to follow these best practices at the earliest stages of the cost estimation process, which includes the preparation of the initial estimate submitted to Congress. Our cost estimating guide has been endorsed by OMB. More information on the cost estimates for these three sites is in appendices III through V. After the design is complete, VA hires a contractor to construct the project by the completion date set in the contract. The contractor then develops a construction schedule that details all of the activities that the contractor plans to finish by the completion date. Generally, the contractor must finish by the completion date or face financial penalties.
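Returning to the cost risk analysis discussed above, the following is a minimal sketch of how such an analysis is commonly performed: each major cost element is assigned a probability distribution rather than a single point value, and a Monte Carlo simulation yields a distribution of total cost. The cost elements, ranges, and trial count are hypothetical; this illustrates the general technique, not VA's or its contractors' actual estimating method.

import random

# Hypothetical cost elements as (low, most_likely, high) in $ millions,
# each modeled with a triangular distribution.
COST_ELEMENTS = {
    "site work": (20, 25, 40),
    "structure": (150, 170, 230),
    "mechanical/electrical": (60, 70, 100),
    "finishes": (30, 35, 55),
}

def one_trial():
    # random.triangular takes its arguments as (low, high, mode).
    return sum(random.triangular(low, high, mode)
               for low, mode, high in COST_ELEMENTS.values())

trials = sorted(one_trial() for _ in range(10_000))
point_estimate = sum(mode for _, mode, _ in COST_ELEMENTS.values())

print(f"point estimate (sum of most-likely values): ${point_estimate}M")
print(f"50th percentile of simulated total cost: ${trials[len(trials) // 2]:.0f}M")
print(f"80th percentile of simulated total cost: ${trials[int(len(trials) * 0.8)]:.0f}M")

Because the high ends of these hypothetical ranges sit farther from the most-likely values than the low ends do, the simulated totals run above the point estimate; quantifying that asymmetry is what lets managers size contingency and decide which risks are worth mitigating.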
At the sites we visited, we found that the contractors' schedule estimates, which are developed after VA has submitted its initial estimate to Congress, generally followed best practices for scheduling. For example, we found that the contractor regularly updated the construction schedule with actual dates as the work progressed. All best practices for schedules, and the extent to which they were met at our site visits, are in table 3. More detailed information is included in appendices III through V. Although VA met or partially met nearly all scheduling best practices at the three sites, VA does not conduct a schedule risk analysis of its major construction projects, and therefore cannot predict a project's completion date with confidence. A schedule risk analysis, which is one of our best practices in project scheduling, uses statistical techniques to predict a level of confidence in meeting a project's completion date. The objective of the analysis is to develop a probability distribution of possible completion dates that reflect the project and its quantified risks. This analysis can help project managers both understand the most important risks to the project and focus on mitigating these risks. We conducted a schedule risk analysis of the construction schedule for the new medical center in Las Vegas, Nevada, that is scheduled to be completed on August 22, 2011. We conducted on-site interviews with staff who are working on the project in Las Vegas and asked them to discuss potential risks to the project, including how each risk would affect the project's timeline and the likelihood of each risk occurring. Using this information, we developed a list of risks to the project (such as the chance that the design is inadequate or that labor is not available) and how each risk would affect the duration of specific activities in the schedule. We then used modeling software to run a Monte Carlo simulation, which consisted of the computer-generated results of 3,000 estimates of the future schedule based on the activities in the schedule, the chance that some activities would be affected by some risks, and the predicted effect of those risks on the duration of each activity. This analysis showed that there is a 50 percent probability that the project will be completed by March 1, 2012 (about 6 months after the current estimated completion date), and an 80 percent probability that the project will be completed by May 17, 2012 (about 9 months after the current estimated completion date). Although we did not conduct a schedule risk analysis for other VA major construction projects, the result of our analysis for the Las Vegas Medical Center project shows the types of risks that major construction projects face and the impact those risks can have on meeting project milestones. More information on our schedule risk analysis can be found in appendix V. We shared the results of our schedule risk analysis with CFM staff in Las Vegas. Specifically, we noted that we found the two biggest risks to the project are that the design may be inadequate and that the occupancy needs may change. CFM staff in Las Vegas told us that they are working to mitigate the risk of inadequate design and have discovered architectural drawings that do not include utilities. As a result, CFM has directed the architect/engineer firm to revise the drawings to include utilities. CFM staff also stated that they can deny any changes to the project scope and that they can choose not to allow changes that will affect the scheduled completion date.
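The mechanics of the schedule risk analysis just described can be sketched compactly. In this simplified model, each risk has a probability of occurring and a multiplier it applies to the duration of the activity it affects, and repeated trials yield completion estimates at chosen confidence levels. The activities, risks, probabilities, and multipliers are hypothetical placeholders, not the inputs GAO used for the Las Vegas analysis, and the activities are treated as strictly sequential for simplicity.

import random

ACTIVITIES = [  # (name, planned duration in working days)
    ("foundation", 120),
    ("structure", 300),
    ("fit-out", 200),
]
RISKS = [  # (probability of occurring, duration multiplier if it occurs, affected activity)
    (0.30, 1.25, "structure"),  # e.g., inadequate design forces rework
    (0.20, 1.15, "fit-out"),    # e.g., occupancy needs change late
]

def one_trial():
    total_days = 0.0
    for name, duration in ACTIVITIES:
        multiplier = 1.0
        for probability, impact, target in RISKS:
            if target == name and random.random() < probability:
                multiplier *= impact
        total_days += duration * multiplier
    return total_days

results = sorted(one_trial() for _ in range(3_000))  # 3,000 trials, as in GAO's analysis
print(f"planned duration: {sum(d for _, d in ACTIVITIES)} days")
print(f"50 percent confidence: {results[len(results) // 2]:.0f} days")
print(f"80 percent confidence: {results[int(len(results) * 0.8)]:.0f} days")

A full analysis runs the simulation over the entire network of schedule activities so that risks propagate along the critical path, but the output is the same in kind: a distribution of completion dates rather than a single date, from which confidence figures like the 50 and 80 percent dates reported above are read off.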
VA does not require an integrated master schedule for major construction projects that encompasses both VA and contractor effort for all phases of the entire project and shows the relationships between various project phases (such as design, construction, and when the project is "activated" for occupancy and use). However, we have stated that the success of any project depends, in part, on having an integrated and reliable schedule. Without a fully integrated and reliably derived schedule, it is difficult to estimate the overall cost and schedule of a project. In addition, individual phases of a multiphase project can be completed on time, but the project as a whole can be delayed, and construction phases that are not part of an integrated master schedule may not be completed in the most efficient manner. For example, a VA nursing home in Las Vegas was completed in 2009 but cannot be put into service until another phase of the construction project—the on-site medical center—is completed and can provide medical care to residents of the nursing home. The medical center is scheduled to be completed in 2011. According to VA officials, VA decided to construct the new nursing home because construction costs in Las Vegas were escalating quickly, and VA officials thought that they could save money by constructing the nursing home as soon as possible. However, construction costs have recently decreased in the Las Vegas area, and VA must pay to maintain the new nursing home from 2009 to 2011 even though the nursing home will not be used for VA patients. Estimates for major construction projects, like any estimate of a future activity, can never be exact. Some of VA's past estimates have been off base, although the reasons for this are sometimes outside of VA's control. These imprecise estimates resulted in Congress authorizing and appropriating millions of dollars for projects based on estimates that proved to be inaccurate. In some of these cases, VA was forced to change the scope of the project in order to stay within the original estimate; in other cases, the projects' authorizations were modified and Congress has had to appropriate more funds to allow VA to finish the projects. VA is taking steps to make its initial estimates more accurate in the future. VA is working to complete some preliminary design work on projects and improve initial estimates so that they are more likely to be closer to the actual costs and schedules of a project, but the effect of these changes on VA's initial estimates remains to be seen. While VA is taking steps to improve its initial estimates, it does not always conduct a cost risk analysis, which would allow project managers to better identify issues that could lead to cost escalation and improve managers' ability to make informed decisions on how to minimize cost risks. VA has also not used a schedule risk analysis to determine the likelihood of a major project being completed on time. We recognize that conducting a cost risk and schedule risk analysis takes both financial resources and some time and that it may only be appropriate to conduct these analyses when a project is particularly costly or complex or has a compressed schedule. However, the overall effect of the analyses is to provide VA, congressional decisionmakers, and other stakeholders with more precise information about when a project will be completed and the main risks to a project being completed on time.
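A brief sketch of the integrated master schedule point made above: when phases are linked by explicit dependencies, the schedule itself exposes that a finished phase cannot be used until everything it depends on also finishes. The dates are simplified from the nursing home example; the structure is illustrative, not an actual VA schedule.

# Phase -> (construction finish year, phases that must also finish before
# this milestone can occur). None means no construction finish of its own.
PHASES = {
    "nursing home construction": (2009, []),
    "medical center construction": (2011, []),
    "nursing home in service": (None, ["nursing home construction",
                                       "medical center construction"]),
}

def earliest_year(phase):
    finish_year, dependencies = PHASES[phase]
    dependency_finish = max((earliest_year(d) for d in dependencies), default=0)
    return max(finish_year or 0, dependency_finish)

idle_years = earliest_year("nursing home in service") - PHASES["nursing home construction"][0]
print(f"nursing home sits idle for about {idle_years} years awaiting the medical center")

Linking phases this way makes visible, before a construction sequence is locked in, both when the project as a whole is likely to be usable and which cross-phase dependencies put that date at risk.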
With this information, VA could provide more accurate schedule estimates to stakeholders and could also work to mitigate risks to the project and help ensure that the project is completed on time. We have identified cost risk and schedule risk analysis as best practices in our cost assessment guide, which has been endorsed by OMB. While the construction schedules we reviewed generally met best practices, VA's lack of an integrated master schedule—which would integrate VA and contractor effort for all phases of a project, including all design and construction work—hampers VA's ability to provide accurate information on the schedule for a project. Many factors that can delay a project, such as changes in scope and unforeseen site conditions, occur before construction begins. The use of an integrated master schedule could assist VA in monitoring the progress of a major construction project before construction begins and allow VA to increase the accuracy of its schedule estimates. To improve estimates of the cost of a major construction project as well as the risks that may influence the cost and how these risks can be mitigated, we recommend that the Secretary of Veterans Affairs direct CFM to conduct a cost risk analysis of major construction projects. To provide a realistic estimate of when a construction project may be completed as well as the risks to the project that could be mitigated, we recommend that the Secretary of Veterans Affairs direct CFM to take the following two actions. First, require the use of an integrated master schedule for all major construction projects. This schedule should integrate all phases of project design and construction. Second, conduct a schedule risk analysis, when appropriate, based on the project's cost, schedule, complexity, or other factors. Such a risk analysis should include a determination of the largest risks to the project, a plan for mitigating those risks, and an estimate of when the project will be finished if the risks are not mitigated. We provided a draft of this report to VA for review and comment. VA generally agreed with our conclusions and concurred with our recommendations. In reference to our statement that some cost increases and schedule delays were attributable to scope changes, VA stated that it is important to note that VA followed all applicable laws and congressional notification requirements during the execution of the projects and maintained the integrity and intent of each project as authorized by Congress. While we did not find any instances where VA did not follow applicable laws or congressional notification requirements, we did not specifically evaluate VA's compliance with such laws and requirements because this was outside the scope of our review. VA's letter is contained in appendix II. In addition, VA made a number of technical corrections, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Veterans Affairs. Additional copies will be sent to interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
In this report, we examined: (1) how costs and schedules of current Veterans Affairs (VA) major medical construction projects have changed since they were first submitted to Congress, (2) the reasons for cost and schedule changes in VA's major medical construction projects, and (3) the actions VA has taken to address cost increases and schedule delays as well as the challenges VA faces in managing its major medical construction program. To address these issues, we reviewed pertinent laws relating to the construction, authorization, and appropriation of VA projects. We also examined the documents VA submitted to Congress, including the Office of Management and Budget's form 300, which has been required with VA's budget since 2006, and project prospectuses. We obtained and analyzed data that VA provided on the status of VA's active major medical construction projects as of August 2009. We also reviewed VA's management of construction projects at three locations and interviewed VA headquarters officials from the Veterans Health Administration (VHA) and the Office of Construction and Facilities Management (CFM) as well as project managers at the construction sites we visited. To determine how costs and schedules of current VA major medical construction projects have changed since they were first submitted to Congress, we reviewed VA data on current major medical construction projects, including the original cost estimates and completion dates submitted to Congress and the projects' current status as of August 2009. We analyzed the current costs and completion dates against the information provided to Congress to determine the increase in costs and the extent to which projects exceeded or were expected to exceed the original time allotted, and we summarized the results. VA officials confirmed the reliability of the data provided for these projects. To identify the reasons for cost and schedule changes in VA's construction projects, we interviewed VA headquarters officials regarding the status of all projects, and we examined project documents and interviewed on-site managers and engineers at three projects we selected. We selected projects based on VA-provided data on all of VA's ongoing major medical construction projects as of March 2009. The data included a short project description, the project location, the original and current total cost of the project, the original and current completion date, and the percent of construction completed. VA officials confirmed the reliability of the data provided. We selected projects for site visits based on the following criteria; because the sites were not selected randomly, the results cannot be applied to all of VA's major construction projects:
- Construction projects were between 20 percent and 70 percent completed.
- Projects were estimated to cost $75 million or more.
- Projects were among those experiencing the greatest cost increases or schedule delays relative to other VA major medical construction projects.
- Projects were of different types of major construction (new construction, renovation of existing structures, expansion, or a combination) because some cost and scheduling factors may relate to one project type while others are systemic trends that occur across all project types.
- Projects were selected from each of VA's three regions to account for differences in management at VA regional offices that could affect cost increases and schedule delays.
Based on our criteria, we selected three major medical construction sites:
- consolidation of the Brecksville Veterans Affairs Medical Center and the Wade Park Veterans Affairs Medical Center and construction of a new 90-bed tower for patient care in Cleveland, Ohio, estimated to cost $102.3 million and originally to be completed by September 2008 and now scheduled for February 2011;
- construction of a Spinal Cord Injury Center, renovation of the surgical suite, and expansion of the parking garage in Syracuse, New York, originally estimated to cost $53.9 million and be completed by December 2009 and now estimated to cost $84,969,000 and be completed by May 19, 2012; and
- construction of a new, comprehensive Medical Center Complex in Las Vegas, Nevada, that will include a nursing home, ambulatory care center, primary and specialty care, surgery, mental health, rehabilitation, and geriatric and extended care. Originally estimated to cost $286 million and be completed by September 2009, it is now expected to open in March 2012 and cost $600.4 million. The Las Vegas project will also include administrative and support functions and Veterans Benefits Administration offices.
To identify the actions VA has taken to address cost increases and schedule delays as well as the challenges VA faces in managing its major medical construction program, we reviewed the procedures that VA's Office of Construction and Facilities Management put in place beginning in 2007. We also reviewed documentation and interviewed VA headquarters officials and project managers for the sites we visited to determine how estimated costs and schedules had been prepared. We then analyzed the cost estimates and schedules prepared for the three projects we visited and interviewed VA project managers and engineers, contractors, and cost estimators and schedulers to ascertain the extent to which their estimates and schedules compared with the best practices identified in previous GAO work. We used the GAO Cost Estimating and Assessment Guide (GAO-09-3SP) as criteria to analyze cost estimates. For this guide, GAO cost experts assessed 12 measures that are consistently applied by cost-estimating organizations throughout the federal government and industry and that are considered best practices for developing reliable cost estimates. We analyzed the cost estimating practices used by VA in developing its cost estimates against these 12 best practices. After reviewing documentation submitted by VA and information obtained during interviews, we determined the extent to which the cost estimates for the three projects we reviewed met the characteristics of cost estimating best practices. For the purpose of this review, we grouped these practices into four characteristics of a high-quality and reliable cost estimate.
Comprehensive: The cost estimates should include both government and contractor costs of the project over its full life cycle, from inception of the project through design, development, deployment, and operation and maintenance to retirement of the project. They should also provide a level of detail appropriate to ensure that cost elements are neither omitted nor double counted, and they should document all cost-influencing ground rules and assumptions.
Well-documented: The documentation should address the purpose of the estimate, the project background and system description, its schedule, the scope of the estimate (in terms of time and what is and is not included), the ground rules and assumptions, all data sources, the estimating methodology and rationale, the results of the risk analysis, and a conclusion about whether the cost estimate is reasonable. Therefore, a good cost estimate—while taking the form of a single number—is supported by detailed documentation that describes how it was derived and how the expected funding will be spent in order to achieve a given objective. For example, the documentation should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. Finally, the cost estimate should be reviewed and accepted by management to ensure that there is a high level of confidence in the estimate and the estimating process.
Accurate: The cost estimates should provide for results that are unbiased, and they should not be overly conservative or optimistic. Estimates are accurate when they are based on an assessment of most likely costs, adjusted properly for inflation, and contain few, if any, minor mistakes. In addition, the estimates should be updated regularly to reflect material changes in the project, such as when schedules or other assumptions change, so that the estimate always reflects current status. Among other things, the estimate should be grounded in documented assumptions and a historical record of cost estimating and actual experiences on other comparable projects.
Credible: The cost estimates should discuss any limitations of the analysis because of uncertainty or biases surrounding data or assumptions. Major assumptions should be varied, and other outcomes recomputed, to determine how sensitive the results are to changes in the assumptions. Risk and uncertainty analysis should be performed to determine the level of risk associated with the estimate. Further, the estimate's results should be cross-checked, and an independent cost estimate should be developed by a group outside the acquiring organization to determine whether other estimating methods produce similar results.
Our review of project schedules was based on research that identified a range of best practices associated with effective schedule estimating. In addition, we obtained the consulting services of David Hulett, Ph.D., to assist in our risk analysis of the Las Vegas Medical Center project schedule. We analyzed documentation submitted by the VA project office and construction staff for three of VA's major medical construction projects. We also conducted multiple interviews with project managers, contractors, and schedulers to determine the extent to which each project's current schedule met the best practices criteria. These practices include the following.
Capturing all activities: The schedule should reflect all activities (steps, events, outcomes, etc.) as defined in the project's work breakdown structure, including activities to be performed by both the government and its contractors.
Sequencing all activities: The schedule should be planned so that it can meet project critical dates.
To meet this objective, activities need to be logically sequenced in the order in which they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities) as well as activities that cannot begin until other activities are completed (i.e., successor activities) should be identified. Interdependencies among activities that collectively lead to the accomplishment of events or milestones can then be used as a basis for guiding work and measuring progress.
Assigning resources to all activities: The schedule should realistically reflect what resources (i.e., labor, material, and overhead) are needed to do the work, whether all required resources will be available when they are needed, and whether any funding or time constraints exist.
Establishing the duration of all activities: The schedule should reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, data, and assumptions used for cost estimating should be used for preparing the schedule. Further, these durations should be as short as possible and should have specific start and end dates. An excessively long period needed to execute an activity should prompt further decomposition of the activity so that shorter execution durations result.
Integrating schedule activities horizontally and vertically: The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with already sequenced activities (see the previous practice). These links are commonly referred to as "hand offs" and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels can enable different groups to work to the same master schedule.
Establishing the critical path for all activities: Using scheduling software, the critical path—the longest duration path through the sequenced list of activities—should be identified. The establishment of a project's critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that may occur on or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities (see float below).
Identifying float between activities: The schedule should identify float—the time that a predecessor activity can slip before the delay affects successor activities—so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float. (A short sketch illustrating how the critical path and float are computed follows this list.)
Conducting a schedule risk analysis: A schedule risk analysis uses a good critical path method schedule and data about project schedule risks, as well as Monte Carlo simulation techniques, to predict the level of confidence in meeting a project's completion date, the amount of time contingency needed for a level of confidence, and the identification of high-priority risks. This analysis should focus not only on critical path activities but also on other schedule paths that may become critical.
A schedule/cost risk assessment recognizes the interrelationship between schedule and cost and captures the risk that schedule durations and cost estimates may vary because of, among other things, limited data, optimistic estimating, technical challenges, a lack of qualified personnel, and other external factors. As a result, the baseline schedule should include a buffer or a reserve of extra time. Schedule reserve for contingencies should be calculated by performing a schedule risk analysis. As a general rule, the reserve should be held by the project manager and applied as needed to those activities that take longer than scheduled because of the identified risks. Reserves of time should not be apportioned in advance to any specific activity, since the risks that will actually occur and the magnitude of their impact are not known in advance.
Updating the schedule using logic and durations to determine the dates: The schedule should use logic and durations in order to reflect realistic start and completion dates for project activities. The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Maintaining the integrity of the schedule logic is not only necessary to reflect true status but is also required before conducting a schedule risk analysis. The schedule should avoid logic overrides and artificial constraint dates that are chosen to create a certain result on paper. Individuals trained in critical path method scheduling should be responsible for updating the schedule.
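The critical path and float practices above can be made concrete with a small example. The sketch below performs the standard forward and backward passes of the critical path method on a hypothetical four-activity network; the activity names and durations are invented for illustration and are not drawn from any VA schedule.

```python
# Minimal critical path method sketch: a forward pass computes each activity's
# earliest finish, a backward pass computes its latest finish, and total float
# is the difference. Activities and durations are hypothetical.
ACTIVITIES = {
    # name: (duration in days, list of predecessors)
    "foundation": (30, []),
    "steel_erection": (45, ["foundation"]),
    "utilities": (25, ["foundation"]),
    "interior_finish": (40, ["steel_erection", "utilities"]),
}

# Forward pass (activities are listed so that predecessors come first).
early_finish = {}
for name, (duration, preds) in ACTIVITIES.items():
    early_finish[name] = max((early_finish[p] for p in preds), default=0) + duration

project_finish = max(early_finish.values())

# Backward pass: an activity's latest finish is constrained by when each of
# its successors must start at the latest.
late_finish = {name: project_finish for name in ACTIVITIES}
for name in reversed(list(ACTIVITIES)):
    duration, preds = ACTIVITIES[name]
    for p in preds:
        late_finish[p] = min(late_finish[p], late_finish[name] - duration)

# Total float: how far an activity can slip without delaying the project.
for name in ACTIVITIES:
    slack = late_finish[name] - early_finish[name]
    label = "on the critical path" if slack == 0 else f"float of {slack} days"
    print(f"{name}: earliest finish day {early_finish[name]}, {label}")
```

Running the sketch shows foundation, steel_erection, and interior_finish on the critical path, while utilities carries 20 days of float; that is, it can slip up to 20 days before it delays the finish date.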
Based on our work, we determined the extent to which estimates and schedules for the three projects we selected met the best practices criteria, using the following rating scale:
Not Met—Project officials provided no evidence that satisfies any of the criterion;
Minimally Met—Project officials provided evidence that satisfies a small portion of the criterion;
Partially Met—Project officials provided evidence that satisfies about half of the criterion;
Substantially Met—Project officials provided evidence that satisfies a large portion of the criterion; and
Met—Project officials provided complete evidence that satisfies the entire criterion.
We conducted this performance audit from October 2008 through December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained meets these standards.
The major construction project in Cleveland includes consolidating the Brecksville Veterans Affairs Medical Center and the Wade Park Veterans Affairs Medical Center, which are 26 miles apart. As part of this consolidation, a new bed tower is being built at the Wade Park medical center. This bed tower will contain a nursing home and space for psychiatric patients. The project is divided into two phases: phase I includes the construction of an energy center, and phase II includes the construction of a bed tower addition. The project was first initiated by VA under the Capital Asset Realignment for Enhanced Services (CARES) process in 2004 to save money through consolidation and to provide better health care for veterans. According to VA officials, the two medical centers frequently worked together to provide health care for veterans. The Brecksville medical center was primarily a nursing home care unit and psychiatric care facility, and the Wade Park medical center was primarily a surgical care facility. According to VA, it was very expensive to operate and maintain the two physical locations. Patients needing immediate care at the Brecksville medical center were sometimes taken to local area hospitals instead of the Wade Park medical center because of the distance between the two medical centers. Maintaining the two medical centers resulted in duplication of services, decreased operational efficiencies, and issues of continuity of care between the two medical centers. Other inefficiencies included ambulance and wheelchair van costs and outdated modes of providing health care. VA also intended for the project to meet rising demand for services in the Cleveland area and noted that the total number of unique patients at these two medical centers had increased. After considering four alternatives, the medical center staff determined that consolidating the two medical centers at Wade Park would lead to better health care for veterans and provide significant cost savings and other efficiencies. Specifically, consolidation would allow VA to avoid approximately $41 million in non-recurring maintenance and infrastructure improvements at the Brecksville medical center and gain approximately $10.6 million in operational savings per year. The cost estimate to consolidate the two facilities and construct a new bed tower at Wade Park has remained constant at $102.3 million. According to VA officials, the cost estimate is still reasonable for the project through completion. Of the $102.3 million, $15 million was appropriated in fiscal year 2004 and $87.3 million was appropriated in fiscal year 2008. To keep costs within budget, VA closely monitored and reduced the scope of the major construction project. Some of the work was also shifted to a minor construction project. The medical center modified the design plans to eliminate 30 beds and one floor from the bed tower. The 30 beds will instead be relocated to the main hospital, where space is being renovated to accommodate them. The funding for the 30 beds will not come from the appropriated construction funds. Rather, the 30 beds will be funded out of non-recurring maintenance (NRM) funds, which can be used to renovate spaces and purchase equipment needed as a result of that renovation. Our analysis of how the cost estimate met best practices is in table 4. The project was originally a one-phase project scheduled to be completed in September 2008 but is now a two-phase project scheduled to be completed in February 2011. Before construction began, the project was broken into two phases because there was insufficient power capacity to keep the existing hospital functioning while the construction was being completed. As a result, an energy center was added to the design plan and its construction was separated from that of the bed tower. In addition, a property acquisition that took longer than expected delayed the project schedule by nine months. Part of the land that the bed tower is being built on had been donated to the City of Cleveland for use as parkland. The acquisition process was prolonged because the City had to change the use of the donated land before VA could begin construction.
Phasing the project and the delayed property acquisition resulted in a change in the scope of the project, and the project's original completion date was moved from September 1, 2008, to November 9, 2010. The projected completion date was again extended, to February 1, 2011, due to unforeseen site conditions. Specifically, during the construction of the bed tower, crews discovered a sewer line and had to move it before they could continue. According to VA officials, February 1, 2011, is still the projected date for project completion. However, it was not possible for us to determine whether the completion date is reasonable because the project's construction schedule has not undergone a schedule risk analysis, which we have identified as a best practice in scheduling. As of August 2009, VA has completed the energy center and is constructing the bed tower addition. The construction schedule for this project generally followed best practices but, as stated, did not include a schedule risk analysis. Specifically, while the schedule met eight of nine scheduling best practices, it did not undergo a risk analysis to determine the major risks to the schedule and the likelihood of the project being completed on time. Our analysis of how the schedule met best practices is in table 5. This project includes the construction of a 30-bed center for treating spinal cord injuries to be attached to the current VA medical center in Syracuse, New York. The project also includes adding two levels to the current parking garage. The project is divided into two phases: phase I includes the addition to the parking garage, and phase II includes the construction of the Spinal Cord Injury/Disease (SCI/D) center. VA initiated this project under the CARES process in February 2004 because the Veterans Integrated Service Network (VISN) did not have the ability to treat acute spinal cord injuries. Syracuse had the only inpatient rehabilitation unit and SCI/D expertise within the VISN, so VA decided to put the new SCI/D center in Syracuse. The project cost has increased from the original estimate submitted to Congress of $53.9 million to $84,969,000 (an increase of 58 percent). According to VA officials in Syracuse, this estimate was developed in about 6 weeks and was based on the total square footage required multiplied by the cost per square foot of new construction. Congress authorized $53.9 million for the project in 2004 and appropriated about $53.4 million in the Consolidated Appropriations Act for FY 2005. According to VA officials in Syracuse, the main reason for the cost increase is that the initial estimate did not fully consider several factors. The original design of a new SCI/D center did not include money for additional parking. However, after the project had been approved by Congress and was in design, VA officials in Syracuse commissioned a study to examine future parking needs at the Syracuse medical center. The study concluded that, based on the new SCI/D center and projected demand from patients and staff, there should be an additional 429 to 528 parking spaces at the medical center. As a result of this study, VA officials in Syracuse decided to add two floors onto the existing parking garage at an estimated cost of $10 million. In addition to parking, stakeholders identified needed changes in the project's scope.
Specifically, the Paralyzed Veterans of America insisted that there be a dedicated entrance from the parking garage to the SCI/D center, which is being built on the 4th floor of the medical center. This dedicated entrance would allow veterans with spinal cord injuries to enter the center directly from the parking garage, without requiring the veterans to go down to the street from the parking garage, outside to the main entrance of the medical center, and then up to the 4th floor of the medical center for treatment. According to VA staff in Syracuse, VA agreed to make changes that would improve access to the facility, and this increased the cost of the project and delayed the project's schedule. As a result of these changes to the project's scope, VA received an additional $23.8 million from Congress in fiscal year 2008. Our analysis of how the cost estimate for the SCI/D center met best practices is in table 6. VA initially estimated that the project would be completed by December 6, 2009. VA awarded the contract to construct the SCI/D center on August 12, 2009, and estimates that the SCI/D center will be completed in May 2012, or 29 months after the first estimated completion date. The schedule delays and cost increases occurred before construction began, and once construction commenced we found that the construction schedule for this project generally followed best practices. Specifically, the schedule met eight of nine scheduling best practices but did not undergo a schedule risk analysis to determine the major risks to the schedule and the likelihood of the project being completed on time. Our analysis of how the schedule met best practices is in table 7. This project involves construction of a comprehensive Medical Center Complex in Las Vegas, Nevada. The complex will consist of up to 90 inpatient beds, a 120-bed nursing home care unit, an ambulatory care center, primary and specialty care, surgery, mental health, rehabilitation, and geriatrics and extended care, as well as administrative and support functions. VA also plans to include Veterans Benefits Administration offices attached to the medical center. The project is divided into four phases. Phase I includes the construction of a new utility building and related infrastructure such as streets, sewers, and connections to electric and water utilities that are miles away from the construction site. Phase II includes the construction of the foundation of the new medical center. Phase III includes the construction of the nursing home care unit, and Phase IV includes the construction of the medical center and the Veterans Benefits Administration offices. VA initiated the medical center project under the CARES process between 2003 and 2004 because, according to VA officials, the increase in the number of Iraq war veterans needing medical care, combined with the growth in the Las Vegas area, supported building a large medical center. Outpatient medical care for veterans in the area was provided at 15 leased primary care clinics located throughout the Las Vegas area. Inpatient services were provided under a joint venture with the Air Force's Mike O'Callaghan Federal Hospital located at Nellis Air Force Base. However, some VA patients had to be sent to other VA hospitals for care that could not be provided at the Mike O'Callaghan hospital, such as treatment of spinal cord injuries.
VA officials said VA initially sought to expand its medical services and construct a nursing home at Nellis Air Force Base in 2004, but the Air Force would not agree to such an expansion and advised VA that the number of veterans' inpatient beds would likely have to be reduced in the future. As a result, VA decided to construct a new comprehensive medical complex, including a nursing home care unit. The cost of the medical center has increased from an initial estimate of $286 million in 2004 to a current estimate of $600.4 million (an increase of 110 percent). In accordance with these increased cost estimates, Congress has appropriated $600.4 million for the medical center, providing $60 million for fiscal year 2004, an additional $199 million for fiscal year 2006, and $341.4 million for fiscal year 2008. The original estimate to Congress was based on plans for a large VA clinic. However, VA later determined that a much larger medical center was needed in Las Vegas after it became clear that the inpatient medical facility it shares with the Department of Defense would not be adequate to serve the medical needs of local veterans. Since the estimate for the Las Vegas medical center was based on a preliminary design for an expanded clinic, additional functions had to be added to the clinic design to provide the services necessary for the medical center. This expansion of the scope of the project resulted in both a cost increase and a schedule delay for the project. According to VA officials, a lack of planning and the omission of key facilities contributed to the cost increases. Specifically, VA officials stated that the original cost estimate did not correctly anticipate the amount of preparation that the site needed. For example, the original estimate did not include funding for the roads and street lights required for the facility. In addition, VA could not have anticipated that the Department of Homeland Security would institute new requirements for federal facilities as part of its continuing response to the events of September 11, 2001, which resulted in further cost increases. VA officials also explained that the nationwide increase in construction, the rebuilding in the New Orleans area since Hurricane Katrina, and the local building boom in Las Vegas drove up the cost of material and labor. The Las Vegas area had several multibillion-dollar projects underway. Locally, construction costs increased over 20 percent in 2006 and 2007, while the standard that VA uses for contingencies is 5 percent. To illustrate, VA staff told us that Las Vegas builders had tied up almost 80 percent of the nation's large cranes used to build tall buildings. According to VA officials, in response to the increasing costs, VA and the architectural/engineering firm preparing the medical center design reduced the scope of work for the final phase of the project. Gross square footage was reduced from about 900,000 square feet to 785,000 square feet, and the extra space between floors for mechanical and electrical cables that would have made maintenance easier was eliminated. VA also reduced warehouse space and space for administrative offices because estimators were concerned that the project could not be completed with the funds available. The medical center warehouse, which is used to store maintenance and medical supplies, was reduced to one-third of its originally proposed size.
As a result, the hospital will need to acquire warehouse storage and procure warehouse management services from contractors outside of the VA facility. The economic recession that began in 2008 led several companies to suspend their construction projects in Las Vegas, and there was greater competition among construction firms to construct the hospital. This change in the construction market led to a significantly lower cost of construction than VA staff had anticipated, and VA now estimates that the total project will cost about $100 million less than previously estimated. As a result, VA officials explained, they are taking steps to add eliminated features back into the medical center prior to completion. For example, a utility tunnel running from the utility building to the medical center was added back to the project once the construction contract was awarded and VA saw that it had funds available. Adding this tunnel will reduce operating and maintenance costs for the medical center. VA officials are also reviewing their options for adding back other features that had been eliminated, such as administrative offices. This would save operating costs by eliminating the need to lease office space. Our analysis of VA's current cost estimate for the construction of the medical center is in table 8. The first two phases of the project have been completed and, according to VA officials, Phase III will be completed in February 2010. However, the nursing home completed in Phase III of the project will not be open for patient care until the medical center becomes operational in 2012, as the nursing home relies upon the hospital for patient medical care and food service. Since the nursing home will be vacant for about 2 years before the medical center opens, VA may use part of the nursing home for administrative offices. The final phase of the project, the construction of the new medical center, is underway, with completion scheduled for August 2011. According to VA officials, the medical center is scheduled to become operational in the spring of 2012, depending upon how quickly the equipment for the hospital can be purchased and the additional personnel can be hired. Our analysis of the construction schedule of the medical center is in table 9. The sole best practice that the schedule did not meet is conducting a schedule risk analysis (SRA), which is not required by the VA schedule specifications. VA officials told us that they do not conduct SRAs and that a risk analysis is typically not performed in the construction industry. In August and September 2009, we performed our own schedule risk analysis on the construction schedule. Specifically, we analyzed the C07P schedule, which was the latest statused version available to us at the time of the analysis. A schedule risk analysis uses statistical techniques to predict a level of confidence in meeting a project's completion date. This analysis focuses on critical path activities as well as near-critical and other activities, since any activity may potentially affect the project's completion date. The objective of the simulation is to develop a probability distribution of possible completion dates that reflects the project and its quantified risks. From the cumulative probability distribution, the organization can match a date to its degree of risk tolerance.
For instance, an organization might want to adopt a project completion date that provides a 70 percent probability that it will finish on or before that date, leaving a 30 percent probability that it will overrun, given the schedule and the risks. The organization can thus adopt a plan consistent with its desired level of confidence in the overall integrated schedule. This analysis can give valuable insight into what-if drills and quantify the impact of project changes. In developing a schedule risk analysis, probability distributions for each activity's duration have to be established. Further, risk in all activities must be evaluated and included in the analysis. Some people focus only on the critical path, but because we cannot know the durations of the activities with certainty, we cannot know the true critical path. Consequently, it would be a mistake to focus only on the deterministic critical path when some off-critical-path activity might become critical if a risk were to occur. Typically, three-point estimates—that is, best, most likely, and worst case estimates—are used to develop the probability distributions for the duration of workflow activities. Once the distributions have been established, a Monte Carlo simulation uses random numbers to select specific durations from each activity's probability distribution and calculates a new critical path and new dates, including major milestones and project completion. The Monte Carlo simulation continues this random selection thousands of times, creating a new project duration estimate and critical path each time. The resulting frequency distribution displays the range of project completion dates along with the probabilities that these dates will occur. Table 10 provides a range of dates and the probability of the project completing on those dates or earlier, based on our 3,000-iteration Monte Carlo simulation. For example, according to our SRA, there is a 5 percent chance that the project will finish on or before December 1, 2011. Likewise, there is an 80 percent chance that the project will finish on or before May 17, 2012. Because completion on any date is uncertain, it is more realistic to show a range of possible completion dates than to focus on a single date. There is no international best practice standard for deciding which percentile to use for prudent scheduling; the chosen percentile depends on the riskiness and maturity of the project. For some projects we emphasize the 80th percentile as a conservative promise date. While the 80th percentile may appear overly conservative, it is a useful promise date if a number of new but presently unknown risks (i.e., "unknown unknowns") are anticipated. The 50th percentile date may expose the project to overruns. In the case of the medical center construction schedule, our analysis concludes that VA should realistically expect turnover from the general contractor between March 1, 2012, and May 17, 2012, the 50th and 80th percentiles, respectively. The "must finish" date of August 29, 2011, is very unlikely to be met. Our analysis shows that the probability of completing medical center turnover by October 20, 2011, is less than 1 percent with the current schedule and without risk mitigation. The project executive identified 22 different risks as a preliminary exercise for our SRA. Using these risks as a basis for discussion, we interviewed 14 experts familiar with the project, including VA resident engineers, general contractor officials, and A/E consultants. Each interviewee was asked four general questions:
1. To provide an estimate of the probability that an identified risk will occur on the project in such a way that some activity durations are affected. The estimated probability is translated into the percentage of the simulation's iterations, chosen at random, in which the risk occurs. For example, if the expert believed weather has a 10 percent chance of affecting some activities, then, on average, the weather risk will occur in 10 percent of the Monte Carlo iterations.
2. If the interviewee believed the risk could occur, the interviewee was asked to identify which activities' durations would be affected. For example, activities related to steel erection or concrete pouring may be affected if the weather risk occurs.
3. Upon identifying affected activities, interviewees were then asked to provide a 3-point estimate of the impact on duration: low, most likely, and high impact estimates. Estimates were provided as percentages, which were applied to the activity durations in the Monte Carlo simulation if the risk occurred. For example, if the weather risk occurs, a 10-day steel erection activity may be affected by a minimum of 110 percent, a most likely value of 150 percent, or a maximum of 200 percent (i.e., the 3-point estimates for steel erection if the weather risk occurs are 11 days minimum, 15 days most likely, and 20 days maximum). If the risk does not occur, there is no change to the original estimated duration.
4. Finally, interviewees were asked to identify any risks they believed we did not account for.
We began the interviews with 22 risks and through the interview process identified 11 more. During data analysis, some risks were consolidated with others or eliminated because of a low amount of data. In all, 20 risks were incorporated into the Monte Carlo simulation: 18 risk drivers, 1 schedule duration risk, and 1 overall system commissioning activity that was not included in the baseline schedule. The final risk drivers used in the SRA are:
- Occupancy needs may change.
- Design may be inadequate.
- Steel design may be inadequate.
- Medical technology may change.
- Work may be misfabricated.
- Equipment may not meet design requirements.
- Subcontractors may fail.
- Suppliers may not deliver equipment on time.
- Resident Engineer (RE) staffing may be inadequate.
- Contractor field office staffing may be inadequate.
- Architect/Engineer (A/E) staffing may be inadequate.
- Labor may not be available.
- Contractor coordination problems may exist.
- Communication between the RE, contractor, and A/E may be ineffective.
- Problems may be experienced in testing systems.
- Construction disciplines may not be coordinated.
- Vendor drawings may not be submitted on time.
- Change orders under $100,000 may affect the schedule.
Most risks received multiple responses during the interviews. During data analysis, we combined and analyzed the data from the interviews to create ranges and probabilities for each of the 18 risk drivers. Because risks are applied multiplicatively, several risks occurring on the same activity may overestimate the true risk. That is, in the Monte Carlo simulation, risks occur in a series, one after another, so that an activity that has several risks may be unrealistically extended if all of its risks occur. For example, drawing approval activities may be affected by RE, contractor field office, or A/E staffing being inadequate, as well as by the schedule duration risk; if all of these risks occurred, the durations of drawing approval activities would most likely be overestimated. In reality, an activity may successfully recover from two or more risks simultaneously, so the actual risk is not fully multiplicative. Therefore, to avoid overestimating risk, the impact ranges of risks that occur together are reduced. That is, the 3-point duration estimates for risks that frequently occur together were reduced; in this particular analysis, we decreased the estimated duration impact ranges by a factor of 0.3. This adjustment helps temper any overestimated risk caused by a multiplication of risk factors.
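A minimal sketch of this risk-driver mechanism is shown below. The 10 percent weather probability and the 110/150/200 percent impact range echo the example above; the second risk, the dampening interpretation (multiplying a co-occurring risk's added impact by 0.3), and all code names are illustrative assumptions rather than details of our actual model.

```python
import random

# Hypothetical risk drivers: each occurs with some probability in a given
# iteration and, when it occurs, stretches an affected activity's duration by
# a multiplier sampled from a 3-point (min, most likely, max) estimate.
RISKS = [
    {"name": "weather", "probability": 0.10, "impact": (1.10, 1.50, 2.00)},
    {"name": "labor_shortage", "probability": 0.25, "impact": (1.05, 1.20, 1.40)},
]

DAMPENING = 0.3  # tempering factor for additional risks hitting the same activity

def risked_duration(base_days, risks=RISKS):
    """Sample one iteration's duration for a single activity."""
    duration = base_days
    risks_hit = 0
    for risk in risks:
        if random.random() < risk["probability"]:
            low, mode, high = risk["impact"]
            extra = random.triangular(low, high, mode) - 1.0
            if risks_hit > 0:
                # Reduce the added impact of co-occurring risks so that serial
                # multiplication does not overstate their combined effect.
                extra *= DAMPENING
            duration *= 1.0 + extra
            risks_hit += 1
    return duration

# A 10-day steel erection activity: if only the weather risk occurs, the
# sampled duration falls between 11 and 20 days, matching the example above.
print(round(risked_duration(10), 1))
```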
Of the 6,098 activities in the schedule, 3,193 had risk drivers assigned to them. Some activities had one or two risks assigned, but some had as many as seven. Risks can affect the schedule in several ways: they can have a high probability of occurring, they can have a large percentage impact on the durations of the activities they affect, or they can apply to risk-critical paths, which may differ from the baseline deterministic critical path. Beyond applying the 20 risks to the schedule, we are interested in identifying the marginal impact of each risk, that is, which risks have the largest impact on the schedule, because these are the risks that should be targeted first for mitigation. To find the marginal impact of a risk on the total project risk at a certain percentile, the Monte Carlo simulation is performed with the risk removed. The difference between the finish dates of the simulation with all the risks and the simulation with the missing risk yields the marginal impact of the risk. Table 11 gives the priority of risks at the 80th percentile and the marginal impact of each risk. The marginal impact directly translates to the potential calendar days saved if the risk is mitigated. Once risks are prioritized at the percentile desired by management, a risk mitigation workshop can be held to deal with the high-priority risks in order. The prioritized list of risks forms the basis of the workshop, and risk mitigation plans can be analyzed using the risk model to determine how much time might be saved. Project managers cannot expect to completely mitigate any one risk, nor is it reasonable to expect to mitigate all risks. In addition, risk mitigation will add to the project budget. However, some opportunities may be available to partially mitigate risks.
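The marginal-impact calculation just described can be sketched as follows: simulate the project with the full risk set, then again with one risk removed, and compare the 80th-percentile results. The risk names loosely echo risks discussed in this appendix, but the probabilities, impact ranges, 300-day baseline, and the simplified model (a single aggregate duration with no activity network or dampening) are all hypothetical.

```python
import random

# Hypothetical risk set; probabilities and impact multipliers are invented.
RISKS = [
    {"name": "design_inadequate", "probability": 0.30, "impact": (1.05, 1.25, 1.60)},
    {"name": "occupancy_change", "probability": 0.20, "impact": (1.05, 1.20, 1.50)},
    {"name": "supplier_delays", "probability": 0.15, "impact": (1.02, 1.10, 1.30)},
]

def simulate(base_days, risks):
    """One iteration: apply each occurring risk's sampled multiplier."""
    duration = base_days
    for risk in risks:
        if random.random() < risk["probability"]:
            low, mode, high = risk["impact"]
            duration *= random.triangular(low, high, mode)
    return duration

def pct80(risks, base_days=300, iterations=3000):
    """80th-percentile project duration under a given risk set."""
    samples = sorted(simulate(base_days, risks) for _ in range(iterations))
    return samples[int(iterations * 0.8)]

baseline = pct80(RISKS)
for i, risk in enumerate(RISKS):
    # Rerun with this one risk removed; the difference approximates the
    # calendar days that mitigating it could save at the 80th percentile.
    saved = baseline - pct80(RISKS[:i] + RISKS[i + 1:])
    print(f"{risk['name']}: roughly {saved:.0f} days of marginal impact")
```

Because each call to pct80 reruns the simulation with fresh random draws, small negative values can appear for low-impact risks; a production analysis would reuse common random numbers or a fixed seed per comparison to stabilize the estimates.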
During our interviews with the local VA office in North Las Vegas, we identified several activities missing from the schedule:
- redesign for ductwork;
- submittal, approval, fabrication, and delivery of all Division 16 (electrical equipment);
- effort related to building the tunnel from the central plant to the hospital basement;
- delivery of VA-furnished equipment to the general contractor;
- systemwide testing; and
- effort related to telecommunications.
Missing activities lead to an underestimation of schedule risk because these activities may become critical either in the baseline schedule or in the SRA. In particular, the missing fabrication and delivery activities for electrical equipment assume that the equipment will be at the construction site when needed. Since the schedule does not contain activities for the delivery of this equipment, risks leading to delays in its delivery are not reflected in the SRA results. Additionally, during our analysis, we identified 58 remaining activities with finish dates that did not drive successor activities; that is, the activities are open ended. This is a potential problem because an open-ended activity can have an extended duration and not drive any successor in the SRA simulation. However, officials stated that they were aware of these open ends and did not believe them to be an issue. We found some projects that experienced both cost increases and schedule delays, while other projects experienced only a cost increase and still others experienced only a schedule delay. All projects, and whether they experienced a cost increase, a schedule delay, or both, are shown in table 12. In addition to the contact person named above, Tisha Derricotte, Colin Fallon, Hazel S. Gumbs, Ed Laughlin, Jason T. Lee, Susan Michal-Smith, Karen Richey, John W. Shumann, and Frank Taliaferro made key contributions to this report.
VBA's business environment encompasses many difficult challenges. These include a backlog of disability claims, improving a number of relationships with other organizations that affect how VBA does its work, and responding to customers who are frustrated by the long-standing need to improve the accuracy and timeliness of processing claims. To deal with these issues, as well as cope with today's constrained budgetary climate, the agency is undertaking a number of major initiatives, including beginning a business process reengineering effort for its compensation and pension programs, restructuring its regional office responsibilities, and consolidating its data centers. VBA has, however, been proceeding without an overall business strategy clearly setting forth how it will improve its performance and tackle entrenched service-delivery problems. For example, the reported backlog of original and reopened disability claims increased from 378,000 in fiscal year 1990 to a high of 571,000 at the end of December 1993. This rise was due to several factors, including increasing complexity in claims processing and the use of inexperienced regional claims raters. VBA instituted several conventional stopgap measures to deal with this backlog. It authorized extensive overtime, shifted workloads among regional offices, purchased information technology equipment, increased the number of claims raters by about one third (from 667 to 897), and relaxed some paperwork requirements, such as accepting photocopies of certain documents. As a result, the backlog has been reduced, but it is still about 380,000, similar to the fiscal year 1990 level. Similar trends have been experienced in the processing times for original disability compensation claims, which rose from an average of 151 days in fiscal year 1990 to 213 days in fiscal year 1994. The stopgap measures used to decrease the backlog have also reduced the average processing time in fiscal year 1995 to 161 days, still 10 days more than the fiscal year 1990 level. VBA officials acknowledge that these measures cannot be sustained over a prolonged period of time. VBA must, therefore, find other solutions to achieve greater service-delivery breakthroughs. Other entities also affect the speed with which VBA processes claims and the agency's overall direction. For example, VBA relies on the Veterans Health Administration for most medical information needed to substantiate a disability claim, and on the Department of Defense for information relating to a veteran's service time and conditions of discharge, as well as medical information from the veteran's tour of active duty. Delays by either of these organizations can have a significant impact on the timeliness of VBA's claims processing. Judicial review organizations also affect VBA's workload and backlog. For example, the Board of Veterans' Appeals returns almost half of its cases to VBA regional offices for additional development and reconsideration each year. The Board itself also has a significant and increasing backlog of cases; its appeals grew from about 19,500 in fiscal year 1990 to more than 50,000 in fiscal year 1995, an increase of more than 150 percent. It takes the Board about 2 years to render a decision from the date it receives an appeal. In addition, VBA, like most federal agencies, must deal with constrained resource levels and, at the same time, maintain existing levels of service and operations. VBA is in the process of restructuring its regional offices in an effort to cope with declining resources.
At the same time, funding for VBA's information technology initiatives is discretionary and, as such, comes under close budgetary scrutiny by the Congress and the Office of Management and Budget (OMB). A comprehensive business strategy is needed—one that includes developing strategic and information resources management plans, setting performance goals and measures, and incorporating the results of major agency initiatives, such as business process reengineering. VBA is moving in this direction; currently, however, it has no clearly articulated business strategy. Recent legislative changes provide the framework for VBA to develop such a strategy and identify the tools needed to implement it. For example, the Government Performance and Results Act of 1993 requires agency heads to submit to OMB and the Congress a strategic plan for program activities, including a mission statement, goals and objectives, and a description of how these will be achieved and what key factors could affect their achievement. The act also requires that agencies prepare annual performance plans for each program, including performance indicators that will allow measurement of outputs and service levels. In addition, the Information Technology Management Reform Act of 1996 requires agency heads to establish goals for improving the efficiency and effectiveness of agency operations and, as appropriate, the delivery of services to the public, through more effective use of information technology and business process reengineering. VBA's weaknesses in planning have been well documented since 1987. VBA's planning process has been cited by us and others for (1) not having specific, measurable goals and objectives against which progress can be assessed and (2) not analyzing the costs and benefits of alternative approaches to modernization. According to VBA officials, they are in the process of developing strategic and information resources management plans and will have them ready to use in preparing the agency's budget submission for fiscal year 1998. Assistance in this area could come from the National Academy of Public Administration, which has recently been commissioned by the Senate Appropriations Committee. In its September 1995 report on the 1996 appropriations bill, the Committee provided $1 million to the Academy for a comprehensive assessment of VBA, with particular emphasis on the specific steps required to make claims processing more efficient and less time-consuming. The Academy will evaluate the modernization initiative and its link to strategic goals and priorities, efforts to reengineer VBA's claims-processing methodology, performance measures for restructuring, and the roles of the Board of Veterans' Appeals and the Court of Veterans Appeals. As of a few weeks ago, VBA was still working out the details of this study with the Academy. VBA also needs to develop a full set of performance goals and measures. At present, processing timeliness is the primary performance measure that VBA uses. Customer-focused goals, aimed at improving the quality of service, are also needed. For example, a VBA survey of "stakeholders" indicated that, in their view, an emphasis on quality over productivity alone would be the key to service excellence at VBA. These stakeholders defined quality as making the correct award decision the first time, which would improve the timeliness of claims processing and reduce the number of appeals filed. VBA's current goal for claims processing was set without the benefit of any clear plan.
For example, its goal is to reduce average original compensation claims processing time to 106 days by 1998; this goal was set as part of a 1993 agreement with OMB to establish outcome-oriented performance goals. The performance goal is not linked to a business strategy or plan that explains how the agency intends to achieve it. Reengineering is key to achieving the major performance improvements that VBA establishes as business goals. As our 1994 study pointed out, organizations that successfully develop information systems do so only after thoroughly analyzing and redesigning their current business processes. Information system projects that do not first consider business process redesign typically fail, or reach only a fraction of their full potential. In response to concerns raised by us and others over the past 3 years, VBA is preparing to reengineer its compensation and pension claims-processing operations, and it has taken several positive steps. In November 1995 the agency established a Business Process Reengineering Office, and it subsequently adopted a business process reengineering methodology. It also hired a consultant to assist with reengineering. By the end of this month, a business process reengineering team composed of VBA staff and the consultant is expected to have completed a key step in the process by developing a proposal for changing the compensation and pension business processes. This proposal will be submitted to VBA management for review and approval before implementation. VBA also plans to begin a different business analysis project each year for its other four business areas. The next area planned for such an analysis is educational assistance. It is still too early to judge whether the current business process reengineering effort will help VBA achieve its goals, but we continue to have some concerns about its focus and approach. For example, VBA has not yet set quantifiable performance measures using the experiences and performance of other leading claims-processing organizations. Also, the scope of VBA’s analysis and reengineering of its business processes in the compensation and pension area does not address the claims appeal process, which has a significant impact on the timeliness and quality of some claims-processing decisions. Finally, as I will discuss later, we are concerned that reengineering is not the driver behind all of VBA’s information technology initiatives. To solve entrenched problems and sustain long-term improvements in service delivery and operations, VBA must first know exactly what it needs to pay attention to and where it wants to go. A business strategy containing specific goals and performance measures is absolutely essential. By effectively using the framework established in recent legislation to develop the business strategy and complete its strategic and information resources management plans, VBA will go a long way toward setting out a clear path to be followed. VBA’s investment in modernization activities has yielded some improvement in hardware and software applications. However, it is difficult to measure the return on any of these investments. As shown in attachment 1, between fiscal years 1986 and 1995, VBA reported that it obligated about $688 million for information technology, of which about $284 million, or about 40 percent, was for systems modernization. In December 1992 VBA awarded the first contract in its planned three-stage procurement.
During stage I, VBA acquired a number of personal computers, local area networks, minicomputers, and commercial off-the-shelf software for its 58 regional offices; during stage II, VBA procured imaging equipment and associated software. Stage III was suspended in 1994; during this stage, VBA was to procure mainframe computers for its data centers in Hines, IL, and Philadelphia. VBA has also realized some limited benefits from the development of several short-term, targeted software applications that are being used on equipment acquired during stage I. These projects include (1) Control of Veterans Records—used to track the location of veterans’ claims folders containing application-related information; (2) Rating Board Automation—used to generate letters to veterans regarding rating decisions; and (3) Personal Computer-Generated Letters—used to prepare general letters to disability claimants. To help manage its information technology investments in a way that will lead to major returns, VBA must now meet the challenges of new information technology legislation that has been modeled after the best practices of leading private and public organizations. For example, the Information Technology Management Reform Act and the Paperwork Reduction Act require agency heads to analyze the agency’s mission and, on the basis of this analysis, revise business processes as appropriate; design and implement a process for maximizing the value and assessing and managing the risks of information technology acquisitions; integrate budgetary, financial, and program management decisions in this process; and use this process to select, control, and evaluate the results of information technology initiatives. VBA needs to make major improvements in the way it manages its information technology investments to meet these legislative requirements. Our analysis of past and current VBA information technology initiatives shows that VBA lacks the critical cost, benefit, and risk information necessary to determine whether it has made worthwhile investments. Our analysis also shows that these initiatives preceded VBA’s business process reengineering effort, which increases the risk that they may need to be substantially changed or abandoned once reengineering results become available. For example: Between fiscal years 1993 and 1995, VBA purchased 24 minicomputers without having a clear understanding of the software applications to be placed on the equipment or the benefits to be derived from this investment. Although VBA expected to use these minicomputers in processing claims, they were not put into use until recently, when VBA began testing its software application to track claims folders. This was done at four sites: Baltimore; St. Petersburg; San Juan, Puerto Rico; and Winston-Salem, NC. At VBA’s educational assistance processing sites in Atlanta and St. Louis, the agency has acquired and is in the process of installing imaging equipment to scan all documents in the chapter 30 education claims folders, which contain an average of 30 documents each. VBA has not, however, performed any reengineering analysis for the educational assistance area to assess how the imaging equipment could be used to improve education claims processing. In addition, while VBA has begun to collect baseline information to compare against post-implementation data in order to determine what impact the equipment will have on its operations at the Atlanta site, such information has not been collected for St. Louis, which has been using such equipment since 1987.
Also, this past March VBA embarked on a 2-year effort at its St. Petersburg regional office to replace its current benefits payment system. The objectives of this replacement system were to (1) permit more timely updating of master benefit files through on-line access, (2) provide national access to service organizations that must respond to veterans’ questions about the status of their claims, and (3) address the potential effects of processing benefits payments and other critical information after the turn of the century. This recent project has several inherent risks that must be assessed before VBA can determine if this initiative will be worth the investment. First, the project team, composed of VBA staff and contractor personnel, will be using a new software development language and a rapid application development methodology. While this methodology is used more frequently in the private sector, it has not been previously used at VBA. When it is used, highly skilled and experienced people are a necessity. Given both VBA’s and the contractor’s unfamiliarity with this methodology, the staff and contractor must learn the new tools and become proficient with them so as not to jeopardize the implementation of the replacement payment system, scheduled for 1998. We believe that this initiative is high risk because the payment replacement system timetable was based on unrealistic assumptions about the productivity and skills of newly trained, inexperienced people, and about the level of complexity of the task. Further, as I will discuss in more detail in a few moments, although VBA is in the process of developing software for its replacement system, our evaluation found that VBA is very weak in its ability to develop software and manage software-development contracts. This factor substantially increases the risks associated with this project. Another risk is that this project was not following sound systems-development practices. For example, VBA’s system development guidelines—policies and procedures used to design and develop computer software and systems—call for verification and validation of the system requirements before proceeding from one phase of system development to the next. VBA’s implementation of the standard systems-development process consists of four phases: planning, analysis, design, and construction. It has been demonstrated that proceeding to a subsequent phase without reviewing the work done in the current phase for correctness, consistency, and completeness will almost always adversely affect a project’s cost, performance, and delivery schedule. VBA directed the project team to proceed into the system design phase, however, without completing this important step. Further, the data model that is being used to develop the replacement payment system has not been completed, although this should have been done prior to proceeding into the system design phase. The incomplete requirements verification and validation and the incomplete data model increase the risk that the system will be designed incorrectly. Also, VBA does not have cost-benefit information with which to assess its return on this investment. For example, it has not estimated the total amount of software that must be developed, or its cost. In addition to lacking the information to determine whether or not specific projects will pay off, VBA also lacks a process that ranks and prioritizes its investments in information technology as a consolidated portfolio.
VBA is undertaking several projects simultaneously, without full consideration of the resources required, costs, risks, and potential impact on agency operations. Current system-development activities—including addressing the year-2000 issue, data-center consolidation and related software conversion, and replacement of the benefits payment system—are all examples of investments that have not been ranked or prioritized. Year 2000. Like all other federal agencies—and private businesses—VBA must address the effects of processing information in light of the change of century. Most of the computer software in use today employs 2-digit date fields. Consequently, at the turn of the century, computer software will be unable to distinguish between the years 1900 and 2000, since both would be designated “00.” Industry and government experts have already gone on record saying that correcting this problem will be extremely costly and time-consuming and will require early and detailed planning. If the year-2000 problem is not addressed, it will render the vast majority of date-sensitive computer information unusable or obsolete. For example, calculations based on incorrect dates of service could result in errors in processing benefit checks in the compensation and pension programs. In VBA’s educational assistance program, VBA could send threatening debt-collection letters to veterans who do not actually owe money; charge incorrect interest rates to veterans or charge interest to veterans who do not owe money; or send debtor information to the Internal Revenue Service for refund withholding, to the federal government for wage garnishment, or to private credit firms to go on a veteran’s credit report. In our opinion, the year-2000 issue is an absolutely critical challenge that VBA faces over the next 2-3 years. Some of the computer code was developed more than 20 years ago, using nonstandard coding techniques. In some cases, the software documentation may be incomplete or nonexistent. It is essential that VBA develop and implement a strategy to address the inherent risks that accompany the year-2000 change. First, a sufficient number of experienced staff must be devoted to this task, especially since VBA must maintain its current software and service levels at the same time that it is correcting date-sensitive code. Second, it will need to complete the programming by 1998, since industry experts recommend that 1999 be reserved for thoroughly testing the year-2000 changes. Third, VBA must have a contingency plan that outlines alternatives for processing claims if systems are not corrected.
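The severity of the problem follows directly from how 2-digit year arithmetic behaves at the century boundary. The following minimal sketch is our own illustration, not code drawn from VBA's systems; the function and values are hypothetical.

```python
# Hypothetical illustration of the 2-digit date-field problem. This is not
# VBA code; it simply mirrors the year arithmetic used by much legacy software.

def years_elapsed(start_yy: int, current_yy: int) -> int:
    """Compute elapsed years from two 2-digit year fields."""
    return current_yy - start_yy

# A benefit record dated 1970 ("70"), processed in 1999 ("99"):
print(years_elapsed(70, 99))  # 29 -- correct

# The same record processed in 2000, stored in a 2-digit field as "00":
print(years_elapsed(70, 0))   # -70 -- the software cannot tell 2000 from 1900
```

Any comparison, sort, or interest calculation built on such fields fails in the same way, which is why every date-sensitive program and file must be located and corrected.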
Data-Center Consolidation and Related Software Conversion. In response to a request from OMB, VA and VBA are in the process of developing a strategy paper to reduce operational costs by consolidating their data centers. However, critical information on costs and benefits is missing—information needed to determine how and when this should be done and how this effort ranks in priority against competing demands, such as the year-2000 activities. Currently, VA’s data center is in Austin, Texas, and uses IBM computer equipment to process the Department’s accounting and financial management information related to administrative operations. VBA’s two data centers—Hines and Philadelphia—use mostly Honeywell equipment; the Hines facility primarily processes disability (compensation and pension) claims, while Philadelphia processes insurance claims. The joint VA/VBA data-center consolidation strategy paper is due to OMB in July. Because the data-center consolidation approach must also consider converting the current software to run on more modern computer equipment, added risks must be considered. Specifically, VBA is considering converting the Benefits Delivery Network software—currently in use at Hines—to more modern computer equipment. The cost and time frames for this conversion will depend upon which of the three data centers is chosen as the site for Benefits Delivery Network processing. To date, two studies have been commissioned to evaluate the software conversion. The first, commissioned by VA, estimated the cost and time frames for moving the current Benefits Delivery Network to IBM equipment; the second, commissioned by VBA, assessed the feasibility of converting the Benefits Delivery Network software. The second study found that such a conversion is feasible and would likely take 2-3 years to complete. Neither study, in our view, provides enough information on all three sites to adequately assess the investment needed, nor do they fully address General Services Administration (GSA) criteria for making software conversion decisions. Neither contains an analysis of alternative approaches or a full description of the costs, benefits, and risks of conversion. We have discussed our analysis with VA and VBA officials, and they agree with our assessment of these studies. VA has since hired another consultant to analyze the costs and benefits and to develop a strategy for data-center consolidation. Until the results of this study are available, VBA will not be able to identify the best approach to take. The conversion of the Benefits Delivery Network software must be carried out correctly in order to realize the potential benefits of data-center consolidation. This conversion will require much work and a dedicated staff with in-depth knowledge of the existing network software; such knowledge currently resides at VBA’s Hines data center. It will also be necessary, despite limitations on personnel and funding, to maintain the current network software and service level of operations while converting the software. The conversion risk will be further compounded by VBA’s need to address the year-2000 issue. Replacement of the Payment System. In addition to the previously mentioned risks associated with the replacement of the payment system, we believe that VBA did not adequately consider alternative approaches for achieving the reliability and additional functionality expected in the replacement. The Federal Information Resources Management Regulation requires that agencies use their systems requirements as the basis for analyzing alternatives, commensurate with the size and complexity of the agency’s business needs. The regulation stipulates that agencies should calculate the total estimated cost of each feasible alternative and assess the risks. Further, VBA recently acquired excess computing equipment from GSA to replace some of the equipment at Hines and Philadelphia. According to staff at both centers, the excess equipment is more reliable, has greater capacity, and is less expensive to maintain. This newer equipment allows VBA more time to analyze and assess alternatives because it makes the computing environment more stable. Lastly, critical to VBA’s ability to identify the true return on any of these information technology initiatives is the need for accurate and reliable cost information.
Our analysis of VBA’s modernization obligations to date shows that the cost of these activities may be understated because VBA lacks a managerial cost-accounting system to track payroll benefits and indirect costs associated with modernization. VBA also appears to have miscategorized some items in its information technology budget as nonmodernization items when, in our opinion, they were modernization-related and should have been categorized in that way. In addition, VBA has not updated its modernization life-cycle cost estimate of $478 million in over 3 years. Therefore, precisely how much VBA’s systems modernization effort will ultimately cost taxpayers remains uncertain. VBA’s chief financial officer is currently in the process of developing guidance for implementing a cost-accounting methodology. Our work indicates that VBA has much to do to develop an investment strategy that can assure the Congress that scarce information technology dollars are being spent on the highest priority projects with the greatest potential for a substantial return on investment. The recent acquisition of excess equipment now provides VBA with an opportunity to effectively develop this kind of approach. VBA must expeditiously develop an effective investment process for selecting, controlling, and evaluating information technology initiatives in terms of cost, capability of the system to meet requirements, risk, timeliness, and quality; give top priority to addressing the year-2000 problem; and improve its accounting of obligations and costs associated with the modernization. Once technology investment processes have identified the most beneficial information technology projects in terms of cost, benefit, and return, the focus then shifts to the technical capabilities necessary to make the projects a reality. The agency must be able to quickly determine whether it has the necessary in-house capability to develop the software for the new system or whether this development should be performed by an experienced contractor. In order to mitigate any risk of not being able to deliver high-quality software within schedule and budget, agencies must have a disciplined and consistent software-development process. Software development has been identified by many experts as one of the most risky and costly components of systems development. To evaluate VBA’s software development processes, we applied the Software Engineering Institute’s software capability evaluation methodology to those projects identified by VBA as using the best development processes. This evaluation compares agencies’ and contractors’ software development processes against the Institute’s five-level software capability maturity model, with 5 being the highest level of maturity and 1 being the lowest. As shown in attachment 2, these levels—and the key process areas described within each—define an organization’s ability to develop software, and can be used to measure improvements in this area. On the basis of our analysis, we determined that VBA is operating at a level-1 capability, defined as ad hoc and chaotic. At this level, VBA cannot reliably develop and maintain high-quality software on any major project within existing cost and schedule constraints, placing VBA’s modernization at significant risk. In this context, VBA relies solely on the various capabilities of individuals rather than on an institutional process that will yield repeatable, or level-2, results.
VBA does not satisfy any of the criteria for a level-2 capability, the minimum level necessary to be able to significantly improve productivity and return on investment. For example, VBA is weak in the requirements management, software project planning, and software subcontract management areas, with no identifiable strengths or planned improvement activities. However, VBA can build upon its strengths in the software configuration-management and software quality-assurance areas. Our report on this matter is being issued soon and will contain recommendations to better position VBA to develop and maintain its software successfully and to protect its software investments. Specifically, we recommend in that report that VBA (1) obtain expert advice to improve its ability to develop high-quality software and expeditiously implement a plan that describes a strategy for reaching the repeatable (i.e., level-2) level of process maturity, (2) delay any major investment in new software development—beyond what is needed to sustain critical day-to-day operations—until the repeatable level of process maturity is attained, and (3) ensure that any future contracts for software development require the contractor to have a software development capability of at least level 2. VBA agreed with all but one recommendation. VBA agreed that a repeatable level of process maturity is a goal that must be attained, but disagreed that “all software development beyond that which is day-to-day critical must be curtailed.” VBA stated that the payment system replacement projects and other activities to address the change of century must continue. We agree that the software conversion and development activities required to address issues such as the year 2000 must continue; we would, in fact, characterize these as sustaining critical day-to-day operations. However, systems-development initiatives in support of major new projects, such as the replacement of the payment system, should be reassessed for the risk of potential delays, cost overruns, and shortfalls in anticipated system functions and features. We are pleased to see that VBA is already initiating positive actions relating to our other recommendations, including acquiring expert advice to assist it in improving its ability to develop high-quality software, consistent with criteria set forth by the Software Engineering Institute. The business and operational problems facing VBA are complex and not easy to resolve. VBA has begun to take action to improve agency operations and service delivery, but it has not yet implemented enough of the right kinds of actions—actions that involve developing a sound business strategy and the supporting plans, approaches, and measures to guide it into the next century. More rigorous management and technical methods are critical if VBA is to successfully develop modern, efficient, and cost-effective business processes and computer systems that will allow it to deliver truly improved services to veterans. Mr. Chairman, this completes my testimony this morning. I would be pleased to respond to any questions you or other members of the Subcommittee may have at this time.
According to March 2015 Current Population Survey (CPS) data, an estimated 526,000 workers were employed in the animal slaughtering and processing industry. There were about 5,350 meat and poultry plants in the United States as of September 2015, of which around 1,100 were slaughter and processing plants, according to the U.S. Department of Agriculture (USDA) (see fig. 1). In 2014, more than 30 million beef cattle, 100 million hogs, 200 million turkeys, and 8 billion chickens were slaughtered in the United States, according to USDA’s National Agricultural Statistics Service data. Meat and poultry plants are generally designed for an orderly flow from point of entry of the living animal to the finished food product. Typically, the animal is brought to the meat or poultry plant and taken to the kill floor area, where the slaughter occurs. Workers and machines behead and eviscerate the animal, among other things, after which it is chilled for several hours. Inspectors from USDA’s Food Safety and Inspection Service (FSIS) ensure that the carcass meets federal food safety standards. Workers and machines next process the carcass and may break it into small portions that can be transported directly to supermarkets. Slaughter and processing of meat and poultry require workers to perform a high number of repetitive motions. Although plants have increased automation, much of the work is still done by hand through the use of saws, knives, and other tools (see fig. 2). Workers may sustain many different types of injuries at meat and poultry plants (see fig. 3). To carry out its responsibilities under the Occupational Safety and Health Act of 1970 (OSH Act), the Occupational Safety and Health Administration (OSHA) establishes workplace safety and health standards, conducts inspections, investigates complaints from workers and reports of fatalities and serious injuries at worksites, and provides training and outreach, among other activities. To supplement its enforcement efforts, OSHA offers cooperative programs to help employers prevent injuries, illnesses, and fatalities in the workplace. OSHA conducts inspections in response to imminent danger, fatalities, catastrophic events such as hospitalizations, and worker complaints, and also selects worksites for programmed inspections based on injury incidence rates, previous citation history, or random selection. OSHA is directly responsible for setting and enforcing these standards for private sector employers, including meat and poultry plants, in 29 states, the District of Columbia, and 4 U.S. territories. The remaining 21 states and 1 territory have assumed responsibility for workplace safety and health under an OSHA-approved state plan. These “state-plan states” adopt and enforce their own standards (which must be “at least as effective” in providing safe and healthful employment as the federal standards). The OSH Act and OSHA’s regulations require covered employers to prepare and maintain records of certain injuries and illnesses sustained by their workers. Specifically, non-exempt employers are required to record information about every work-related death and each new work-related injury or illness that results in loss of consciousness, days away from work, restricted work or transfer to another job, or medical treatment beyond first aid. OSHA has established three different forms for employers to record injuries and illnesses: the Form 300 Log of Work-Related Injuries and Illnesses (log), the Form 301 Injury and Illness Incident Report (incident report), and the Form 300-A Summary of Work-Related Injuries and Illnesses.
For each recordable injury or illness, the employer must record specified information on the log, including the worker’s name, job title, date of injury or illness, a brief description of the injury or illness, and, if applicable, the number of days the worker was away from work, assigned to restricted duties, or assigned to another job as a result of the injury or illness. Employers must also classify the injury or illness according to categories provided on the OSHA log. These categories include injury, skin disorder, respiratory condition, poisoning, hearing loss, and “all other illnesses.” In addition to the log, for each case employers must prepare an incident report, which includes descriptive information about the case, including details about the injury or illness, how it occurred, and the treatment provided. Finally, employers are also required to prepare a summary of all injuries and illnesses annually, which is to be posted at the workplace. Although these three forms are not routinely provided to OSHA, they must be kept for 5 years and provided upon request in certain circumstances, such as during an OSHA inspection or in response to the Bureau of Labor Statistics’ (BLS) Survey of Occupational Injuries and Illnesses (SOII). In addition, all covered employers, including those exempt from the routine recordkeeping requirements, must report all work-related fatalities to OSHA within 8 hours and all work-related in-patient hospitalizations, amputations, or losses of an eye within 24 hours. With respect to federal employers, such as USDA, each federal agency is generally required to establish and maintain a comprehensive and effective occupational safety and health program that is consistent with OSHA’s standards. The mission of USDA’s Safety and Health Management Division is to develop department-wide policies and promote and assist the development of USDA safety programs. USDA’s FSIS occupational safety and health program has safety and health committees that may analyze injury and illness data to identify the cause of an injury and develop preventative measures, among other things. FSIS safety and health specialists investigate safety concerns of FSIS inspectors in meat and poultry plants. BLS is responsible for collecting and distributing statistical information on issues related to labor, and one of the studies it conducts is the SOII. Employers’ OSHA logs are the main source of data for the annual SOII. In addition to collecting information on all recorded injuries and illnesses, the survey, which draws from a sample of about 230,000 employers, requests detailed case data from employers for injuries or illnesses that resulted in at least 1 day away from work. This detailed case data includes information on the type, or nature, of the injury or illness and the exposure, or event, that caused it. OSHA officials told us that they use these data to help them develop national and regional emphasis programs that focus on specific industries or worksite hazards, and to select high hazard workplaces to receive OSHA support and assistance. Within the Department of Health and Human Services (HHS), the Centers for Disease Control and Prevention’s (CDC) National Institute for Occupational Safety and Health (NIOSH) is the federal agency that conducts occupational safety and health research and workplace evaluations, and makes recommendations to prevent worker injuries and illnesses. At the request of employees, employee representatives, or employers, NIOSH may conduct a health hazard evaluation at a work site, such as a poultry plant, to determine if health hazards—such as chemical exposure or ergonomic hazards—are present.
NIOSH provides assistance and information by phone and in writing to the requester and may visit the workplace to assess exposure and employee health. USDA, under the Federal Meat Inspection Act and the Poultry Products Inspection Act, is responsible for ensuring the safety and wholesomeness of meat and poultry products that enter interstate commerce. In 2013, over 3,700 USDA FSIS inspectors worked in meat and poultry plants to provide continuous inspection of each meat and poultry carcass and its parts. Among other regulations, USDA sets maximum line speeds for slaughter plants in order to allow FSIS inspectors sufficient time to perform proper inspection procedures. Injury and illness rates of total recordable cases in the meat and poultry industry declined from an estimated 9.8 cases per 100 full-time workers in calendar year 2004 to 5.7 cases in 2013, according to BLS data (see fig. 4). The decline is comparable to that for all U.S. manufacturing, which dropped from an estimated 8.2 cases to 5.0 cases per 100 full-time workers. However, the rates in the meat and poultry industry remained higher than those of manufacturing from 2004 through 2013. While injury and illness rates have declined in the meat and poultry industry, meat workers sustained a higher estimated rate of injuries and illnesses than poultry workers from calendar years 2004 through 2013, according to BLS data (see fig. 5). For example, in calendar year 2013 there were an estimated 7.8 cases per 100 full-time workers in meat slaughter and 5.4 cases in meat processing, compared to an estimated 4.5 cases in poultry slaughter and processing. The highest rates of injuries that resulted in days away from work in 2013 fell under the category of traumatic injuries—defined by BLS as injuries occurring from a single event over the course of a work shift—and included sprains, strains, and tears (see table 1). BLS collects data on injuries and illnesses that resulted in days away from work in order to understand the types of injuries and illnesses occurring and the events leading to them. BLS reports these data per 10,000 full-time workers—versus the rate per 100 full-time workers that is used for all injuries and illnesses. We are unable to show rates for these types of injuries over the past 10 years because BLS’s changes to some injury classifications in 2011 prevent direct comparisons over time. (Additional information on injury and illness rate estimates is contained in appendix I.)
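For reference, BLS computes these incidence rates by normalizing case counts to hours worked, with 200,000 hours representing 100 full-time workers (100 workers at 40 hours per week, 50 weeks per year). The sketch below is our own illustration of that arithmetic, not BLS code, and the case counts in the example are hypothetical.

```python
# Illustration of the BLS incidence-rate formula: rate = (N / EH) * base,
# where N is the number of cases, EH is total hours worked by all employees,
# and the base is 200,000 hours (100 full-time workers) or 20,000,000 hours
# (10,000 full-time workers, used for days-away-from-work case rates).

def incidence_rate(cases: int, hours_worked: float, per_workers: int = 100) -> float:
    base_hours = per_workers * 2_000  # 2,000 hours = one full-time worker-year
    return cases * base_hours / hours_worked

# Hypothetical example: 570 recordable cases among the equivalent of
# 10,000 full-time workers (20 million hours worked).
print(incidence_rate(570, 20_000_000))          # 5.7 cases per 100 workers
print(incidence_rate(570, 20_000_000, 10_000))  # 570.0 cases per 10,000 workers
```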
The events that led to injuries or illnesses that resulted in days away from work also varied (see table 2). In calendar year 2013, “overexertion and bodily reaction,” a term BLS uses to capture injuries and illnesses resulting from activities such as overexertion when lifting and repetitive motion, was cited most frequently as the event that led to an injury (an estimated 40.1 cases per 10,000 full-time workers). This is consistent with the findings in our 2005 report that back sprains and strains among meat and poultry workers can be caused by lifting heavy objects or repetitive lifting of lighter objects. Some injuries have resulted in fatalities. According to BLS fatality data, 151 meat and poultry workers sustained fatal injuries in calendar years 2004 through 2013. Over that time, transportation incidents were the most frequent cause of death. For example, in calendar years 2011 through 2013, 46 meat and poultry workers sustained fatal injuries and 19 of these fatalities were caused by transportation incidents, such as being struck by a vehicle. Other causes of fatalities included violence from a person or animal, contact with objects or equipment, and exposure to harmful substances or environments. Meat and poultry workers experienced higher illness rates than other manufacturing workers (see fig. 6). In calendar year 2013, there were an estimated 159.3 cases per 10,000 full-time meat and poultry workers, compared to an estimated 35.9 cases for manufacturing overall. To better understand illness rates, OSHA classifies total recordable cases of illnesses into five categories, such as skin diseases and respiratory conditions, which BLS reports per 10,000 workers. In the meat and poultry industry, illnesses accounted for over one-fourth of all reported injury and illness cases in calendar year 2013. According to BLS’s website, working conditions can be difficult in the meat and poultry industry because workers are exposed to hazards that may lead to an injury or an illness. In 2013, BLS categorized the poultry industry (104.2 cases per 10,000 workers) and part of the meat industry—animal (except poultry) slaughtering (319.7 cases per 10,000 workers)—as high-rate industries for illnesses because these industries had the highest incidence rates of total illness cases among industries with at least 500 cases. USDA data show that its inspectors experience injuries and illnesses similar to those experienced by other meat and poultry workers. According to USDA’s 2014 workers’ compensation claims data, falls, slips, and trips were the most frequent causes of injuries among meat and poultry inspectors. USDA inspectors at plants we visited told us injuries and illnesses among inspectors vary, depending on whether they work in a meat or poultry plant. Specifically, inspectors told us that compared to inspectors in poultry plants, inspectors in meat plants sustain more cuts or lacerations because they make several cuts during hog and cattle inspections, while poultry inspections generally do not require any cuts to animal carcasses. Additionally, they said inspectors in poultry plants sustain more repetitive motion injuries due to faster line speeds. Some inspectors experience respiratory ailment symptoms due to chlorine used in poultry plants, according to USDA inspectors. Since our findings in 2005 on meat and poultry workers facing hazardous work conditions, NIOSH health hazard evaluations and academic studies have found that meat and poultry workers continue to face the types of hazards we cited, including hazards associated with musculoskeletal disorders, chemical hazards, biological hazards from pathogens and animals, and traumatic injury hazards from machines and tools. NIOSH’s findings are generally supported by OSHA documents and academic literature we reviewed, as well as by statements from workers and worker advocacy groups. (See appendix II for more information on NIOSH’s findings from the eight health hazard evaluations of poultry plants we reviewed.) In addition, other factors, such as employer emphasis on safety, worker training, and line speeds, may affect hazards and the risk of injuries and illnesses, according to literature we reviewed and the workers and officials we interviewed from federal agencies, the meat and poultry industry, and worker advocacy groups.
(Text box: NIOSH evaluation methods included assessing the hand activity and force used by workers and reviewing lab testing related to other causes of eye and respiratory irritation.) After NIOSH completes an evaluation, the agency typically makes recommendations to the employer on how to reduce or eliminate identified hazards and prevent related injuries and illnesses. According to officials, NIOSH disseminates the results of its evaluations as broadly as possible to help make industry-wide improvements even though evaluations focus on individual plants. According to NIOSH’s 2014 annual report, NIOSH received 209 requests and completed 33 field investigation reports and 118 consultation letters. Meat and poultry work continues to require forceful exertions, awkward postures, and repetitive motions for many job tasks, which can lead to injuries. In a 2015 health hazard evaluation of a poultry plant, NIOSH reported 59 percent of the 32 job tasks evaluated—from receiving to deboning—had average levels of hand activity and force above the American Conference of Governmental Industrial Hygienists threshold limit value, and carpal tunnel syndrome among workers likely resulted from repetitive motion and the forceful nature of these job tasks. Similarly, in a 2014 health hazard evaluation of a poultry plant, NIOSH found 41 percent of participants worked in jobs that had levels of hand activity and force above the American Conference of Governmental Industrial Hygienists threshold limit values. In a 2008 NIOSH health hazard evaluation of a turkey plant, NIOSH found that hanging and unloading racks of turkey franks (hot dogs) during processing increased the risk of musculoskeletal disorders due to awkward postures, repetitive motions, and heavy lifting. According to the evaluation, in raw and cooked production, workers hung and removed franks from racks on 50-inch metal rods weighing up to 38 pounds, and reported discomfort in their backs and shoulders. NIOSH’s recommendations included job redesign and job rotation from lifting to non-lifting tasks to alleviate these hazards. Workers we interviewed also said that the repetitive nature of meat and poultry work leads to injuries. For example, one meat worker with more than 20 years of experience told us he almost constantly experiences discomfort and pain in his hands and that he only gets relief when he is not working. Chemicals are a hazard in meat and poultry plants because they can create a harmful environment if they accumulate within an enclosed space. Findings from two NIOSH health hazard evaluations suggested that exposure to chlorine may be associated with self-reported symptoms of respiratory illness or eye irritation. In its 2012 evaluation, NIOSH found that employees in an exposed group were more likely to report certain work-related symptoms than employees in an unexposed group, including chest tightness; sneezing; blurry vision; and burning, itchy, or dry eyes. NIOSH also found that while chlorine levels met USDA requirements, chlorine-related by-products called chloramines were often implicated as a more likely cause of irritation. According to NIOSH, there is no valid air sampling method to consistently detect levels of this by-product in plants. Hazardous chemicals in meat and poultry plants also include ammonia, which is used as a refrigerant. For example, a state OSHA official told us process safety management related to ammonia handling is among the top three violations in the meat and poultry industry.
An OSHA regional official said common injuries in the meat and poultry industry stem from chemicals such as chlorine and ammonia, among other things. Peracetic acid, an antimicrobial agent used to kill bacteria on poultry carcasses, may be harmful to workers. In November 2011 and January 2012, OSHA inspected a poultry plant after the death of a USDA inspector who worked there; the inspection included chemical sampling at the plant. A regional OSHA official told us that OSHA suspected chemical exposure as the cause of death for the USDA inspector. According to OSHA and USDA officials, OSHA was unable to attribute the cause of death to any work-related conditions. In a June 2014 USDA letter to OSHA, USDA stated that it conducted additional air sampling at the poultry plant and did not detect any antimicrobial chemicals. However, according to an OSHA 2014 news release, OSHA cited the plant for, among other violations, failure to provide employees with information and training about the hazards of products that contain peracetic acid and bleach, as required by OSHA’s hazard communication standard. This citation was upheld by the Occupational Safety and Health Review Commission. The administrative law judge who upheld the hazard communication citation noted that employees told the OSHA compliance officer they had experienced respiratory ailment symptoms and rashes consistent with the exposure symptoms described in the chemical manufacturer’s safety data sheets, but the employer failed to train workers on chemical hazards, according to OSHA. Meat and poultry workers continue to be exposed to biological hazards associated with handling live animals, including contact with feces, blood, and bacteria, which can increase their risk for many diseases, according to a NIOSH evaluation and investigations we reviewed. In a 2012 health hazard evaluation, NIOSH investigated exposure to the pathogen Campylobacter in a poultry plant and found gastrointestinal illness appeared to be common, yet underreported, based on interviews with workers. In the live hang area at poultry plants, workers lift live poultry from the supply conveyor belt and hang the birds by their feet from a shackle conveyor belt. In doing so, workers can be covered with poultry feces and dust that can carry pathogens and other disease-causing agents, according to OSHA. NIOSH observed that the 20 air vents above the heads of the live hang area employees could spread contamination, and it advised the plant to modify the supply vents. NIOSH also observed inconsistent hand hygiene and use of personal protective equipment in the area and recommended the plant provide training to all employees. In response, the plant instituted a monthly safety training meeting; offered computerized training in English and Spanish, including a competency test; and provided required personal protective equipment at no cost to employees, including smocks and safety glasses, as well as optional respirators and face shields in the live hang area. According to NIOSH, the number of plant employees with confirmed cases of Campylobacter infection dropped from 21 in 2011 to 6 in 2013 once these preventative measures were implemented. In 2007, NIOSH assisted CDC and the Indiana, Minnesota, and Nebraska departments of health in their investigations of a progressive neurological disorder among workers in three hog slaughter plants, and in 2008 NIOSH conducted a health hazard evaluation at the hog slaughter plant in Minnesota.
These plants had replaced saws with compressed air devices to reduce the risk of amputation, but the devices increased brain tissue splatter, causing a neurological disorder in several workers when they inhaled the animal matter, according to state officials. According to state and NIOSH investigators, workers at two of the plants also said line speed was a factor because the faster speeds meant they were unable to place the skulls completely on the device before triggering the compressed air, causing greater splatter. According to state officials, no new cases emerged after the three plants discontinued use of compressed air devices and the brain removal job task. Dangerous machines and tools remain a hazard within the meat and poultry industry, according to OSHA officials, workers we interviewed, and an academic study we reviewed. According to OSHA, moving machine parts can cause severe workplace injuries, such as crushed fingers or hands, amputations, burns, or blindness. OSHA officials we spoke with cited a lack of machine guarding—safety features on manufacturing equipment to prevent contact with the body or to control hazards from the machine—as a top safety violation at meat and poultry plants. Workers we spoke with experienced injuries from this hazard. For example, one meat worker showed us his scarred hand and said it had been caught in a machine, which crushed his finger and removed skin, necessitating a skin graft. Another worker’s apron was caught in a machine, which pulled her arm in before the machine could be turned off. As a result, she told us she can no longer work or perform daily activities with that arm. In addition to machinery, meat and poultry workers frequently use tools such as sharp knives, hooks, and saws. An academic study we reviewed examining the incidence of injuries, lacerations, and infections among poultry and pork processing workers employed by 10 companies found sharp tools were most frequently reported as sources of lacerations. A former meat worker we interviewed said he was injured twice by a neighboring worker’s hook when the other worker moved too close to him while trying to perform his task (see fig. 7). Emphasis on worker safety, training, and line speeds may affect the risk of injuries and illnesses in the meat and poultry industry, but the underlying conditions remain, according to literature we reviewed, NIOSH health hazard evaluations, and interviews with federal officials, workers, and representatives of worker advocacy and industry groups. Emphasis on worker safety: Emphasis placed on worker safety is a factor affecting workplace hazards, according to workers we interviewed and representatives from worker advocacy and industry groups. Some workers told us plants do not emphasize safety even when workers complain about hazardous conditions, but workers from two plants we visited said their company has a strong emphasis on worker safety. In at least half of the NIOSH health hazard evaluations we reviewed, NIOSH recommended or encouraged implementing worker safety programs or OSHA’s safety guidelines to help resolve identified hazards. Industry officials and a worker advocacy group told us plants should emphasize safety because it is in their best interest. Representatives from a worker advocacy group and industry officials told us that larger employers in the meat and poultry industry tend to have better worker safety practices than smaller ones. 
Representatives of meat and poultry industry associations also highlighted the implementation of worker safety programs in some plants over the last 20 years. In one study we reviewed, the authors suggested that workplace safety practices—such as the importance of safety to management, worker training, and proper use of safety equipment—can be modified to improve hazardous conditions in poultry plants. Training: Worker training is critical to mitigating hazards and ensuring safety in the meat and poultry industry, but it remains a challenge, according to industry officials and workers with whom we spoke. In at least half of the NIOSH health hazard evaluations we reviewed, NIOSH recommended implementing proper training of workers. However, industry officials said providing proper training can be a challenge because of different languages spoken by workers. For example, staff at two plants we visited said there are at least 20 languages spoken in their plants. At most of the plants we visited, managers told us that workers receive training during orientation and additional training may include annual training and working side-by- side with an experienced worker on the production line. Workers told us new hires receive video training on hazards and personal protective equipment, and acknowledge receipt of this training by signing an attestation document. Some meat and poultry workers told us the training is not always adequate. A hog plant worker said supplementary training should be provided on the job and at slower line speeds to ensure workers know how to do their jobs properly. One study we reviewed found that when workers in Nebraska and Iowa hog plants used an alternative method to accomplish a task, such as using different equipment, or performed a task in a different location within the plant, it was associated with increased risk of lacerations. The authors recommended expanded training and evaluation of tool sharpening procedures. Line speed: High line speeds resulting from increased automation and other factors may exacerbate hazards, according to plant workers and worker advocacy groups. In 2013, 15 stakeholder groups petitioned OSHA and USDA, asking OSHA to establish a “work-speed” workplace safety and health standard—a regulation that would set the number of animals or products processed per minute on a production line in relation to staffing levels—to protect workers in the meat and poultry industry. The petition also requested that USDA and OSHA ensure that worker safety be protected in any rulemaking related to line and work speeds in this industry. USDA acknowledged receipt of the petition in 2013 and officials told us the agency made several changes to the poultry inspection final rule that addressed some of the issues in the petition, namely not increasing the maximum evisceration line speed in young chicken plants. In 2015, OSHA denied the petition and cited limited resources as its reason for not conducting a comprehensive analysis and rulemaking. Plant workers told us that meat and poultry plants are primarily concerned with production, and employers do not want the line to slow down even when the plant is understaffed. Industry officials we met with disagreed. According to representatives of a meat industry trade association, staffing is typically increased when line speed increases, and it is important to staff the line so that plant workers and USDA inspectors can accomplish all work tasks effectively. 
According to NIOSH officials, increasing line speed and the number of workers may increase the risk of “neighbor cuts” due to workers’ close proximity. OSHA and NIOSH officials told us line speed—in conjunction with hand activity, forceful exertions, awkward postures, cold temperatures, and other factors such as rotation participation and pattern—affects the risk of both musculoskeletal disorders and injuries among workers. NIOSH examined the effect of increased evisceration line speed on worker safety at one plant in a 2014 health hazard evaluation, but the agency could not draw conclusions about its impact. Specifically, NIOSH stated in a 2014 letter to USDA that it could not draw conclusions on line speed and safety because, among other things, the amount of time between the first and second visits (10 months) was not sufficient for a change in workers’ health to appear, and the manner in which the plant modified the production lines resulted in no change in exposure to risk factors for musculoskeletal disorders for any individual worker. NIOSH stated that the plant’s consolidated evisceration lines resulted in a reduction in the number of birds processed because the plant combined two separate lines at 90 birds per minute into one line operating at approximately 170 birds per minute. In a 2015 health hazard evaluation, NIOSH found hand activity and force above recommended levels, as noted above, and after the evaluation the plant automated several jobs; however, the agency concluded that musculoskeletal disorder risks remain for many workers. Workers and employers may underreport injuries and illnesses in the meat and poultry industry because of worker concerns over potential loss of employment, and employer concerns over potential costs associated with injuries and illnesses, according to federal officials, worker advocacy groups, and studies. As a result, the injury and illness rates discussed in the previous section may not reflect complete data. In 2009, we reported on concerns about underreporting across all industries, including discrepancies between BLS’s annual survey used to calculate injury and illness rates and other data such as medical records. Due to concerns about reporting and also in response to findings and recommendations from our work in 2005 and 2009, OSHA undertook its Injury and Illness Recordkeeping National Emphasis Program. For this program, OSHA inspected recordkeeping and reporting accuracy in a nongeneralizable sample of over 300 establishments, primarily in industries with high average rates of injuries and illnesses. A 2013 analysis of data from this program indicates that OSHA identified reporting errors at establishments it inspected, but the prevalence of underreporting cannot be determined based on these data. While OSHA and BLS recognize that underreporting exists, the extent is unknown. Underreporting continues to occur in the meat and poultry industry, according to worker advocacy groups and selected OSHA hazard alert letters we reviewed. Some meat and poultry workers may be less likely to report injuries and illnesses because of their vulnerable status as undocumented or foreign-born workers, according to federal officials and representatives of worker advocacy groups we interviewed. About 28.7 percent of meat and poultry workers were foreign-born noncitizens in 2015 compared to about 9.5 percent of all manufacturing workers, according to CPS data.
The meat and poultry industry has been a starting point for new immigrants, as many jobs require little formal education or prior experience, according to a meat industry trade association. According to an OSHA official, worker advocacy groups, and plant managers at one plant we visited, some employers in the meat and poultry industry recruit refugees—in part, to replace undocumented workers—and some companies hire prison labor. Further, according to data from BLS, the meat and poultry industry had an hourly mean wage of $12.50 and an annual mean wage of $26,010 in 2014. While above the federal minimum wage of $7.25 per hour, these wages are just above the 2014 federal poverty guideline of $23,850 for a family of four.

Workers who face economic pressures or have a tenuous immigration status may fear job loss or deportation if they report or seek treatment for work-related injuries and illnesses, according to federal officials and worker advocacy groups. For example, a community-based doctor told us that soon after he approved some injured meat workers' work restriction requests, they returned and asked him to send a note to their workplace ending their work restrictions because their employer had threatened to fire them if they could not do their jobs. Language barriers can also make it difficult for some of these workers to communicate about and report injuries, according to a worker advocacy group. In addition, NIOSH officials told us that in some cultures someone who reports an injury or illness is considered weak.

Some meat and poultry industry employers may not record worker injuries and illnesses because of certain disincentives, according to federal officials and representatives of worker advocacy groups we interviewed. We previously found that employers generally may not record workers' injuries and illnesses because of disincentives such as fear of increasing their workers' compensation costs or jeopardizing their chances of being awarded contracts for new work. Federal officials and representatives of worker advocacy groups we interviewed told us that some employers in the meat and poultry industry may underreport workplace injuries to keep workers' compensation insurance premiums low. In addition, some employers may underreport to avoid triggering OSHA inspections or to promote the image of a safe workplace, according to a worker advocacy group and managers at one plant we visited. At one meat plant we visited, workers recalled incidents in which supervisors told injured workers they were not hurt and to go back to work rather than report their injury.

NIOSH officials and a worker advocacy group attribute some underreporting in the meat and poultry industry to the lack of paid sick leave, which may cause injured or ill workers to stay on the job so they can get paid. For example, some poultry plants use point systems to track sick days and may penalize workers for taking too many, according to worker advocacy groups. A former meat worker who was injured on the job told us he was suspended for three days after taking time off from work to recover and was later terminated. Workers and representatives of worker advocacy groups told us these systems discourage workers from reporting their injuries and illnesses. OSHA officials also expressed concerns that employer-sponsored safety programs with incentives—such as those that offer rewards for no injuries over time—may pressure meat and poultry workers not to report work-related injuries and illnesses.
Plant health units, which provide certain types of medical assistance to workers with injuries and illnesses at some plants, may also discourage reporting of injuries and illnesses, according to OSHA and worker advocacy groups. In an effort to maintain a clean safety record and avoid recording injuries in their OSHA logs, some plant health units may repeatedly offer first aid treatments—for example, compresses and over-the-counter painkillers and ointments—rather than refer workers to a doctor, according to two OSHA hazard alert letters, worker advocacy groups, and workers we interviewed. We were told about multiple incidents in which meat and poultry workers were punished for visiting the health unit too often or were ignored by health unit staff when they sought further medical care. For example:

- In 2014, OSHA sent a hazard alert letter to a poultry plant, recommending that the plant voluntarily take steps to improve its medical management practices. In the letter, OSHA identified practices that were contrary to good medical practice for managing work-related MSDs, including prolonged treatment by nursing station staff without referral to a physician. The letter included one example in which a worker made over 90 visits to the nursing station before referral to a physician.

- In 2015, OSHA sent a hazard alert letter to another poultry plant, also recommending voluntary improvements to the plant's medical management practices. The letter noted that, based on OSHA's investigation, it appeared the plant used its first aid station to prevent injuries from appearing on the plant's OSHA log, such as by not referring workers to a physician for evaluation or treatment when appropriate.

- One worker told us that after he fell off a platform, the health unit provided ice and denied his request to be referred to a physician for x-rays. When he received an x-ray several days later, it confirmed that he had a fracture.

- A representative of a worker advocacy group told us about an incident in which a nurse gave a worker with an injured wrist some cream and sent him home. The worker sought medical treatment on his own, which confirmed that he had a fractured wrist.

Meat and poultry industry representatives said underreporting is not a major issue, although some employers may not understand all of the reporting requirements. A meat industry trade association we interviewed noted that it organizes seminars on reporting requirements and encourages employers to record all incidents in order to document improvement and avoid OSHA citations. Industry group representatives also stated that the decline in injury and illness rates discussed above is due in part to increased automation and industry efforts to enhance plant safety. OSHA officials concurred that increased automation in the industry has positively affected safety in limited areas of meat and poultry plants.

DOL lacks key information about MSDs in the meat and poultry industry because of the way it gathers information on these conditions. It is particularly challenging to gather data on MSDs because the gradual nature of these injuries makes them harder for workers to recognize and report, according to experts and worker advocacy groups. As discussed earlier, existing federal data and health hazard evaluations suggest that MSDs in the meat and poultry industry are common and can be disabling.
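The incidence figures cited below are SOII estimates expressed per 10,000 workers. As context, BLS typically constructs such rates from case counts and hours worked, using a base of 20,000,000 hours (10,000 full-time-equivalent workers at 2,000 hours per year). A minimal sketch of that construction, using hypothetical totals rather than actual BLS microdata:

    def incidence_rate_per_10000(cases: int, hours_worked: float) -> float:
        """BLS-style incidence rate: cases per 10,000 full-time-equivalent
        workers (10,000 workers x 2,000 hours/year = 20,000,000 hours)."""
        return cases / hours_worked * 20_000_000

    # Hypothetical industry totals, for illustration only.
    print(incidence_rate_per_10000(cases=1_960, hours_worked=1_000_000_000))  # 39.2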
In 2013, the incidence rate of MSDs that resulted in at least 1 day away from work was an estimated 39.2 cases per 10,000 workers in the meat and poultry industry overall and 25.2 cases per 10,000 workers in the poultry industry, according to BLS's SOII. The 2013 incidence rate of carpal tunnel syndrome—an MSD—for cases that resulted in days away from work was an estimated 4.1 cases per 10,000 workers in the meat and poultry industry, compared to 2.1 cases per 10,000 workers for manufacturing overall. A 2015 health hazard evaluation of a poultry plant by NIOSH found that over one-third of the workers who participated in the study had evidence of carpal tunnel syndrome. A 2014 NIOSH health hazard evaluation of poultry plant workers found that over two-thirds of workers interviewed reported experiencing pain, burning, numbness, or tingling in their hands over the preceding 12 months and that over half reported pain, aching, or stiffness in their backs during the same timeframe (see fig. 8). OSHA and worker advocacy groups have also documented the debilitating effects of MSDs. OSHA reports, for example, that MSDs can be painful and disabling and may cause permanent damage to musculoskeletal tissues.

Despite these concerns, DOL lacks information about MSDs in the meat and poultry industry because of how the data are collected. Specifically, BLS's annual SOII collects injury and illness details—such as the type of injury or illness—only for cases that result in workers having to take days off from work. For example, the survey does not collect detailed information on MSDs that resulted in a worker being placed on work restriction, transferred to a different job, or continuing in the same job after medical treatment, making it more difficult to identify and track these MSDs. From 2011 to 2013, BLS conducted a pilot study, for which the SOII was modified to collect data for six selected industries (including food manufacturing) on the case circumstances and worker characteristics of cases where the worker was placed on work restriction or transferred to a different job. This pilot study found that many of the MSDs occurring in the food manufacturing industry—which includes the meat and poultry industry—result in the worker being transferred to another job or restricted from activity in the current job without days away from work. For each calendar year from 2011 through 2013, the BLS study found that far more MSD cases in the food manufacturing industry resulted in job transfer or restricted work than in days away from work. For example, in 2013, the most recent data available, there were about 13,000 cases with job transfer or restricted work in this industry, compared to about 6,000 with days away from work.

The OSHA log, which employers use to respond to BLS's SOII, also does not specifically classify recorded injuries or illnesses as MSDs. For each injury or illness recorded on the log, OSHA requires employers to check off a column indicating whether it is an injury or one of four specified types of illnesses: skin disorder, hearing loss, poisoning, or respiratory condition. Otherwise, the employer is to check "all other illnesses" (see fig. 9). However, the OSHA log does not include a place where employers can check off whether a recorded injury or illness is an MSD. Such information would only be included in the incident report, which is maintained by the employer and generally not sent to OSHA or BLS. Attempting to compile MSD data using individual incident reports would be difficult.
A former OSHA official said the agency added these columns to the log because OSHA determined that tracking these particular conditions was important to overseeing worker safety and health. Having these columns enables OSHA to more easily distinguish specific illnesses and conditions from other recorded cases.

Before 2001, the OSHA log included a column for "repeated trauma" cases, which included some, but not all, MSDs, as well as some non-MSD cases such as hearing loss. OSHA revised its recordkeeping regulations in 2001 and replaced this column with two: one column for MSDs and another for hearing loss. However, the MSD column never went into effect, and in 2003, the agency deleted the MSD column after determining the column was not necessary or supported by the record. Some public commenters had also expressed concern that the column was not necessary, did not clearly define MSDs, and imposed a paperwork burden. Because the column was deleted, the current OSHA log does not specifically classify MSDs, although MSDs must be recorded as injuries or illnesses on the log if they meet the criteria in OSHA's recordkeeping regulations.

In 2010, OSHA again proposed a rule that would have required employers to check off in a separate column on the OSHA log whether an already-recorded injury or illness was an MSD, stating that information generated from the column would improve the accuracy and completeness of national occupational injury and illness statistics, provide valuable industry-specific information to assist the agency in its activities, inform workers and employers, and would not be cost-prohibitive. However, the Department of Labor Appropriations Act, 2012 prohibited any funds from being used for the MSD column proposed rule. The prohibition was extended by the 2013 appropriations act but was not included in subsequent appropriations. Since then, OSHA has not attempted to add an MSD column to the OSHA log.

OSHA officials told us that it is vital to have accurate data on MSDs in the meat and poultry industry, and OSHA stated in its 2010 proposed rule to add a column to track MSDs that data from the column would assist the agency in targeting its inspections, outreach, guidance, and enforcement, among other things. BLS officials told us it would be a significant improvement if there were data that would quantify the extent of MSDs, as current data collection methods fall short. Although they stated they did not see a need for a column, representatives of trade associations for the meat and poultry industry we interviewed agreed that tracking MSDs at the plant level helps employers prevent and respond to these injuries. More MSD data would be helpful to OSHA and researchers, and a column on the OSHA injury log dedicated to MSDs could also make it simpler for employers to calculate their MSD rates, according to representatives of worker advocacy groups. Currently, employers must examine numerous entries in their OSHA injury log to calculate these rates.

According to CDC, the first step in addressing health issues such as injuries is obtaining a full understanding of the extent of the problem. Federal internal control standards also call for accurate and timely recording to accomplish agency objectives. Without improved data on MSDs, BLS's statistics on these conditions will remain limited, and OSHA's efforts to oversee employers and ensure workplace safety and health will continue to be hindered.
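As an illustration of the recordkeeping point above, consider how an employer might tally MSD cases from a log without a dedicated column versus with one. A minimal sketch with hypothetical, illustrative record fields (not OSHA's actual Form 300 layout):

    # Hypothetical OSHA-log-style entries; field names are illustrative only.
    log = [
        {"case": 1, "description": "carpal tunnel syndrome, deboning line", "msd": True},
        {"case": 2, "description": "laceration, left hand", "msd": False},
        {"case": 3, "description": "rotator cuff tendinitis", "msd": True},
    ]

    # Without a dedicated column, the employer must pattern-match free-text
    # descriptions, which is error-prone and easy to undercount.
    MSD_TERMS = ("carpal tunnel", "tendinitis", "sprain", "strain", "rotator cuff")
    by_text = sum(any(t in e["description"].lower() for t in MSD_TERMS) for e in log)

    # With a check-off column, the count is a simple tally.
    by_column = sum(e["msd"] for e in log)
    print(by_text, by_column)  # both 2 here, but free-text matching often misses cases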
DOL does not know the extent to which injuries and illnesses occur among meat and poultry sanitation workers—who may be employed directly by a plant or work for a separate contract sanitation company—because of how data on these workers are collected. Although they labor in the same plants and under working conditions that can be as hazardous as those of production workers, in 2005 we found that sanitation workers employed by contract sanitation companies were not classified by BLS in the SOII as working in the meat and poultry industry. We concluded that, as a result, OSHA was not considering all injuries and illnesses at a plant when selecting plants to be inspected because some worker injuries and illnesses were not included in OSHA logs at those sites. We recommended that DOL require certain plants to provide OSHA with worksite-specific data on injuries and illnesses of workers employed by contract cleaning and sanitation companies so these data could be included in the rates OSHA uses to select plants for inspection. DOL did not implement this recommendation, citing a decision it had already made against requiring employers in the construction industry to collect contract worker data because of the burden to that industry, among other things.

DOL has not taken action to improve data on sanitation workers, despite continued concerns expressed by OSHA that sanitation work, whether performed by plant employees or contracted workers, is one of the most hazardous occupations in the industry. Many sanitation workers work overnight during a plant's "third shift" and are responsible for cleaning floors, machinery, and all product contact surfaces throughout the plant to comply with USDA requirements. Workplace hazards for sanitation workers employed directly by plants and those employed by contract sanitation companies include potential exposure to electrical, mechanical, hydraulic, and other sources of energy and potentially harmful chemicals. In 2013, for example, a 41-year-old sanitation worker was killed when he fell into an industrial blender at a meat plant, according to a fatality investigation report by the Oregon Fatality Assessment and Control Evaluation program of the Oregon Institute of Occupational Health Sciences. In 2015, according to an OSHA citation, a sanitation worker at a poultry plant lost two of his fingertips when a machine he was cleaning was mistakenly turned on. Two weeks later at the same plant, according to the same citation, a 17-year-old sanitation worker lost part of his leg when he was caught in a machine that lacked safety mechanisms.

Another challenge in tracking injury and illness rates among sanitation workers is that even for those workers directly employed by meat and poultry plants (as opposed to those working for a contractor), the plants use different occupational titles for these workers on their OSHA logs. Employers record injured workers' job titles on their OSHA logs; BLS then codes these data in the SOII using a standardized system. BLS officials told us that under this system these workers' occupations may be listed as "janitors and cleaners," "cleaners of vehicles or equipment," or other occupational categories such as "production workers-all other" or "food processing workers-all other." Because these various occupational titles may also cover regular production workers, DOL is not able to determine which injuries and illnesses pertain to meat and poultry sanitation workers.
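A minimal sketch of the coding problem described above, with hypothetical job titles and category assignments (the actual categories depend on how BLS codes each employer's wording):

    # Hypothetical title-to-category coding; assignments are illustrative only.
    TITLE_TO_CATEGORY = {
        "sanitation worker": "janitors and cleaners",
        "third-shift cleaner": "production workers, all other",
        "equipment cleaner": "cleaners of vehicles and equipment",
    }

    injury_records = [
        {"title": "sanitation worker", "injury": "chemical burn"},
        {"title": "third-shift cleaner", "injury": "laceration"},
    ]

    # Once coded, sanitation injuries scatter across categories that also
    # contain non-sanitation workers, so no sanitation-specific rate can be
    # recovered from the coded data.
    for rec in injury_records:
        print(TITLE_TO_CATEGORY[rec["title"]], "->", rec["injury"])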
According to BLS, it also may not be possible to gather separate injury and illness data on those meat and poultry sanitation workers who are employed by contract sanitation companies. Under OSHA's recordkeeping requirements, either the contract sanitation company or the plant may be required to track these workers' injuries, depending on which entity provides day-to-day supervision. As a result, injury and illness data for these workers in BLS's SOII may be coded according to their employer's industry—janitorial services, for example—and would therefore not be captured in injury and illness rates for the meat and poultry industry. Officials at four of the six plants we visited told us that the contract sanitation company they work with maintains the injury log for these workers. Officials at one contract sanitation company told us that both they and the plants with which they contract keep OSHA logs, and that the data the company sends to BLS from its OSHA log are coded under the "janitorial services" industry. BLS officials told us that it may not be possible to require contract sanitation companies to identify the industry of the companies they contract with because many of these companies provide services to a wide variety of businesses.

As a result of how DOL gathers information on meat and poultry sanitation workers' injuries and illnesses, OSHA has little data to work with when determining how to oversee these workers' safety and health. Federal internal control standards call for agencies to track data to help them make decisions and meet their goals. According to OSHA, inaccurate data can lead to misleading conclusions regarding incidence, trends, causation, and effectiveness of abatement strategies. Because of limitations in the BLS data on injuries and illnesses of workers in meat and poultry plants, OSHA cannot fully assess the extent to which it is fulfilling its worker safety mission or successfully carrying out its enforcement and other activities. In addition, the agency may not be doing all it can to ensure sanitation workers are protected from workplace hazards.

Several new developments may make it easier for OSHA to obtain more data on sanitation workers at meat and poultry plants. As of January 2015, employers covered by federal OSHA are required to report all work-related in-patient hospitalizations, amputations, and losses of an eye directly to OSHA within 24 hours. Previously, OSHA received more limited information on amputations and hospitalizations through direct employer reports. Reports on such cases involving meat and poultry sanitation workers may provide OSHA with additional details on injuries to this population. In addition, in October 2015, OSHA initiated two regional emphasis programs for the poultry industry in the southern United States. These programs—along with an ongoing regional emphasis program on poultry industry sanitation workers in the same region—mean OSHA will conduct more poultry plant inspections and gather more data on risks to sanitation and other workers, a former OSHA official told us.

OSHA may also be able to work with NIOSH to gather information about sanitation worker injuries and illnesses. NIOSH officials told us that they recently were able to conduct studies in other industries because OSHA had negotiated their access after issuing citations. OSHA officials agreed that NIOSH reports could be useful to their inspections. NIOSH's last health hazard evaluation of meat and poultry sanitation workers was conducted in 2002.
At that time, NIOSH examined the use of sanitizing agents, such as bleach, in a meat processing plant and analyzed their connection to respiratory disorders among five sanitation workers in that plant. All five sanitation workers reported symptoms consistent with known irritant effects of bleach, such as throat irritation and burning or stinging eyes, and the symptoms disappeared when the use of bleach was discontinued. Since then, NIOSH has not conducted any additional health hazard evaluations of meat and poultry sanitation workers, as the agency generally relies on plant management, workers, or worker representatives to request a health hazard evaluation. However, NIOSH can also self-initiate studies on occupational safety and health issues and may conduct studies in response to requests from federal, state, or local agencies. In the absence of additional studies on meat and poultry sanitation workers, both OSHA and NIOSH may be missing an opportunity to learn more about the nature and extent of sanitation worker injuries and illnesses.

While overall injury and illness rates have decreased since our last report, meat and poultry workers continue to face worksite hazards that put them at risk of severe and lasting injury. Obtaining complete information about injuries and illnesses in the meat and poultry industry continues to be a challenge that affects DOL's ability to calculate accurate rates and ensure safe and healthy workplaces. Recent OSHA inspections suggest that more injuries occur than are reported, although the extent of underreporting is not known, and vulnerable workers such as immigrants and noncitizens may fear for their livelihoods and feel pressured not to report injuries. Our findings raise questions about whether the federal government is doing all it can to ensure it collects the data it needs to support worker protection and workplace safety. Strengthening DOL's data collection on worker injuries and illnesses is the first step toward achieving that goal. Collecting accurate and complete data on MSDs is particularly important because these disorders are common among this workforce and can be severe and debilitating. However, OSHA does not have a cost-effective method for distinguishing MSDs from other recorded cases, hindering OSHA's efforts to ensure workplace safety and health. In addition, OSHA and BLS continue to face challenges determining the rates of injury and illness among meat and poultry sanitation workers. Until DOL is able to gather more complete data on sanitation workers in these plants, it does not have an accurate picture of total injuries and illnesses in the meat and poultry industry, and it cannot know how best to protect these sanitation workers. New developments provide an opportunity for DOL to learn more about the injuries and illnesses suffered by these workers and to develop ways to better track them. NIOSH, the federal agency responsible for researching workplace safety and health, may be well placed to conduct an in-depth study on the injuries and illnesses experienced by this population.

We are making the following three recommendations:

- To strengthen DOL's efforts to ensure employers protect the safety and health of workers at meat and poultry plants, the Secretary of Labor should direct the Assistant Secretary for Occupational Safety and Health, working together with the Commissioner of Labor Statistics as appropriate, to develop and implement a cost-effective method for gathering more complete data on MSDs.
To develop a better understanding of meat and poultry sanitation workers' injuries and illnesses:

- The Secretary of Labor should direct the Assistant Secretary for Occupational Safety and Health and the Commissioner of Labor Statistics to study how they could regularly gather data on injury and illness rates among sanitation workers in the meat and poultry industry.

- The Secretary of Health and Human Services should direct the Director of the Centers for Disease Control and Prevention to have NIOSH conduct a study of the injuries and illnesses these workers experience, including their causes and how they are reported. Given the challenges to gaining access to this population, NIOSH may want to coordinate with OSHA to develop ways to initiate this study.

We provided a draft of this report to the Secretary of Labor, the Secretary of Agriculture, and the Secretary of Health and Human Services for their review and comment. DOL and HHS provided comments, reproduced in appendixes IV and V, respectively. DOL generally agreed with our recommendations and stated that their implementation would make a difference in working conditions in the meat and poultry industry. DOL also noted that it may not be easy to implement our recommendations due to resource constraints. We are pleased that DOL agreed with our recommendations. HHS concurred with our recommendation to have NIOSH conduct a study of the injuries and illnesses of sanitation workers in the meat and poultry industry. HHS noted the previous difficulties NIOSH has had gaining access to these workplaces and the potential resource commitment involved in conducting such a study. In the report we acknowledged the access challenge and noted that OSHA has negotiated access for NIOSH in other industries, which is why we suggested in the recommendation that NIOSH may want to coordinate with OSHA. USDA generally agreed with our findings and recommendations and provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Labor, the Secretary of Agriculture, and the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-7215 or [email protected] or at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

This report (1) describes what is known about injuries, illnesses, and hazards in the meat and poultry industry since we last reported, and (2) examines what, if any, challenges the Department of Labor (DOL) faces in gathering data on injury and illness rates in this industry. To describe what is known about injuries, illnesses, and hazards in the meat and poultry industry since we last reported, we analyzed and reported survey data from DOL's Bureau of Labor Statistics' (BLS) Survey of Occupational Injuries and Illnesses (SOII) for calendar years 2004 through 2013 (the most recent year for which data were available).
The SOII provides estimates of the number and frequency (incidence rates) of workplace injuries and illnesses by industry, as well as detailed case circumstances, such as injury type and event, and worker characteristics for cases that result in days away from work, based on data from logs kept by employers (survey respondents)—private industry and state and local governments. Survey respondents provide counts for all recordable injuries and illnesses under Occupational Safety and Health Administration (OSHA) recordkeeping regulations. Survey respondents also provide additional information for a subset of cases, specifically those that involved at least 1 day away from work. In 2011, the BLS Occupational Injury and Illness Classification System and the definitions of some injuries changed, preventing direct comparison of case characteristics over time. We report estimates of detailed case characteristics for various injuries and illnesses, such as carpal tunnel syndrome, that resulted in days away from work in the most recent calendar year available, 2013. To report SOII data for the meat and poultry industry (using North American Industry Classification System (NAICS) code 31161 for the animal slaughtering and processing industry) and manufacturing overall (NAICS codes 31-33), BLS provided estimates of each industry's injury and illness incidence rates and their associated relative standard errors. All estimates produced from the analysis of the SOII data are subject to sampling errors. We express our confidence in the precision of the results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples the respective agency could have drawn. For estimates derived from BLS's SOII data, we used the agency-provided relative standard errors to estimate the associated confidence intervals. All estimates we report have the associated 95 percent confidence interval provided.

We also reviewed BLS's Census of Fatal Occupational Injuries (CFOI) data for calendar years 2004 through 2013, the most recently available data, to better understand the number of fatalities and their circumstances, including causes, in the meat and poultry industry. The CFOI is a federal-state cooperative program that has been implemented in all 50 states and the District of Columbia since 1992. According to BLS, the CFOI program uses diverse state, federal, and independent data sources to identify, verify, and describe fatal work injuries to ensure counts are as complete and accurate as possible. CFOI compiles a count of all fatal work injuries occurring in the United States during the calendar year. Fatal injury counts exclude illness-related deaths unless precipitated by an injury event. As previously stated, in 2011 the classification systems and definitions of some data elements changed, and this change may not allow comparing CFOI data within specific fatality categories to previous years. Therefore, we reported total fatalities over a 10-year period rather than annual totals within each major fatality category.

To assess the reliability of BLS SOII and CFOI data, we reviewed documents related to the data sources, such as BLS's Handbook of Methods, and we interviewed agency officials knowledgeable about these data.
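Both here and for the OES wage estimates discussed later, we derived 95 percent confidence intervals from agency-published relative standard errors. A minimal sketch of that calculation, assuming the relative standard error is expressed as a percentage of the estimate and a normal approximation applies:

    Z95 = 1.96  # two-sided 95 percent normal critical value

    def ci_from_rse(estimate: float, rse_percent: float) -> tuple[float, float]:
        """95 percent confidence interval from a point estimate and its
        relative standard error: standard error = estimate * RSE / 100."""
        se = estimate * rse_percent / 100.0
        return estimate - Z95 * se, estimate + Z95 * se

    # Illustrative values only, not actual BLS figures.
    print(ci_from_rse(5.7, 3.0))  # about (5.36, 6.04)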
We found that SOII and CFOI data were sufficiently reliable for our purposes of reporting estimated incidence rates of injuries and illnesses in the meat and poultry industry and manufacturing overall, describing injuries and illnesses, and reporting total fatalities in the meat and poultry industry.

We also obtained and reviewed fiscal year 2014 workers' compensation data from USDA's Food Safety and Inspection Service (FSIS) to describe the injuries, illnesses, and hazards experienced by inspectors in meat and poultry plants. USDA's workers' compensation data include injuries and illnesses from workers who filed a workers' compensation form. A limitation of this data source is that workers' compensation data likely undercount injuries and illnesses. To assess the data's reliability, we interviewed agency officials, reviewed documentation on FSIS's workers' compensation program, and checked the data for discrepancies. We found the data were sufficiently reliable for our purposes.

We reviewed literature from peer-reviewed journals, Centers for Disease Control and Prevention (CDC) National Institute for Occupational Safety and Health (NIOSH) health hazard evaluations, and OSHA guidance documents on factors that affected injury and illness rates and hazards in the meat and poultry industry since we last reported. We conducted a literature search for studies that examined factors affecting injury and illness rates, as well as hazards in the meat and poultry industry. Based on our literature review, we reported information from four peer-reviewed studies. To identify studies from peer-reviewed journals, we conducted searches of various databases, such as Web of Science, Scopus, and ProQuest, and requested suggestions from officials we interviewed. We further limited our review to studies on meat and poultry workers only; therefore, we excluded any studies that made comparisons between workers in the meat and poultry industry and other industries. From this review, we identified 19 studies that appeared in peer-reviewed journals between 2005 and 2015. Of the 19 studies, we excluded two that summarized findings from two NIOSH health hazard evaluations we had previously obtained and reviewed. We noted that 8 of the remaining 17 studies relied on a community-based approach to obtain participants rather than recruiting them directly from plants. These studies focused exclusively on a subset of the worker population within the meat and poultry industry, namely women and Hispanic or Latino poultry workers in North Carolina. We included observations from 1 of these 8 studies, which focused on Hispanic poultry workers, but we noted the study's limitations in the report. We included findings from 3 of the other 9 studies: (1) a study on a neurological disorder experienced by workers in three hog plants that illustrated hazards related to animals, (2) a study on lacerations in meatpacking describing hazards related to machines and tools, and (3) a study on laceration injuries experienced by meat and poultry workers employed by 10 companies representing 22 poultry plants and 8 pork plants, which illustrated factors that may affect injury and illness rates in the meat and poultry industry.

We identified and reviewed eight NIOSH health hazard evaluations published from 2007 to 2015 that describe various hazards in poultry plants, as well as factors that may affect injury and illness rates in this industry.
NIOSH officials told us the agency has not conducted evaluations in meat plants similar to those it conducted in poultry plants because it has not received any requests to do so. Findings from the NIOSH evaluations we reviewed are not generalizable to hazards in all poultry processing plants in the United States. We also reviewed OSHA guidance documents on hazards in meat and poultry plants, including OSHA's e-Tool for poultry processing, which details workplace hazards by job task in the poultry industry.

To examine the challenges DOL may face in gathering data on injury and illness rates in this industry, we reviewed relevant federal laws and regulations, as well as OSHA documentation. We also reported BLS data and reviewed documentation on musculoskeletal disorders (MSD), including a pilot study on cases involving job transfer and work restriction using data collected from 2011 through 2013. We obtained and analyzed data on worker demographics from the Current Population Survey (CPS), jointly sponsored by BLS and the Census Bureau, from March 2015, the most recent data available. We assessed the reliability of CPS data by reviewing documentation, interviewing knowledgeable agency officials, and performing electronic data testing, and determined the data were sufficiently reliable for our purposes. Because the CPS estimates are based on probability samples, they are subject to sampling error. For the CPS estimates in this report, we estimated sampling error and produced confidence intervals using the methods provided in the technical documentation of CPS's March 2015 supplement. To report wages of meat and poultry workers, we used estimates of mean annual and hourly wages for slaughterers and meat packers (Standard Occupational Classification code 513023) in the animal slaughtering and processing industry (NAICS 31161) and their associated relative standard errors from BLS's Occupational Employment Statistics (OES) survey data from May 2014. We used the relative standard errors to calculate 95 percent confidence intervals for estimates derived from BLS's OES survey data. We found the BLS and CPS data were sufficiently reliable for our purposes.

We interviewed OSHA officials—including officials from all 10 regional OSHA offices—and FSIS and NIOSH officials. We also interviewed Georgia Tech Research Institute staff who conducted research on sanitation workers in the poultry industry to learn about hazards faced by sanitation workers in the meat and poultry industry. Moreover, to describe challenges in gathering data on sanitation workers, we reviewed a 2002 NIOSH evaluation of sanitation workers and interviewed one sanitation company that provides cleaning services in the meat and poultry industry. Of the two other sanitation companies we approached, one declined to meet with us and the other did not respond to our request.

To respond to both objectives, we interviewed representatives from stakeholder groups and visited several meat and poultry plants. We identified and interviewed 13 stakeholder groups (unions, worker advocacy groups, and industry trade organizations) with sufficient knowledge about worker safety in the meat and poultry industry, based in part on previous work as well as referrals from other stakeholder groups. We also reviewed information obtained from these groups.
These stakeholder groups were the American Federation of Government Employees/National Joint Council of Food Inspection Locals, the Government Accountability Project, Legal Aid of North Carolina, the National Chicken Council, the National Council for Occupational Safety and Health, the National Turkey Federation, Nebraska Appleseed, the North American Meat Institute, Oxfam America, the Southern Poverty Law Center, Student Action with Farmworkers, the United Food and Commercial Workers International Union, and the U.S. Poultry and Egg Association. We attended a meat industry conference on worker safety, as well as a worker safety conference organized by the National Council for Occupational Safety and Health.

Finally, we visited six meat and poultry plants—selected to cover a mix of species (chicken, turkey, hog, and cattle) and states (Missouri, Nebraska, North Carolina, and Virginia), as well as union and non-union plants and two plants that were part of the FSIS pilot project—where we met with plant management, USDA's FSIS management and inspectors, and plant safety and health staff. We also met with current and former workers, who were selected either by unions, worker advocacy groups, or plant managers. The information gathered in these interviews is not generalizable to all plants or workers. To assess DOL's efforts based on the information gathered in interviews and site visits, we used federal internal control standards that call for agencies to track data and to undertake accurate and timely recording to accomplish agency objectives.

We conducted this performance audit from December 2014 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The National Institute for Occupational Safety and Health (NIOSH) conducted eight health hazard evaluations published from 2007 to 2015 that describe various hazards in poultry plants. Table 3 presents a summary of selected findings and recommendations from these health hazard evaluations. Selected findings on hazards are not generalizable to all poultry processing plants in the United States. This table is not intended to be a complete list of NIOSH's findings and recommendations; for more complete information, refer directly to the cited NIOSH health hazard evaluation.

In addition to the contacts named above, Blake Ainsworth (Assistant Director), Mary Denigan-Macauley (Assistant Director), Eve Weisberg (Analyst-in-Charge), Nkenge Gibson (Analyst-in-Charge), Leah English, Monika Gomez, Susan Aschoff, James Bennett, Sarah Cornetto, and Lorraine Ettaro made significant contributions to this report. Also contributing to this report were Diann Baker, Carl Barden, Carol Bray, Angela Clowers, Marcia Crosse, Grant Mallie, Sheila McCoy, John Mingus, and Michelle Sager.

Food Safety: USDA Needs to Strengthen Its Approach to Protecting Human Health from Pathogens in Poultry Products. GAO-14-744. Washington, D.C.: September 30, 2014.

Food Safety: More Disclosure and Data Needed to Clarify Impact of Changes to Poultry and Hog Inspections. GAO-13-775. Washington, D.C.: August 22, 2013.

Workplace Safety and Health: OSHA Can Better Respond to State-Run Programs Facing Challenges. GAO-13-320. Washington, D.C.: April 16, 2013.
Workplace Safety and Health: Further Steps by OSHA Would Enhance Monitoring of Enforcement and Effectiveness. GAO-13-61. Washington, D.C.: January 24, 2013.

Workplace Safety and Health: Multiple Challenges Lengthen OSHA's Standard Setting. GAO-12-330. Washington, D.C.: April 2, 2012.

Workplace Safety and Health: Enhancing OSHA's Records Audit Process Could Improve the Accuracy of Worker Injury and Illness Data. GAO-10-10. Washington, D.C.: October 15, 2009.

Workplace Safety and Health: Safety in the Meat and Poultry Industry, While Improving, Could Be Further Strengthened. GAO-05-96. Washington, D.C.: January 12, 2005.

Food Safety: Weaknesses in Meat and Poultry Inspection Pilot Should Be Addressed Before Implementation. GAO-02-59. Washington, D.C.: December 17, 2001.

Community Development: Changes in Nebraska's and Iowa's Counties with Large Meatpacking Plant Workforces. GAO/RCED-98-62. Washington, D.C.: February 27, 1998.

DOL is responsible for gathering data on workplace injuries and illnesses, including those in the meat and poultry industry, where workers may experience injuries and illnesses such as sprains, cuts, burns, amputations, repetitive motion injuries, and skin disorders. GAO was asked to examine developments since its 2005 report, which found this industry was one of the most hazardous in the United States and that DOL data on worker injuries and illnesses may not be accurate, and recommended that DOL improve its data collection. This report (1) describes what is known about injuries, illnesses, and hazards in the meat and poultry industry since GAO last reported, and (2) examines DOL's challenges gathering injury and illness data in this industry. GAO analyzed DOL data from 2004 through 2015, including injury and illness data through 2013, the most recent data available, and examined academic and government studies and evaluations on injuries and illnesses. GAO interviewed DOL and other federal officials, worker advocates, industry officials, and workers, and visited six meat and poultry plants selected for a mix of species and states. The information gathered in these visits is not generalizable to all plants or workers.

Injury and illness rates in the meat and poultry slaughtering and processing industry declined from 2004 through 2013, similar to rates in all U.S. manufacturing, according to Department of Labor (DOL) data (see figure), yet hazardous conditions remain. The rates declined from an estimated 9.8 cases per 100 full-time workers in 2004 to 5.7 in 2013. However, these rates continued to be higher than rates for manufacturing overall. Meat workers sustained a higher estimated rate of injuries and illnesses than poultry workers, according to DOL data. Centers for Disease Control and Prevention (CDC) evaluations and academic studies have found that workers continue to face the hazardous conditions GAO cited in 2005, including tasks associated with musculoskeletal disorders, exposure to chemicals and pathogens, and traumatic injuries from machines and tools. DOL faces challenges gathering data on injury and illness rates for meat and poultry workers because of underreporting and inadequate data collection. For example, workers may underreport injuries and illnesses because they fear losing their jobs, and employers may underreport because of concerns about potential costs.
Another data gathering challenge is that DOL only collects detailed data for those injuries and illnesses that result in a worker having to take days away from work. These detailed data do not include injuries and illnesses such as musculoskeletal disorders that result in a worker being placed on work restriction or transferred to another job. Further, DOL does not have complete injury and illness data on meat and poultry sanitation workers because they may not be classified in the meat and poultry industry if they work for contractors. Federal internal control standards require agencies to track data to help them make decisions and meet their goals. These limitations in DOL's data collection raise questions about whether the federal government is doing all it can to collect the data it needs to support worker protection and workplace safety.

GAO is making three recommendations, including that DOL improve its data on musculoskeletal disorders and sanitation workers in the meat and poultry industry. DOL, USDA, and CDC concurred with GAO's recommendations.